CN114612654A - Magnetic resonance imaging feature extraction method based on cyclic depth neural network - Google Patents

Magnetic resonance imaging feature extraction method based on cyclic depth neural network

Info

Publication number
CN114612654A
CN114612654A (application CN202210302838.0A)
Authority
CN
China
Prior art keywords
image data
neural network
magnetic resonance
resonance imaging
steps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210302838.0A
Other languages
Chinese (zh)
Inventor
刘向军
熊春玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Weiying Zhejiang Medical Technology Co Ltd
Original Assignee
Zhongke Weiying Zhejiang Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Weiying Zhejiang Medical Technology Co Ltd filed Critical Zhongke Weiying Zhejiang Medical Technology Co Ltd
Priority to CN202210302838.0A priority Critical patent/CN114612654A/en
Publication of CN114612654A publication Critical patent/CN114612654A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a magnetic resonance imaging feature extraction method based on a cyclic deep neural network, which comprises the following steps: performing image reconstruction on blood-oxygen-level-dependent magnetic resonance images; constructing an image data set, dividing it into N groups each containing the same number of images, and selecting one group of image data to label; training the cyclic deep neural network on the labeled group of image data; automatically labeling the next group of image data with the weight information of the trained network; screening and correcting the labeling results, adding the corrected image data to the training set, and repeating until all image data have been used for training; and extracting features from the reconstructed image data with the trained cyclic deep neural network to identify feature regions. The invention combines a feature fusion technique with an attention mechanism to construct the cyclic deep neural network, embedding spatial-domain and channel-domain attention modules for image processing, which greatly improves the recognition efficiency of blood-oxygen-level-dependent magnetic resonance imaging.

Description

Magnetic resonance imaging feature extraction method based on cyclic depth neural network
Technical Field
The invention relates to the technical field of medical imaging, and in particular to a magnetic resonance imaging feature extraction method based on a cyclic deep neural network.
Background
Magnetic resonance imaging (MRI) produces images by exploiting the magnetic resonance phenomenon: hydrogen nuclei in the human body are excited by radio-frequency pulses, gradient fields encode spatial position, a receiving coil picks up the electromagnetic signal carrying the position information, and a Fourier transform of that signal finally reconstructs the image.
Deep learning has been widely applied to image, video, sound, and natural language processing because of its strong learning ability. Recently, the deep convolutional generative adversarial network (DCGAN) has been applied to blood-oxygen-level-dependent magnetic resonance imaging: a convolutional neural network with strong feature extraction capability is introduced to extract local features from different frequency bands of an image, perform local perception, and combine these features into global image features.
However, the image quality produced by existing deep convolutional generative adversarial networks is generally not high. Although their convolutional layers strengthen feature extraction compared with the original generative adversarial network (GAN), they still struggle to generate more complex images with many features, so the generated images do not closely resemble real ones, and the convolutional layers fail to extract the features of lesion regions well. This greatly reduces recognition efficiency and, in severe cases, can even lead to misdiagnosis.
Disclosure of Invention
In view of the above, the present invention aims to construct a cyclic deep neural network by combining a feature fusion technique with an attention mechanism, and to embed multiple spatial-domain and channel-domain attention modules in the network, so as to effectively extract the feature information of image lesion regions and improve the recognition efficiency of blood-oxygen-level-dependent magnetic resonance imaging.
The invention provides a magnetic resonance imaging feature extraction method based on a cyclic deep neural network, comprising the following steps:
S1, performing image reconstruction on the blood-oxygen-level-dependent magnetic resonance images based on the deep convolutional generative adversarial network (DCGAN);
the method for image reconstruction of blood oxygen level dependent magnetic resonance imaging comprises the following steps:
S11, the generator of the DCGAN takes random noise z as input and outputs a generated image G(z); the discriminator of the DCGAN takes the real data x and the generated image G(z) as inputs and outputs D(x) and D[G(z)] respectively;
S12, calculating the discriminator loss function:
D_loss = -(1/m) Σ_{i=1}^{m} [log D(x_i) + log(1 - D(G(z_i)))]   (1),
in formula (1), m denotes the batch size of the magnetic resonance images, i.e., the number of samples drawn at each iteration; the cross entropy of the real samples and the generated samples is computed, and its average over all samples is taken as the discriminator loss function to optimize the discriminator;
S13, calculating the generator loss function:
G_loss = -(1/m) Σ_{i=1}^{m} log D(G(z_i))   (2),
in formula (2), G_loss denotes the generator loss: after the generated data passes through the discriminator, its cross entropy is computed as the loss function of the generator so as to optimize the generator;
S14, training the DCGAN: the generator is required to generate data close enough to reality to deceive the discriminator, while the discriminator is required to distinguish the generated data from the real data, forming a game between the two;
S15, repeating steps S11-S14 until the DCGAN reaches the Nash equilibrium point D[G(z)] = 0.5;
S2, constructing an image data set from the reconstructed magnetic resonance images, dividing it into N groups each containing the same number of images of each class, and selecting one group of image data to label;
S3, using the labeled group of image data selected in step S2 to train the constructed cyclic deep neural network;
the cyclic deep neural network is constructed through the following steps:
S31, constructing a hybrid attention module and embedding it into the ResNet101 backbone of the cyclic deep network structure;
S32, applying the feature pyramid network (FPN) to the ResNet101 of the Faster R-CNN network structure;
S33, constructing a SENet attention module and embedding it into the ResNet101 network structure obtained in step S32 to obtain a pyramid attention network;
S34, repeating steps S31-S33, and training and optimizing the parameters on the image data set;
S4, automatically labeling the next group of image data using the weight information of the cyclic deep neural network trained in step S3;
S5, screening and correcting the labeling results of step S4, adding the image data with corrected labels to the training set, and repeating steps S2-S4 until all image data have been used for training;
S6, extracting features from the reconstructed image data with the cyclic deep neural network trained in step S5, and identifying the feature regions.
Further, the training over all image data in step S5 comprises the following steps:
S51, judging whether all groups of image data have been used for training; if not, repeating steps S4-S5;
S52, judging whether the image data set needs to be expanded; if so, further judging whether new types of detected body parts have been added to the image data, and if so, repeating steps S3-S6; if not, finishing the image data training.
Further, the cyclic deep neural network comprises 14 convolutional layers, 2 pooling layers, and a Softmax layer. The pooling layers are placed after the 4th and 6th convolutional layers; the 8th, 11th, and 14th convolutional layers form a feature pyramid whose feature maps have resolutions of 16 × 16, 8 × 8, and 4 × 4 pixels respectively; the convolutional layers are depthwise separable convolutions comprising 5 groups of 3 × 3 convolution kernels paired with 1 × 1 convolution kernels; each pooling layer is a 2 × 2 max-pooling layer; and the Softmax layer computes the confidence of each lesion category and decides the lesion class.
Further, the construction method of the hybrid attention module comprises the following steps:
S311, designing the spatial domain attention module structure;
S312, designing the channel domain attention module structure;
S313, designing an FPN-based Faster R-CNN structure.
Further, the spatial domain attention module is constructed as follows: the input feature map undergoes global max pooling and global average pooling along the channel dimension, the two results are concatenated along the channel dimension and reduced by a convolution, and a sigmoid activation function generates the spatial domain attention feature map. The calculation formula is:
Y = σ(f^{7×7}([Avgpool(X); Maxpool(X)]))   (3),
in formula (3), X denotes the input feature map of the attention module, Y the output feature map, f^{7×7} a convolution layer with a 7 × 7 kernel, and σ the sigmoid activation function.
Further, the channel domain attention module is constructed as follows: the input feature map undergoes global max pooling and global average pooling, each result is passed through a multilayer perceptron, the two outputs are added element-wise, and a sigmoid activation function generates the channel domain attention feature map. The calculation formula is:
Y = σ(MLP(Avgpool(X)) + MLP(Maxpool(X)))   (4),
in formula (4), X denotes the input feature map of the attention module, Y the output feature map, MLP the multilayer perceptron, and σ the sigmoid activation function.
Further, the FPN-based Faster R-CNN structure is designed as follows:
the feature map is traversed with a 3 × 3 sliding anchor window to generate anchor boxes and Proposals for target candidate box prediction; during training of the region proposal network (RPN), a candidate whose intersection over union (IoU) with a ground-truth box exceeds 0.7 is given a positive label (a lesion region as target), and a candidate whose IoU is below 0.3 is given a negative label (a normal region);
the Proposals are generated as follows: according to the area w × h of each Proposals box, each box is mapped to the corresponding feature layer P_k for ROI Pooling feature extraction, where k is calculated as:
k = ⌊k_0 + log_2(√(w × h) / 224)⌋   (5),
in formula (5), k_0 = 4, and w and h are the width and height of the Proposals box.
The invention also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the magnetic resonance imaging feature extraction method based on the cyclic deep neural network.
Compared with the prior art, the invention has the beneficial effects that:
the invention adopts a method of combining a feature fusion technology and an attention mechanism to construct a circulation depth neural network, and multiple attention modules are embedded in the circulation depth neural network based on a space domain and a channel domain to perform image processing, so that the feature information of a lesion area can be effectively extracted, and the identification efficiency of magnetic resonance imaging is greatly improved.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
In the drawings:
FIG. 1 is a flow chart of the magnetic resonance imaging feature extraction method based on a cyclic deep neural network according to the present invention;
FIG. 2 is a schematic diagram of a computer device according to an embodiment of the present invention;
FIG. 3 is a flowchart of the training over all image data (steps S51-S52) according to an embodiment of the present invention;
FIG. 4 is a flow chart of the method of constructing the cyclic deep neural network according to an embodiment of the present invention;
FIG. 5 is a flow chart of the method of constructing the hybrid attention module according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the network structure of the cyclic deep neural network according to an embodiment of the present invention;
FIG. 7 is a flowchart of the training of the deep convolutional generative adversarial network (DCGAN) according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and products consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if," as used herein, may be interpreted as "when," "upon," or "in response to a determination," depending on the context.
The embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
The embodiment of the invention provides a magnetic resonance imaging feature extraction method based on a cyclic deep neural network which, as shown in FIG. 1, comprises the following steps:
S1, performing image reconstruction on the blood-oxygen-level-dependent magnetic resonance images based on the deep convolutional generative adversarial network (DCGAN);
the method for image reconstruction of blood-oxygen-level-dependent magnetic resonance imaging, as shown in FIG. 7, comprises the following steps:
S11, the generator of the DCGAN takes random noise z as input and outputs a generated image G(z); the discriminator of the DCGAN takes the real data x and the generated image G(z) as inputs and outputs D(x) and D[G(z)] respectively;
S12, calculating the discriminator loss function:
D_loss = -(1/m) Σ_{i=1}^{m} [log D(x_i) + log(1 - D(G(z_i)))]   (1),
in formula (1), m denotes the batch size of the magnetic resonance images, i.e., the number of samples drawn at each iteration; the cross entropy of the real samples and the generated samples is computed, and its average over all samples is taken as the discriminator loss function to optimize the discriminator;
S13, calculating the generator loss function:
G_loss = -(1/m) Σ_{i=1}^{m} log D(G(z_i))   (2),
in formula (2), G_loss denotes the generator loss: after the generated data passes through the discriminator, its cross entropy is computed as the loss function of the generator so as to optimize the generator;
S14, training the DCGAN: the generator is required to generate data close enough to reality to deceive the discriminator, while the discriminator is required to distinguish the generated data from the real data, forming a game between the two;
S15, repeating steps S11-S14 until the DCGAN reaches the Nash equilibrium point D[G(z)] = 0.5.
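For illustration only, the adversarial update of steps S11-S15 can be written as a short PyTorch-style sketch. The `Generator`/`Discriminator` modules, the noise dimension `z_dim`, and the optimizers are assumptions rather than the patented architecture, and the discriminator is assumed to end in a sigmoid so that D(·) is a probability:

```python
import torch

def dcgan_step(G, D, real_x, opt_G, opt_D, z_dim=100):
    """One DCGAN training step following formulas (1) and (2).
    G, D, the optimizers, and z_dim are illustrative assumptions."""
    m = real_x.size(0)                          # batch size m in formulas (1)-(2)
    eps = 1e-8                                  # numerical safety for log()

    # Discriminator update: minimize formula (1)
    fake = G(torch.randn(m, z_dim)).detach()    # freeze G while updating D
    d_loss = -(torch.log(D(real_x) + eps)
               + torch.log(1 - D(fake) + eps)).mean()
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator update: minimize formula (2)
    g_loss = -torch.log(D(G(torch.randn(m, z_dim))) + eps).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

    # Training (S14-S15) repeats this game until D(G(z)) settles near 0.5.
    return d_loss.item(), g_loss.item()
```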
S2, constructing an image data set from the reconstructed magnetic resonance images, dividing it into N groups each containing the same number of images of each class, and selecting one group of image data to label;
S3, using the labeled group of image data selected in step S2 to train the constructed cyclic deep neural network;
the method for constructing the circulation depth neural network is shown in fig. 4 and comprises the following steps:
s31, constructing a hybrid attention module, and embedding the hybrid attention module into ResNet101 of a cycle depth network structure;
s32, applying the feature pyramid network FPN in ResNets101 of the Faster R-CNN network structure;
s33, constructing a SENEt attention module, and embedding the SENEt attention module into the ResNets101 network structure obtained in the step S32 to obtain a pyramid attention network;
and S34, repeating the steps S31-S33, and training and optimizing parameters by adopting the image data set.
Referring to FIG. 6, the cyclic deep neural network comprises 14 convolutional layers, 2 pooling layers, and a Softmax layer. The pooling layers are placed after the 4th and 6th convolutional layers; the 8th, 11th, and 14th convolutional layers form a feature pyramid whose feature maps have resolutions of 16 × 16, 8 × 8, and 4 × 4 pixels respectively; the convolutional layers are depthwise separable convolutions comprising 5 groups of 3 × 3 convolution kernels paired with 1 × 1 convolution kernels; each pooling layer is a 2 × 2 max-pooling layer; and the Softmax layer computes the confidence of each lesion category and decides the lesion class.
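As an illustration of the depthwise separable convolutions described above (a per-channel 3 × 3 kernel paired with a 1 × 1 pointwise kernel), a minimal PyTorch sketch follows; the channel counts and the ReLU activation are assumptions, not prescribed by the patent:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise convolution paired with a 1x1 pointwise convolution,
    the kernel pairing described for the 14 convolutional layers above."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)       # one 3x3 kernel per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # 1x1 channel mixing
        self.act = nn.ReLU(inplace=True)                          # activation is an assumption

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))
```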
The method for constructing the hybrid attention module, as shown in FIG. 5, comprises:
S311, designing the spatial domain attention module structure;
S312, designing the channel domain attention module structure;
S313, designing an FPN-based Faster R-CNN structure.
The spatial domain attention module is constructed as follows: the input feature map undergoes global max pooling and global average pooling along the channel dimension, the two results are concatenated along the channel dimension and reduced by a convolution, and a sigmoid activation function generates the spatial domain attention feature map. The calculation formula is:
Y = σ(f^{7×7}([Avgpool(X); Maxpool(X)]))   (3),
in formula (3), X denotes the input feature map of the attention module, Y the output feature map, f^{7×7} a convolution layer with a 7 × 7 kernel, and σ the sigmoid activation function.
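A minimal sketch of formula (3) as a PyTorch module; re-weighting the input by the attention map at the end follows the usual convention and is an assumption here:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial-domain attention of formula (3): channel-wise average and max
    pooling, channel concatenation, a 7x7 convolution, then a sigmoid."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # f^{7x7}

    def forward(self, x):                        # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)        # Avgpool(X) over the channel dim
        mx = x.amax(dim=1, keepdim=True)         # Maxpool(X) over the channel dim
        y = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # formula (3)
        return x * y                             # re-weight the input (assumed usage)
```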
The channel domain attention module is constructed as follows: the input feature map undergoes global max pooling and global average pooling, each result is passed through a multilayer perceptron, the two outputs are added element-wise, and a sigmoid activation function generates the channel domain attention feature map. The calculation formula is:
Y = σ(MLP(Avgpool(X)) + MLP(Maxpool(X)))   (4),
in formula (4), X denotes the input feature map of the attention module, Y the output feature map, MLP the multilayer perceptron, and σ the sigmoid activation function.
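Similarly, a minimal sketch of formula (4); sharing one MLP between the two pooled vectors and the reduction ratio are assumptions in line with common channel-attention designs:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel-domain attention of formula (4): global average and max pooling,
    a shared MLP, element-wise addition, then a sigmoid."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared MLP (assumption)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        b, c = x.shape[:2]
        avg = self.mlp(x.mean(dim=(2, 3)))         # MLP(Avgpool(X))
        mx = self.mlp(x.amax(dim=(2, 3)))          # MLP(Maxpool(X))
        y = torch.sigmoid(avg + mx).view(b, c, 1, 1)  # formula (4)
        return x * y                               # re-weight the input (assumed usage)
```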
The FPN-based Faster R-CNN structure is designed as follows:
the feature map is traversed with a 3 × 3 sliding anchor window to generate anchor boxes and Proposals for target candidate box prediction; during training of the region proposal network (RPN), a candidate whose intersection over union (IoU) with a ground-truth box exceeds 0.7 is given a positive label (a lesion region as target), and a candidate whose IoU is below 0.3 is given a negative label (a normal region);
the Proposals are generated as follows: according to the area w × h of each Proposals box, each box is mapped to the corresponding feature layer P_k for ROI Pooling feature extraction, where k is calculated as:
k = ⌊k_0 + log_2(√(w × h) / 224)⌋   (5),
in formula (5), k_0 = 4, and w and h are the width and height of the Proposals box.
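Formula (5) amounts to a one-line helper; a sketch follows, where clamping k to the available pyramid levels is an assumption carried over from the usual FPN convention rather than stated in the text:

```python
import math

def fpn_level(w: float, h: float, k0: int = 4, k_min: int = 2, k_max: int = 5) -> int:
    """Map a Proposals box of area w*h to its feature layer P_k via formula (5);
    the clamp to [k_min, k_max] is an assumption."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224))
    return max(k_min, min(k_max, k))

# A 224x224 box stays on P4; halving each side drops it one level to P3.
assert fpn_level(224, 224) == 4
assert fpn_level(112, 112) == 3
```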
S4, automatically labeling the next group of image data using the weight information of the cyclic deep neural network trained in step S3;
S5, screening and correcting the labeling results of step S4, adding the image data with corrected labels to the training set, and repeating steps S2-S4 until all image data have been used for training;
S6, extracting features from the reconstructed image data with the cyclic deep neural network trained in step S5, and identifying the feature regions.
The training over all image data in step S5, as shown in FIG. 3, comprises the following steps:
S51, judging whether all groups of image data have been used for training; if not, repeating steps S4-S5;
S52, judging whether the image data set needs to be expanded; if so, further judging whether new types of detected body parts have been added to the image data, and if so, repeating steps S3-S6; if not, finishing the image data training.
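Taken together, steps S2-S5 form a semi-automatic annotation loop. A schematic Python sketch is given below; every callable (`manual_label`, `train`, `auto_label`, `review`) is a caller-supplied placeholder rather than an API from the patent:

```python
from typing import Callable, List, Sequence, Tuple

Image = object   # placeholder for a reconstructed MR image
Label = object   # placeholder for an annotation

def iterative_labeling(
    groups: Sequence[Sequence[Image]],
    manual_label: Callable[[Sequence[Image]], List[Tuple[Image, Label]]],
    train: Callable[[List[Tuple[Image, Label]]], None],
    auto_label: Callable[[Sequence[Image]], List[Tuple[Image, Label]]],
    review: Callable[[List[Tuple[Image, Label]]], List[Tuple[Image, Label]]],
) -> List[Tuple[Image, Label]]:
    """Sketch of steps S2-S5: hand-label one group, then alternate automatic
    labeling, screening/correction, and retraining on the growing set."""
    training_set = manual_label(groups[0])     # S2: one hand-labeled group
    train(training_set)                        # S3: initial training
    for group in groups[1:]:                   # S51: until every group is used
        proposals = auto_label(group)          # S4: label with current weights
        training_set += review(proposals)      # S5: screen and correct the labels
        train(training_set)                    # retrain on the enlarged set
    return training_set
```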
According to the embodiment of the invention, the cyclic deep neural network is constructed by combining a feature fusion technique with an attention mechanism, and multiple spatial-domain and channel-domain attention modules are embedded in the network for image processing, so that the feature information of lesion regions can be effectively extracted and the recognition efficiency of magnetic resonance imaging is greatly improved.
Fig. 2 is a schematic structural diagram of a computer device provided in an embodiment of the present invention; referring to fig. 2 of the drawings, the computer apparatus comprises: an input device 23, an output device 24, a memory 22 and a processor 21; the memory 22 for storing one or more programs; when the one or more programs are executed by the one or more processors 21, the one or more processors 21 are enabled to implement the method for magnetic resonance imaging feature extraction based on the cyclic deep neural network as provided in the above embodiments; wherein the input device 23, the output device 24, the memory 22 and the processor 21 may be connected by a bus or other means, as exemplified by the bus connection in fig. 2.
The memory 22 is a computer-readable and writable storage medium, and may be used to store a software program, a computer-executable program, and program instructions corresponding to the magnetic resonance imaging feature extraction method based on the cyclic deep neural network according to the embodiment of the present invention; the memory 22 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like; further, the memory 22 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device; in some examples, the memory 22 may further include memory located remotely from the processor 21, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 23 may be used to receive input numeric or character information and to generate key signal inputs relating to user settings and function control of the apparatus; the output device 24 may include a display device such as a display screen.
The processor 21 executes software programs, instructions and modules stored in the memory 22 so as to execute various functional applications and data processing of the device, that is, to implement the magnetic resonance imaging feature extraction method based on the cyclic deep neural network.
The computer device provided above can be used to execute the magnetic resonance imaging feature extraction method based on the cyclic deep neural network provided above, and has corresponding functions and beneficial effects.
Embodiments of the present invention also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the magnetic resonance imaging feature extraction method based on the cyclic deep neural network provided in the above embodiments. The storage medium may be any of various types of memory devices or storage devices, including: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk), or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different second computer system connected to the first computer system through a network (such as the internet); the second computer system may provide program instructions to the first computer for execution. A storage medium may also comprise two or more storage media residing in different locations, such as in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium provided by the embodiment of the present invention contains computer executable instructions, and the computer executable instructions are not limited to the magnetic resonance imaging feature extraction method based on the circulation depth neural network described in the above embodiment, and may also perform related operations in the magnetic resonance imaging feature extraction method based on the circulation depth neural network provided by any embodiment of the present invention.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A magnetic resonance imaging feature extraction method based on a cyclic deep neural network, characterized by comprising the following steps:
S1, performing image reconstruction on the blood-oxygen-level-dependent magnetic resonance images based on the deep convolutional generative adversarial network (DCGAN);
the method for image reconstruction of blood-oxygen-level-dependent magnetic resonance imaging comprises the following steps:
S11, the generator of the DCGAN takes random noise z as input and outputs a generated image G(z); the discriminator of the DCGAN takes the real data x and the generated image G(z) as inputs and outputs D(x) and D[G(z)] respectively;
S12, calculating the discriminator loss function:
D_loss = -(1/m) Σ_{i=1}^{m} [log D(x_i) + log(1 - D(G(z_i)))]   (1),
in formula (1), m denotes the batch size of the magnetic resonance images, i.e., the number of samples drawn at each iteration; the cross entropy of the real samples and the generated samples is computed, and its average over all samples is taken as the discriminator loss function to optimize the discriminator;
S13, calculating the generator loss function:
G_loss = -(1/m) Σ_{i=1}^{m} log D(G(z_i))   (2),
in formula (2), G_loss denotes the generator loss: after the generated data passes through the discriminator, its cross entropy is computed as the loss function of the generator so as to optimize the generator;
S14, training the DCGAN: the generator is required to generate data close enough to reality to deceive the discriminator, while the discriminator is required to distinguish the generated data from the real data, forming a game between the two;
S15, repeating steps S11-S14 until the DCGAN reaches the Nash equilibrium point D[G(z)] = 0.5;
S2, constructing an image data set from the reconstructed magnetic resonance images, dividing it into N groups each containing the same number of images of each class, and selecting one group of image data to label;
S3, using the labeled group of image data selected in step S2 to train the constructed cyclic deep neural network;
the cyclic deep neural network is constructed through the following steps:
S31, constructing a hybrid attention module and embedding it into the ResNet101 backbone of the cyclic deep network structure;
S32, applying the feature pyramid network (FPN) to the ResNet101 of the Faster R-CNN network structure;
S33, constructing a SENet attention module and embedding it into the ResNet101 network structure obtained in step S32 to obtain a pyramid attention network;
S34, repeating steps S31-S33, and training and optimizing the parameters on the image data set;
S4, automatically labeling the next group of image data using the weight information of the cyclic deep neural network trained in step S3;
S5, screening the labeling results of step S4, adding the image data with corrected labels to the training set, and repeating steps S2-S4 until all image data have been used for training;
S6, extracting features from the reconstructed image data with the cyclic deep neural network trained in step S5, and identifying the feature regions.
2. The magnetic resonance imaging feature extraction method based on the cyclic deep neural network according to claim 1, characterized in that the training over all image data in step S5 comprises the following steps:
S51, judging whether all groups of image data have been used for training; if not, repeating steps S4-S5;
S52, judging whether the image data set needs to be expanded; if so, further judging whether new types of detected body parts have been added to the image data, and if so, repeating steps S3-S6; if not, finishing the image data training.
3. The magnetic resonance imaging feature extraction method based on the cyclic deep neural network according to claim 1, characterized in that the cyclic deep neural network comprises 14 convolutional layers, 2 pooling layers, and a Softmax layer; the pooling layers are placed after the 4th and 6th convolutional layers; the 8th, 11th, and 14th convolutional layers form a feature pyramid whose feature maps have resolutions of 16 × 16, 8 × 8, and 4 × 4 pixels respectively; the convolutional layers are depthwise separable convolutions comprising 5 groups of 3 × 3 convolution kernels paired with 1 × 1 convolution kernels; each pooling layer is a 2 × 2 max-pooling layer; and the Softmax layer computes the confidence of each lesion category and decides the lesion class.
4. The magnetic resonance imaging feature extraction method based on the cyclic deep neural network according to claim 1, characterized in that the method for constructing the hybrid attention module in step S31 comprises:
S311, designing the spatial domain attention module structure;
S312, designing the channel domain attention module structure;
S313, designing an FPN-based Faster R-CNN structure.
5. The magnetic resonance imaging feature extraction method based on the cyclic deep neural network according to claim 4, characterized in that the spatial domain attention module is constructed as follows: the input feature map undergoes global max pooling and global average pooling along the channel dimension, the two results are concatenated along the channel dimension and reduced by a convolution, and a sigmoid activation function generates the spatial domain attention feature map. The calculation formula is:
Y = σ(f^{7×7}([Avgpool(X); Maxpool(X)]))   (3),
in formula (3), X denotes the input feature map of the attention module, Y the output feature map, f^{7×7} a convolution layer with a 7 × 7 kernel, and σ the sigmoid activation function.
6. The magnetic resonance imaging feature extraction method based on the cyclic deep neural network according to claim 4, characterized in that the channel domain attention module is constructed as follows: the input feature map undergoes global max pooling and global average pooling, each result is passed through a multilayer perceptron, the two outputs are added element-wise, and a sigmoid activation function generates the channel domain attention feature map. The calculation formula is:
Y = σ(MLP(Avgpool(X)) + MLP(Maxpool(X)))   (4),
in formula (4), X denotes the input feature map of the attention module, Y the output feature map, MLP the multilayer perceptron, and σ the sigmoid activation function.
7. The magnetic resonance imaging feature extraction method based on the cyclic deep neural network according to claim 4, characterized in that the FPN-based Faster R-CNN structure is designed as follows:
the feature map is traversed with a 3 × 3 sliding anchor window to generate anchor boxes and Proposals for target candidate box prediction; during training of the region proposal network (RPN), a candidate whose intersection over union (IoU) with a ground-truth box exceeds 0.7 is given a positive label (a lesion region as target), and a candidate whose IoU is below 0.3 is given a negative label (a normal region);
the Proposals are generated as follows: according to the area w × h of each Proposals box, each box is mapped to the corresponding feature layer P_k for ROI Pooling feature extraction, where k is calculated as:
k = ⌊k_0 + log_2(√(w × h) / 224)⌋   (5),
in formula (5), k_0 = 4, and w and h are the width and height of the Proposals box.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the magnetic resonance imaging feature extraction method based on the cyclic deep neural network of any one of claims 1-7.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the magnetic resonance imaging feature extraction method based on the cyclic deep neural network of any one of claims 1-7.
CN202210302838.0A 2022-03-24 2022-03-24 Magnetic resonance imaging feature extraction method based on cyclic depth neural network Withdrawn CN114612654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210302838.0A CN114612654A (en) 2022-03-24 2022-03-24 Magnetic resonance imaging feature extraction method based on cyclic depth neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210302838.0A CN114612654A (en) 2022-03-24 2022-03-24 Magnetic resonance imaging feature extraction method based on cyclic depth neural network

Publications (1)

Publication Number Publication Date
CN114612654A true CN114612654A (en) 2022-06-10

Family

ID=81867509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210302838.0A Withdrawn CN114612654A (en) 2022-03-24 2022-03-24 Magnetic resonance imaging feature extraction method based on cyclic depth neural network

Country Status (1)

Country Link
CN (1) CN114612654A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345952A (en) * 2022-08-10 2022-11-15 华中科技大学协和深圳医院 Magnetic resonance image processing method and system based on neural network
CN115345952B (en) * 2022-08-10 2023-08-11 华中科技大学协和深圳医院 Magnetic resonance image processing method and system based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220610