CN115631370A - Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network - Google Patents

Info

Publication number
CN115631370A
CN115631370A (application CN202211226934.8A)
Authority
CN
China
Prior art keywords
target, features, MRI, image, sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211226934.8A
Other languages
Chinese (zh)
Inventor
孙安澜 (Sun Anlan)
丁佳 (Ding Jia)
吕晨翀 (Lyu Chenchong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yizhun Medical AI Co Ltd
Original Assignee
Beijing Yizhun Medical AI Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhun Medical AI Co Ltd filed Critical Beijing Yizhun Medical AI Co Ltd
Priority to CN202211226934.8A priority Critical patent/CN115631370A/en
Publication of CN115631370A publication Critical patent/CN115631370A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06V 10/764: Classification, e.g. of video objects
    • G06V 10/77: Processing image or video features in feature spaces; data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Recognition using neural networks
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03: Recognition of patterns in medical or anatomical images

Abstract

The present disclosure provides a method and a device for identifying MRI sequence categories based on a convolutional neural network. The method comprises: acquiring an MRI image and a corresponding MRI sequence; performing feature extraction on the MRI image through a convolutional neural network to obtain image features; retrieving the image features through a retrieval model to obtain target features matched with the image features; and determining the target category corresponding to the MRI sequence according to the target features. By applying image retrieval to the identification of MRI sequence categories, the method can automatically identify the target category corresponding to an MRI sequence, is not limited by the number of MRI sequence categories, and is insensitive to changes in the MRI scanner model and the hospital environment.

Description

Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for identifying MRI sequence categories based on a convolutional neural network.
Background
Magnetic resonance imaging (MRI) images are among the most commonly used medical images and are characterized by multiple sequences: each MRI sequence carries different signal values that represent different tissue structures and lesions. In practical applications, an algorithm matched to the category of an MRI sequence must be selected to accurately analyze the lesion information under that sequence, so automatic identification of the MRI sequence category is very important.
At present, the category of an MRI sequence to be identified can be determined by a classification model trained in advance. However, because MRI is a rapidly developing field, new categories of MRI sequences continually appear. For the classification model to recognize a new category, training data of that category must be added and the model retrained; since retraining is needed every time a new category appears, the development and maintenance cost is high.
Disclosure of Invention
The present disclosure provides a method and an apparatus for identifying MRI sequence category based on convolutional neural network, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a convolutional neural network-based MRI sequence category identification method, the method including: acquiring an MRI image and a corresponding MRI sequence; performing feature extraction on the MRI image through a convolutional neural network to obtain image features; retrieving the image features through a retrieval model to obtain target features matched with the image features; and determining a target category corresponding to the MRI sequence according to the target features.
In an embodiment, the performing feature extraction on the MRI image through the convolutional neural network to obtain the image features includes: performing feature extraction on the MRI image through the convolutional neural network to obtain shallow initial features and high-level semantic features; correlating the shallow initial features through a Gram matrix to obtain bottom-layer texture features; and fusing the bottom-layer texture features and the high-level semantic features to obtain the image features.
In an implementation manner, the retrieving, by a retrieval model, an image feature to obtain a target feature matching the image feature includes: determining candidate projection distances corresponding to the image features and each template feature; determining a target projection distance according to the candidate projection distance with the minimum numerical value; and determining the target characteristic according to the target projection distance.
In one embodiment, the determining the target feature according to the target projection distance includes: determining whether the target projection distance meets a target distance index; and if the target projection distance is determined to accord with the target distance index, determining the template characteristic corresponding to the target projection distance as the target characteristic.
In one embodiment, the method further comprises: and if the target projection distance does not accord with the target distance index, adding corresponding template features into the retrieval model according to the image features.
In one implementation, the determining a class of objects corresponding to the MRI sequence according to the object feature includes: and determining the category corresponding to the target feature, and determining the category corresponding to the target feature as the target category corresponding to the MRI sequence.
In an embodiment, before the retrieving, through the retrieval model, of the template feature matching the image features as the target feature, the method further includes: obtaining at least one template sequence, wherein the template sequence is an MRI sequence of a known category; determining template features corresponding to the template sequence; and storing all of the template features in the retrieval model.
According to a second aspect of the present disclosure, there is provided an MRI sequence identification apparatus based on a convolutional neural network, the apparatus comprising: an acquisition module, configured to acquire an MRI image and a corresponding MRI sequence; a feature extraction module, configured to perform feature extraction on the MRI image through a convolutional neural network to obtain image features; a retrieval module, configured to retrieve the image features through a retrieval model to obtain target features matched with the image features; and a first determining module, configured to determine the target category corresponding to the MRI sequence according to the target features.
In one implementation, the feature extraction module includes: a feature extraction submodule, configured to perform feature extraction on the MRI image through the convolutional neural network to obtain shallow initial features and high-level semantic features; a correlation submodule, configured to correlate the shallow initial features through a Gram matrix to obtain bottom-layer texture features; and a fusion submodule, configured to fuse the bottom-layer texture features and the high-level semantic features to obtain the image features.
In one embodiment, the retrieving module includes: the first determining submodule is used for determining candidate projection distances corresponding to the image features and each template feature; the second determining submodule is used for determining the target projection distance according to the candidate projection distance with the minimum numerical value; and the third determining submodule is used for determining the target characteristic according to the target projection distance.
In one embodiment, the third determining sub-module includes: a first determination unit, configured to determine whether the target projection distance meets a target distance index; and the second determining unit is used for determining the template characteristic corresponding to the target projection distance as the target characteristic if the target projection distance is determined to accord with the target distance index.
In one embodiment, the apparatus further comprises: and the third determining unit is used for adding corresponding template features into the template feature set according to the image features if the target projection distance is determined not to accord with the target distance index.
In an embodiment, the first determining module includes: and the fourth determining sub-module is used for determining the category corresponding to the target feature and determining the category corresponding to the target feature as the target category corresponding to the MRI sequence.
In an embodiment, the apparatus further comprises: an obtaining module, configured to obtain at least one template sequence, where the template sequence is an MRI sequence with a known category; a second determining module, configured to determine a template feature corresponding to the template sequence; and the storage module is used for storing all the template features into the retrieval model.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
According to the method and the device for identifying the MRI sequence category based on the convolutional neural network, the image features corresponding to the MRI sequence to be identified are first extracted through the convolutional neural network, and the template feature matching those image features is then retrieved from the template features contained in the retrieval model, thereby determining the target category corresponding to the MRI sequence to be identified. By applying image retrieval to the identification of MRI sequence categories, the method can automatically identify the target category corresponding to an MRI sequence, is not limited by the number of MRI sequence categories, and is insensitive to changes in the MRI scanner model and the hospital environment.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic flow chart showing an implementation flow of an MRI sequence identification method based on a convolutional neural network according to an embodiment of the present disclosure;
FIG. 2a shows a first schematic of an MRI sequence;
FIG. 2b shows a second schematic of an MRI sequence;
FIG. 2c shows a schematic diagram three of an MRI sequence;
FIG. 2d shows a schematic diagram four of an MRI sequence;
fig. 3 is a schematic view of a first implementation scenario of an MRI sequence identification method based on a convolutional neural network according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a second implementation flow of an MRI sequence identification method based on a convolutional neural network according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a third implementation flow of an MRI sequence identification method based on a convolutional neural network according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a constituent structure of an MRI sequence identification apparatus based on a convolutional neural network according to an embodiment of the present disclosure;
fig. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more apparent and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Fig. 1 shows a first implementation flow diagram of an MRI sequence identification method based on a convolutional neural network according to an embodiment of the present disclosure.
As shown in fig. 1, a first embodiment of the present disclosure provides an MRI sequence identification method based on a convolutional neural network, where the method includes: an operation 101 of acquiring an MRI image and a corresponding MRI sequence; operation 102, performing feature extraction on the MRI image through a convolutional neural network to obtain image features; operation 103, retrieving the image features through the retrieval model to obtain target features matched with the image features; in operation 104, a target class corresponding to the MRI sequence is determined based on the target features.
The method for identifying the MRI sequence category based on the convolutional neural network mainly comprises two parts: image feature extraction and a retrieval model. First, the image features corresponding to the MRI sequence to be identified are extracted through the convolutional neural network; then the template feature matching those image features is retrieved from the template features contained in the retrieval model, thereby determining the target category corresponding to the MRI sequence to be identified. A traditional classification model would need to be retrained with training data of each newly appearing MRI sequence category; by contrast, this method applies image retrieval to the identification of MRI sequence categories, can automatically identify the target category corresponding to an MRI sequence, is not limited by the number of MRI sequence categories, and is insensitive to changes in the scanner model and the hospital environment.
In operation 101 of the method, the MRI image may include various types of MRI sequences: a plurality of different types of MRI sequences can be obtained by changing the magnetic resonance imaging parameters during the imaging process. MRI sequence categories include T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and so on.
In operation 102 of the method, fig. 2a shows a first schematic diagram of an MRI sequence (a T1WI sequence), and fig. 2b shows a second schematic diagram (a T2WI sequence). Comparing the two, the imaging position and image structure of the T1WI and T2WI sequences are consistent; the difference lies entirely in the color distribution and contrast inside the sequences, which can be described by the image texture. Fig. 2c shows a third schematic diagram (a sagittal T2WI sequence), and fig. 2d shows a fourth schematic diagram (an axial T2WI sequence). Comparing these two, in both sequences the fluid portions are high-signal and the tissue portions are low-signal; the difference lies in the imaging angle and the imaged content. From the above it can be concluded that distinguishing different categories of MRI sequences requires distinguishing two kinds of features, image texture and image content, which correspond respectively to two kinds of convolutional neural network features: bottom-layer texture features and high-level semantic features.
The high-level semantic features are obtained after multiple convolutions and downsampling steps; their spatial size is usually smaller than that of the original image (for example, the height and width may each be 1/16 of those of the original image). Although reduced in size, the high-level semantic features better represent the semantic information of the image. The bottom-layer texture features are expressed as a Gram matrix, through which hidden relations among the features, i.e., how strongly the features are correlated, can be extracted.
Specifically, the MRI image is input into the convolutional neural network, and the features output by the network at different stages are obtained, for example the shallow features and the deep features. The shallow features have a smaller receptive field and better represent the bottom-layer texture of the MRI image, such as its grayscale changes; the deep features better represent the high-level semantics of the MRI image, such as the imaged body position. The shallow and deep features are fused to obtain image features that contain both the bottom-layer texture and the high-level semantics of the MRI image.
Fig. 3 is a schematic view of a first implementation scenario of an MRI sequence identification method based on a convolutional neural network according to an embodiment of the present disclosure.
As shown in fig. 3, the convolutional neural network may be a residual network, for example a ResNet50 network. The ResNet50 network is divided into five parts: residual block 1, residual block 2, residual block 3, residual block 4 and residual block 5. The MRI image is input into the ResNet50 network; the features extracted by residual blocks 1, 2 and 3 are flattened and subjected to an inner-product operation to obtain the bottom-layer texture features, the features extracted by residual blocks 4 and 5 are used as the high-level semantic features, and the bottom-layer texture features and the high-level semantic features are fused to obtain the image features.
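The patent discloses no code, so the flatten-and-inner-product step for the shallow features and its fusion with the deep features can only be sketched. Below is a minimal numpy illustration; the channel counts, spatial sizes, and the global-average pooling of the deep features are illustrative assumptions, not the real ResNet50 widths or the disclosed fusion operator.

```python
import numpy as np

def gram_feature(feat):
    """Flatten a (C, H, W) feature map and take channel-wise inner
    products, giving a (C, C) Gram matrix of texture correlations."""
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w)          # flatten spatial dimensions
    return flat @ flat.T / (h * w * c)     # normalised inner products

def fuse_features(shallow_feats, deep_feats):
    """Concatenate Gram (texture) vectors from shallow stages with
    pooled (semantic) vectors from deep stages into one image feature."""
    texture = [gram_feature(f).ravel() for f in shallow_feats]
    semantic = [f.mean(axis=(1, 2)) for f in deep_feats]   # global average pool
    return np.concatenate(texture + semantic)

# Stand-in outputs for residual blocks 1-5 of a ResNet50-like backbone.
rng = np.random.default_rng(0)
shallow = [rng.standard_normal((8, 16, 16)) for _ in range(3)]   # blocks 1-3
deep    = [rng.standard_normal((32, 4, 4)) for _ in range(2)]    # blocks 4-5
image_feature = fuse_features(shallow, deep)
print(image_feature.shape)   # (256,): 3*8*8 texture dims + 2*32 semantic dims
```

In a real implementation the five inputs would be the intermediate activations of the backbone rather than random arrays; the sketch only shows how texture and semantic vectors combine into a single retrievable feature.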
In operation 103 of the method, the retrieval model is first trained: an MRI sequence is taken as the anchor (fixed) sample, an MRI sequence of the same category as the anchor is used as a positive sample, an MRI sequence of a different category is used as a negative sample, and the neural network is trained with a triplet loss, so that the resulting retrieval model can judge, by comparing projection distances, whether two MRI sequences belong to the same category. A plurality of template features are stored in the retrieval model in advance; the template features are feature vectors corresponding to MRI sequences of known categories, and the target feature is the template feature most similar to the image features. For example, for a given hospital, MRI sequences of known categories, such as T1WI, T2WI and DWI, are obtained as template samples, and their feature vectors are extracted to obtain the corresponding template features. The image features are then input into the retrieval model, the template feature most similar to them is retrieved from all the pre-stored template features, and that template feature is taken as the target feature.
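The triplet training described above can be sketched numerically. The disclosure names triplet loss but gives neither the distance function nor the margin, so the Euclidean distance and margin value below are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull features of same-category sequences
    together and push different-category features at least `margin`
    apart. margin=1.0 is an illustrative default, not from the patent."""
    d_pos = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)

# Toy 2-D "features": the positive lies near the anchor, the negative far.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([3.0, 0.0])
print(triplet_loss(a, p, n))   # 0.0: already separated by more than margin
print(triplet_loss(a, p, p))   # margin penalty when negative == positive
```

During training this scalar would be minimized over many (anchor, positive, negative) triplets drawn from sequences of known categories, driving the feature extractor toward small within-category and large between-category projection distances.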
In operation 104 of the method, the target class is a sequence class corresponding to the MRI sequence to be identified, the target feature is a template feature most similar to the image feature, and the sequence class corresponding to the target feature is determined as the target class corresponding to the MRI sequence to be identified. For example, when the sequence class corresponding to the target feature is T1WI, the target class corresponding to the MRI sequence is T1WI.
The method applies image retrieval to the identification of the MRI sequence category, can automatically identify the target category corresponding to an MRI sequence, is not limited by the number of MRI sequence categories, and is insensitive to the scanner model and to changes in the hospital environment. Moreover, because the retrieval model is trained with a triplet loss across multiple sequence categories such as T1WI, T2WI and DWI, the model is more robust and generalizes better.
In an embodiment of the present disclosure, the operation 102 includes: firstly, carrying out feature extraction on an MRI image through a convolutional neural network to obtain a shallow layer initial feature and a high layer semantic feature; then, the shallow layer initial features are associated through a gram matrix to obtain bottom layer texture features; and finally, fusing the bottom texture features and the high-level semantic features to obtain image features.
As shown in the ResNet50 network of fig. 3, the shallow initial features are the features extracted by residual blocks 1, 2 and 3, and the high-level semantic features are the features extracted by residual blocks 4 and 5. The shallow initial features are correlated through a Gram matrix, which is computed as follows:

G^{i}_{c,c'} = \frac{1}{H_i W_i C_i} \sum_{h=1}^{H_i} \sum_{w=1}^{W_i} f_i(x)_{h,w,c} \, f_i(x)_{h,w,c'}

where G^{i} is a matrix of dimension C_i x C_i; H_i, W_i and C_i are the height, width and number of channels of the i-th layer feature; and f_i(x)_{h,w,c} denotes the feature value at position (h, w) in channel c computed by the network from the first layer through the i-th layer. The entry G^{i}_{c,c'} is obtained by flattening channels c and c' and summing their element-wise products, i.e., it measures the similarity of the flattened channels c and c'. The bottom-layer texture features are obtained by passing the shallow initial features through this Gram matrix, and are fused with the high-level semantic features extracted by residual blocks 4 and 5 to obtain the image features.
Fig. 4 shows a schematic diagram of a second implementation flow of the MRI sequence identification method based on the convolutional neural network according to an embodiment of the present disclosure.
As shown in fig. 4, in an embodiment of the present disclosure, the operation 103 includes: in operation 1031, candidate projection distances corresponding to the image features and each template feature are determined; operation 1032, determining a target projection distance according to the candidate projection distance with the smallest value; in operation 1033, a feature of the object is determined based on the object projection distance.
The candidate projection distances are the projection distances between the image features and each template feature; the target projection distance is the candidate projection distance with the smallest value; and the target feature is the template feature corresponding to the target projection distance. Specifically, the projection distance between the image features and each template feature is calculated by a projection distance algorithm, all the resulting distances are taken as candidate projection distances, their values are compared, the candidate projection distance with the smallest value is selected as the target projection distance, and the template feature corresponding to the target projection distance is determined as the target feature.
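Operations 1031 and 1032 amount to a nearest-neighbor search over the stored template features. A minimal sketch follows; the patent does not define its "projection distance" algorithm, so plain Euclidean distance is used here as a stand-in, and the 2-D template vectors are invented for illustration.

```python
import numpy as np

def retrieve(image_feature, template_features):
    """Return (index, distance) of the template feature closest to the
    query, i.e. the candidate projection distance with the smallest
    value. Euclidean distance stands in for the projection distance."""
    dists = [np.linalg.norm(image_feature - t) for t in template_features]
    i = int(np.argmin(dists))          # candidate with the smallest value
    return i, dists[i]

templates = [np.array([1.0, 0.0]),     # e.g. a T1WI template feature
             np.array([0.0, 1.0]),     # e.g. a T2WI template feature
             np.array([1.0, 1.0])]     # e.g. a DWI template feature
query = np.array([0.9, 0.1])
idx, dist = retrieve(query, templates)
print(idx)   # 0: the query is closest to the first (T1WI) template
```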
In one embodiment of the present disclosure, the operation 1033 includes: first determining whether the target projection distance meets the target distance index; and then, if the target projection distance is determined to meet the target distance index, determining the template feature corresponding to the target projection distance as the target feature.
If the value of the target projection distance is less than or equal to the target distance index, the template feature corresponding to the target projection distance is determined as the target feature.
In an embodiment of the present disclosure, the operation 1033 further includes: if the target projection distance does not meet the target distance index, determining the target feature according to the image features; and finally adding a corresponding template feature to the template feature set according to the image features.
If the value of the target projection distance is greater than the target distance index, the target feature is determined according to the image features: specifically, the sequence category corresponding to the image features is judged manually on site, and after the sequence category is determined, the image features are added to the retrieval model as a new template feature.
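This threshold test turns the retrieval step into an open-set identifier: known categories are labeled, unknown ones are enrolled as new templates. A minimal sketch, assuming Euclidean distance and an illustrative threshold value (the patent specifies neither):

```python
import numpy as np

def identify_or_enroll(image_feature, bank, labels, max_distance=0.5):
    """If the best template match is within `max_distance` (standing in
    for the 'target distance index'), return its label; otherwise enroll
    the feature as a new template whose label is assigned manually on
    site. max_distance=0.5 is an illustrative threshold."""
    if bank:
        dists = [np.linalg.norm(image_feature - t) for t in bank]
        i = int(np.argmin(dists))
        if dists[i] <= max_distance:
            return labels[i]           # known category
    bank.append(image_feature)         # new category: store as template
    labels.append(None)                # label to be filled in manually
    return None

bank, labels = [np.array([0.0, 0.0])], ["T1WI"]
print(identify_or_enroll(np.array([0.1, 0.0]), bank, labels))  # T1WI
print(identify_or_enroll(np.array([5.0, 5.0]), bank, labels))  # None: enrolled
print(len(bank))   # 2: the unknown sequence became a new template
```

This mirrors the cost argument in the text: handling a new sequence category only appends one feature to the bank, with no retraining of the network.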
When a new type of MRI sequence appears, the method can identify the type of the MRI sequence only by storing the image characteristics corresponding to the type of MRI sequence as new template characteristics in a retrieval model, thereby greatly reducing the development and maintenance cost.
In this method a set of template features is preset, and the MRI sequence to be identified only needs to be matched against them; its sequence category can then be judged from the matched template feature. When the image features of the MRI sequence to be identified differ greatly from all the preset template features, the new MRI sequence can be added on site to the retrieval model as a new template feature, so that MRI sequences of this new category can be identified subsequently.
In an embodiment of the present disclosure, the operation 104 includes: and determining the category corresponding to the target feature, and determining the category corresponding to the target feature as the target category corresponding to the MRI sequence.
The sequence type of the target feature is known, and after the target feature corresponding to the image feature is determined, the sequence type corresponding to the target feature can be determined as the target type corresponding to the MRI sequence to be identified.
In an embodiment of the present disclosure, before the above operation 103, the method further comprises: firstly, obtaining at least one template sequence, wherein the template sequence is an MRI sequence with known category; then determining template features corresponding to the template sequences; and finally determining all the template features as the template feature set.
A template sequence is an MRI sequence whose sequence category is known in advance. At least one template sequence is obtained for each sequence category, feature extraction is performed on each template sequence to determine its corresponding template feature, and all the template features are collected together and stored in the retrieval model. For example, suppose template sequence 1, template sequence 2, and template sequence 3 are obtained and their sequence categories are known (for example, T1WI or T2WI). Feature extraction is performed on the three template sequences to obtain the corresponding template feature 1, template feature 2, and template feature 3, which are collected together and stored in the retrieval model. It can be understood that these three template features are only exemplary; in practice, the number of sequence categories and template features is far greater.
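The construction of the template feature set described above might be sketched as follows (the `extract_features` callable and all names are hypothetical; any CNN mapping a sequence to a fixed-length feature vector would fill that role):

```python
import numpy as np

def build_template_set(template_sequences, extract_features):
    """Build the retrieval model's template feature set from MRI sequences
    whose categories are already known.

    template_sequences: list of (sequence_data, category) pairs.
    extract_features:   feature extractor (assumed: any callable mapping a
                        sequence to a 1-D feature vector).
    """
    feats, labels = [], []
    for seq, category in template_sequences:
        feats.append(extract_features(seq))  # template feature for this sequence
        labels.append(category)              # its known sequence category
    # (N, D) feature matrix plus aligned category list = the "retrieval model"
    return np.stack(feats), labels
```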
Fig. 5 shows a schematic flow chart of a third implementation of the MRI sequence identification method based on a convolutional neural network according to an embodiment of the present disclosure. To facilitate understanding, the above embodiments are described in detail with reference to Fig. 5. As shown in Fig. 5, the method includes:
step 501, obtaining an MRI image to be identified;
step 502, inputting the MRI image into a convolutional neural network to obtain image features, including the bottom-layer texture features and high-layer semantic features of the MRI sequence;
step 503, respectively calculating the projection distance between the image feature and each template feature in the retrieval model, and determining candidate projection distances from all the calculated projection distances;
step 504, sorting the projection distances in the candidate projection distances according to the numerical values, and selecting the projection distance with the minimum numerical value as a target projection distance;
step 505, comparing the value of the target projection distance with the target distance index to determine whether the target projection distance is less than or equal to the target distance index; if yes, executing step 506, and if not, executing step 508;
step 506, if the target projection distance is less than or equal to the target distance index, determining the template characteristic corresponding to the target projection distance as a target characteristic;
step 507, determining a sequence category corresponding to the target feature, and determining the sequence category as a target category of the MRI sequence to be identified;
step 508, if the target projection distance is greater than the target distance index, obtaining the sequence category corresponding to the image features through manual identification on site;
step 509, add the image features as new template features to the search model.
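Putting steps 501-509 together, an illustrative end-to-end sketch (a Euclidean projection distance and in-memory template storage are assumptions, and `manual_label` stands in for the on-site manual identification of step 508; none of the names come from the disclosure):

```python
import numpy as np

def identify_sequence(mri_image, cnn, templates, labels, dist_index, manual_label=None):
    """Sketch of steps 501-509: extract features, retrieve the nearest
    template feature, and either return its category or enroll a new one."""
    feat = cnn(mri_image)                              # steps 501-502: feature extraction
    dists = np.linalg.norm(templates - feat, axis=1)   # step 503: candidate projection distances
    i = int(np.argmin(dists))                          # step 504: target projection distance
    if dists[i] <= dist_index:                         # steps 505-506: index met
        return labels[i], templates, labels            # step 507: category of matched template
    # steps 508-509: manual identification, then enroll the new template feature
    templates = np.vstack([templates, feat])
    labels = labels + [manual_label]
    return manual_label, templates, labels
```

The function returns the (possibly grown) template store so that a newly enrolled category is available for subsequent identifications.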
Fig. 6 shows a first schematic diagram of an MRI sequence identification apparatus based on a convolutional neural network according to an embodiment of the present disclosure.
As shown in fig. 6, an embodiment of the present disclosure provides an apparatus for identifying an MRI sequence category based on a convolutional neural network, where the apparatus includes: an obtaining module 601, configured to obtain an MRI image and a corresponding MRI sequence; the feature extraction module 602 is configured to perform feature extraction on the MRI image through a convolutional neural network to obtain an image feature; the retrieval module 603 is configured to retrieve the image features through a retrieval model to obtain target features matched with the image features; a first determining module 604, configured to determine a target class corresponding to the MRI sequence according to the target feature.
In one embodiment, the feature extraction module 602 includes: the feature extraction submodule 6021 is used for performing feature extraction on the MRI image through a convolutional neural network to obtain a shallow layer initial feature and a high layer semantic feature; the relation submodule 6022 is used for relating the shallow layer initial features through the gram matrix to obtain bottom layer texture features; and a fusion submodule 6023 for fusing the bottom layer texture features and the high layer semantic features to obtain image features.
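The Gram-matrix correlation performed by submodule 6022 and the fusion of submodule 6023 can be illustrated as follows (a minimal sketch; fusing by concatenation is an assumption, since the patent only states that the two feature sets are fused):

```python
import numpy as np

def texture_and_fuse(shallow, deep):
    """Correlate shallow initial features via a Gram matrix to obtain
    bottom-layer texture features, then fuse them with high-layer
    semantic features.

    shallow: (C, H, W) shallow feature maps from the CNN
    deep:    (D,) pooled high-layer semantic feature vector
    """
    c, h, w = shallow.shape
    flat = shallow.reshape(c, h * w)
    # Gram matrix: channel-by-channel correlations, normalized by spatial size
    gram = flat @ flat.T / (h * w)          # (C, C)
    texture = gram.flatten()                # bottom-layer texture feature vector
    return np.concatenate([texture, deep])  # fused image feature
```

The Gram matrix discards spatial layout and keeps only inter-channel correlations, which is why it is commonly used as a texture descriptor.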
In one embodiment, the retrieving module 603 includes: a first determining submodule 6031, configured to determine a candidate projection distance corresponding to each template feature; a second determining submodule 6032, configured to determine a target projection distance according to the candidate projection distance with the smallest value; and a third determining submodule 6033, configured to determine a target feature according to the target projection distance.
In one implementation, the third determining submodule 6033 includes: a first determining unit 60331 configured to determine whether the target projection distance meets a target distance index; a second determining unit 60332, configured to determine, if it is determined that the target projection distance meets the target distance index, the template feature corresponding to the target projection distance as the target feature.
In one embodiment, an MRI sequence identification apparatus based on a convolutional neural network further includes: a third determining unit 60333, configured to add, according to the image feature, a corresponding template feature to the template feature set if it is determined that the target projection distance does not meet the target distance index.
In one implementation, the first determining module 604 includes: the fourth determining sub-module 6041 is configured to determine a category corresponding to the target feature, and determine the category corresponding to the target feature as the target category corresponding to the MRI sequence.
In one embodiment, an MRI sequence identification apparatus based on a convolutional neural network further includes: an obtaining module 605, configured to obtain at least one template sequence, where the template sequence is an MRI sequence with a known category; a second determining module 606, configured to determine a template feature corresponding to the template sequence; and a storage module 607, configured to store all the template features in the retrieval model.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
FIG. 7 shows a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or loaded from a storage unit 708 into a Random Access Memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to one another by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be any of a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so on. The computing unit 701 performs the respective methods and processes described above, such as the MRI sequence identification method based on a convolutional neural network. For example, in some embodiments, the MRI sequence identification method based on a convolutional neural network may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the MRI sequence identification method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the MRI sequence identification method based on a convolutional neural network by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method for identifying MRI sequence classes based on a convolutional neural network, the method comprising:
acquiring an MRI image and a corresponding MRI sequence;
performing feature extraction on the MRI image through a convolutional neural network to obtain image features;
retrieving the image features through a retrieval model to obtain target features matched with the image features;
and determining the target class corresponding to the MRI sequence according to the target characteristics.
2. The method of claim 1, wherein the extracting the features of the MRI image through the convolutional neural network to obtain image features comprises:
performing feature extraction on the MRI image through the convolutional neural network to obtain a shallow layer initial feature and a high layer semantic feature;
the shallow layer initial features are related through a gram matrix to obtain bottom layer texture features;
and fusing the bottom layer texture features and the high-layer semantic features to obtain the image features.
3. The method according to claim 1, wherein the retrieving image features through a retrieval model to obtain target features matching the image features comprises:
determining candidate projection distances corresponding to the image features and each template feature;
determining a target projection distance according to the candidate projection distance with the minimum numerical value;
and determining the target characteristic according to the target projection distance.
4. The method of claim 3, wherein said determining the target feature from the target projection distance comprises:
determining whether the target projection distance meets a target distance index;
and if the target projection distance is determined to accord with the target distance index, determining the template characteristic corresponding to the target projection distance as the target characteristic.
5. The method of claim 4, further comprising:
and if the target projection distance does not accord with the target distance index, adding corresponding template features into the retrieval model according to the image features.
6. The method according to claim 1, wherein the determining the object class corresponding to the MRI sequence according to the object feature comprises:
and determining the category corresponding to the target feature, and determining the category corresponding to the target feature as the target category corresponding to the MRI sequence.
7. The method according to claim 1, wherein before the retrieving template features matching the image features as target features by the retrieval model, the method further comprises:
obtaining at least one template sequence, wherein the template sequence is an MRI sequence with known category;
determining template features corresponding to the template sequences;
storing all of the template features in the retrieval model.
8. An apparatus for identifying MRI sequence classes based on a convolutional neural network, the apparatus comprising:
the acquisition module is used for acquiring an MRI image and a corresponding MRI sequence;
the characteristic extraction module is used for extracting the characteristics of the MRI image through a convolutional neural network to obtain image characteristics;
the retrieval module is used for retrieving the image features through a retrieval model to obtain target features matched with the image features;
and the first determining module is used for determining the target class corresponding to the MRI sequence according to the target characteristic.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202211226934.8A 2022-10-09 2022-10-09 Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network Pending CN115631370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211226934.8A CN115631370A (en) 2022-10-09 2022-10-09 Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211226934.8A CN115631370A (en) 2022-10-09 2022-10-09 Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN115631370A true CN115631370A (en) 2023-01-20

Family

ID=84904533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211226934.8A Pending CN115631370A (en) 2022-10-09 2022-10-09 Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN115631370A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116246774A (en) * 2023-03-15 2023-06-09 北京医准智能科技有限公司 Classification method, device and equipment based on information fusion

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109189985A (en) * 2018-08-17 2019-01-11 北京达佳互联信息技术有限公司 Text style processing method, device, electronic equipment and storage medium
CN111127309A (en) * 2019-12-12 2020-05-08 杭州格像科技有限公司 Portrait style transfer model training method, portrait style transfer method and device
CN111210441A (en) * 2020-01-02 2020-05-29 苏州瑞派宁科技有限公司 Tumor prediction method and device, cloud platform and computer-readable storage medium
CN111583356A (en) * 2020-05-13 2020-08-25 首都医科大学附属北京友谊医院 Magnetic resonance image synthesis method and device based on convolutional neural network
CN113177450A (en) * 2021-04-20 2021-07-27 北京有竹居网络技术有限公司 Behavior recognition method and device, electronic equipment and storage medium
CN114119546A (en) * 2021-11-25 2022-03-01 推想医疗科技股份有限公司 Method and device for detecting MRI image
CN114170221A (en) * 2021-12-23 2022-03-11 深圳市铱硙医疗科技有限公司 Method and system for confirming brain diseases based on images
CN114219983A (en) * 2021-12-17 2022-03-22 国家电网有限公司信息通信分公司 Neural network training method, image retrieval method and device
CN114998247A (en) * 2022-05-30 2022-09-02 深圳市联影高端医疗装备创新研究院 Abnormality prediction method, abnormality prediction device, computer apparatus, and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG Xiangang et al.: "Research on Engineering Projects in Digital Image Pattern Recognition", vol. 1, Southwest Jiaotong University Press, pages: 41 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116246774A (en) * 2023-03-15 2023-06-09 北京医准智能科技有限公司 Classification method, device and equipment based on information fusion
CN116246774B (en) * 2023-03-15 2023-11-24 浙江医准智能科技有限公司 Classification method, device and equipment based on information fusion

Similar Documents

Publication Publication Date Title
CN109447154B (en) Picture similarity detection method, device, medium and electronic equipment
CN108280477B (en) Method and apparatus for clustering images
CN111860573A (en) Model training method, image class detection method and device and electronic equipment
CN112784778B (en) Method, apparatus, device and medium for generating model and identifying age and sex
CN110781413B (en) Method and device for determining interest points, storage medium and electronic equipment
CN112560710B (en) Method for constructing finger vein recognition system and finger vein recognition system
CN113705628B (en) Determination method and device of pre-training model, electronic equipment and storage medium
CN113362314B (en) Medical image recognition method, recognition model training method and device
CN112989995B (en) Text detection method and device and electronic equipment
CN112560993A (en) Data screening method and device, electronic equipment and storage medium
CN114648676A (en) Point cloud processing model training and point cloud instance segmentation method and device
US20230096921A1 (en) Image recognition method and apparatus, electronic device and readable storage medium
CN115631370A (en) Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network
CN114548213A (en) Model training method, image recognition method, terminal device, and computer medium
CN112052730A (en) 3D dynamic portrait recognition monitoring device and method
CN109543716B (en) K-line form image identification method based on deep learning
CN114691918B (en) Radar image retrieval method and device based on artificial intelligence and electronic equipment
CN117333487B (en) Acne classification method, device, equipment and storage medium
CN110599456A (en) Method for extracting specific region of medical image
CN110750673A (en) Image processing method, device, equipment and storage medium
CN113344890B (en) Medical image recognition method, recognition model training method and device
CN114998607B (en) Ultrasonic image feature extraction method and device, electronic equipment and storage medium
CN116071628B (en) Image processing method, device, electronic equipment and storage medium
CN117115872A (en) Blood vessel identification method and device, electronic equipment and storage medium
CN117788903A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 3011, 2nd Floor, Building A, No. 1092 Jiangnan Road, Nanmingshan Street, Liandu District, Lishui City, Zhejiang Province, 323000

Applicant after: Zhejiang Yizhun Intelligent Technology Co.,Ltd.

Address before: No. 1202-1203, 12 / F, block a, Zhizhen building, No. 7, Zhichun Road, Haidian District, Beijing 100083

Applicant before: Beijing Yizhun Intelligent Technology Co.,Ltd.

CB02 Change of applicant information