CN114332947A - Image classification system and terminal equipment - Google Patents


Publication number
CN114332947A
CN114332947A (application number CN202111677051.4A)
Authority
CN
China
Prior art keywords
sequence
image
images
classification
classification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111677051.4A
Other languages
Chinese (zh)
Inventor
胡湛棋
廖建湘
王海峰
蒋典
赵霞
袁碧霞
陈黎
林素芳
邹东方
叶园珍
段婧
赵彩蕾
林荣波
曾洪武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Childrens Hospital
Original Assignee
Shenzhen Childrens Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Childrens Hospital filed Critical Shenzhen Childrens Hospital
Priority to CN202111677051.4A priority Critical patent/CN114332947A/en
Publication of CN114332947A publication Critical patent/CN114332947A/en
Pending legal-status Critical Current

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses an image classification system and a terminal device. The image classification system comprises: an image acquisition module for acquiring a first sequence image and a second sequence image, the first and second sequence images being magnetic resonance sequence images acquired in different modalities; an image fusion module for fusing the first sequence image and the second sequence image to obtain a third sequence image; and a prediction classification module for classifying and recognizing the third sequence image with a trained prediction classification model and determining the corresponding classification result. By acquiring sequence images in only two different modalities and fusing them, the invention improves the visual salience of the lesion; feature extraction and classification prediction by the prediction classification model then realize the diagnosis of tuberous sclerosis, effectively improving both diagnostic efficiency and classification accuracy.

Description

Image classification system and terminal equipment
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to an image classification system and terminal equipment.
Background
Tuberous sclerosis complex (TSC) is an autosomal dominant hereditary disease. It leads to uncontrolled cell proliferation and differentiation involving almost all organs and systems, notably the brain, skin, kidneys, and heart; its characteristic pathological change is the hamartoma, and its abnormal manifestations in the nervous system can usually be observed in brain images.
Magnetic Resonance Imaging (MRI) offers rich soft-tissue contrast and is an advanced imaging tool for the clinical diagnosis of TSC. However, because the brain lesions of tuberous sclerosis are multiple and located in the cortex, the subcortical white matter, and the periventricular region, diagnosing a patient requires scanning many (often hundreds of) sequence images, and a doctor needs considerable time to review them all to complete a diagnosis. The diagnosis of tuberous sclerosis therefore currently suffers from low efficiency.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image classification system and a terminal device, so as to solve the problem of low efficiency in the diagnosis of tuberous sclerosis.
In a first aspect, an embodiment of the present invention provides an image classification system, including:
the image acquisition module is used for acquiring a first sequence image and a second sequence image, wherein the first sequence image and the second sequence image are magnetic resonance sequence images acquired by different modalities;
the image fusion module is used for fusing the first sequence image and the second sequence image to obtain a third sequence image;
and the prediction classification module is used for carrying out classification recognition on the third sequence images based on the trained prediction classification model and determining a classification result corresponding to the third sequence images.
Optionally, the image fusion module fuses the first sequence image and the second sequence image through the physical correlation between the first sequence image and the second sequence image.
Optionally, the prediction classification model includes a first three-dimensional convolutional layer, a mobile inverted bottleneck three-dimensional convolution block (MBConv3D), a second three-dimensional convolutional layer, a global average pooling layer, and a fully connected layer.
Optionally, the prediction classification module comprises:
and the model construction unit is used for constructing a prediction classification model.
And the model training unit is used for training the constructed prediction classification model based on the training data to obtain the trained prediction classification model.
Optionally, the model training unit is specifically configured to: and pre-training the constructed prediction classification model based on the visual database to obtain a pre-trained prediction classification model, and adjusting parameters of the pre-trained prediction classification model based on the training data set to obtain the trained prediction classification model.
Optionally, the model training unit is further configured to acquire a training data set, the training data set including sequence images of patients and sequence images of normal subjects.
Optionally, the first sequence of images and the second sequence of images are both three-dimensional images.
In a second aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and the processor implements the following steps when executing the computer program:
acquiring a first sequence of images and a second sequence of images; the first sequence image and the second sequence image are magnetic resonance sequence images acquired by different modalities;
fusing the first sequence image and the second sequence image to obtain a third sequence image;
and carrying out classification recognition on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
Optionally, the processor, when executing the computer program, further implements the following steps:
constructing a prediction classification model;
and training the prediction classification model based on training data to obtain the trained prediction classification model.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when executed by a processor, the computer program implements the following steps:
acquiring a first sequence of images and a second sequence of images; the first sequence image and the second sequence image are magnetic resonance sequence images acquired by different modalities;
fusing the first sequence image and the second sequence image to obtain a third sequence image;
and carrying out classification recognition on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
In a fourth aspect, an embodiment of the present invention provides a computer program product, which when run on a terminal device, causes the terminal device to perform the following steps:
acquiring a first sequence of images and a second sequence of images; the first sequence image and the second sequence image are magnetic resonance sequence images acquired by different modalities;
fusing the first sequence image and the second sequence image to obtain a third sequence image;
and carrying out classification recognition on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
The image classification system, the terminal device, the computer readable storage medium and the computer program product provided by the embodiment of the invention have the following beneficial effects:
according to the image classification system provided by the embodiment of the invention, by utilizing the physical characteristics of the magnetic resonance sequence images of different modes, the visual effect of the focus can be improved only by acquiring the sequence images of different modes twice and fusing the sequence images of different modes, and the diagnosis of the tuberous sclerosis can be realized by performing characteristic extraction and classification prediction through the prediction classification model, so that the diagnosis efficiency of the tuberous sclerosis is effectively improved, and the classification accuracy can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an image classification system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a prediction classification model according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart diagram of method steps implemented when a processor of a terminal device executes a computer program provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. Furthermore, in the description of the present invention and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
It should also be appreciated that reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Medical images refer to techniques and procedures for obtaining images of internal tissues of a human body or a part of the human body in a non-invasive manner for medical or medical research purposes, and may be acquired by a medical imaging system.
Magnetic Resonance Imaging (MRI) is an imaging technique that reconstructs images from the signals generated by the resonance of atomic nuclei in a strong magnetic field. Radio-frequency pulses excite atomic nuclei with non-zero spin in the magnetic field; after the pulse stops, the nuclei relax, an induction coil collects the signal during relaxation, and the signal is reconstructed into an image by mathematical methods. By scanning the brain with magnetic resonance imaging, corresponding sequence images can be obtained.
Magnetic resonance imaging has different modalities, each of which captures certain features of the underlying anatomy and provides a unique view of the intrinsic magnetic resonance parameters; scanning in different modalities yields the corresponding sequence images. The modalities include the proton-density-weighted imaging modality, the T1-weighted imaging (T1W) modality, the T2-weighted imaging (T2W) modality, the fluid-attenuated inversion recovery (FLAIR) modality, and the like. Acquiring images of the same anatomical structure with multiple different contrasts increases the variety of diagnostic information available in an MRI examination.
When scanning TSC patients, abnormally high or low signals can be observed in the FLAIR sequence images of the MRI (images acquired with the FLAIR modality) and in the T2W sequence images (images acquired with the T2W modality). However, the T2W sequence images contain strong cerebrospinal fluid (CSF) signals, which produce partial-volume artifacts and thereby hinder the observation of cortical and subcortical lesions. The FLAIR sequence images suppress cerebrospinal fluid well and effectively enhance lesion visualization, but they do not include all of the tissue contrast a physician requires, so multiple sequence images are typically scanned for each patient, and even an experienced professional spends considerable time completing a diagnosis.
In the aspect of artificial intelligence diagnosis, Convolutional Neural Networks (CNNs) are currently the usual means of image recognition and classification: they extract features automatically from data and perform representation learning. However, convolutional deep learning algorithms usually need large data sets to reach good accuracy. TSC is a rare disease and MRI data of TSC patients are difficult to collect, so it is hard to provide a data set large enough to train a convolutional neural network, which limits the application of existing artificial intelligence diagnosis to TSC.
Based on this, the embodiment of the present invention provides an image classification system, which performs fusion processing on a T2W sequence image and a FLAIR sequence image to obtain a fused sequence image, and performs feature extraction and classification prediction by using a prediction classification model, so as to implement diagnosis of tuberous sclerosis, effectively improve diagnosis efficiency of tuberous sclerosis, and also improve classification accuracy.
The following will describe the image classification system and the terminal device provided by the embodiment of the invention in detail:
referring to fig. 1, fig. 1 is a schematic structural diagram of an image classification system according to an embodiment of the present invention.
As shown in fig. 1, the image classification system may include an image acquisition module 10, an image fusion module 20, and a prediction classification module 30, which are detailed as follows:
the image acquiring module 10 is configured to acquire a first sequence of images and a second sequence of images.
In an embodiment of the present invention, the first sequence image and the second sequence image are magnetic resonance sequence images acquired by using different modalities.
In an embodiment of the present invention, the first sequence of images is a T2W sequence of images, and the second sequence of images is a FLAIR sequence of images. Of course, the first sequence image may be a FLAIR sequence image, and the second sequence image may be a T2W sequence image. The FLAIR sequence image is a sequence image obtained by controlling the magnetic resonance imaging system to scan the preset part of the person to be detected through a FLAIR modality, and the T2W sequence image is a sequence image obtained by controlling the magnetic resonance imaging system to scan the preset part of the person to be detected through a T2W modality.
It should be noted that the preset portion may be determined according to the diagnostic requirement; for example, when the requirement is to diagnose whether the brain has a pathological condition, the preset portion is usually the head. The present invention does not limit this.
In an embodiment of the present invention, after the magnetic resonance imaging system is controlled to scan the preset region to be detected based on the FLAIR modality and the T2W modality, the first sequence image and the second sequence image are obtained, and then the first sequence image and the second sequence image are sent to a Picture Archiving and Communication System (PACS) and stored by the PACS. The image acquiring module 10 may communicate with the image archiving and communication system to acquire the first sequence of images and the second sequence of images from the image archiving and communication system.
In an embodiment of the present invention, the image acquisition module 10 may further directly communicate with a magnetic resonance imaging system, the magnetic resonance imaging system is controlled to scan a preset region to be detected based on a FLAIR modality and a T2W modality to obtain the first sequence image and the second sequence image, and the magnetic resonance imaging system directly sends the first sequence image and the second sequence image obtained by scanning to the image acquisition module 10, so that the image acquisition module 10 acquires the first sequence image and the second sequence image.
It is understood that the image acquiring module 10 may acquire the first sequence image and the second sequence image from other devices storing the first sequence image and the second sequence image, which are provided by way of example and are not limited.
Of course, after acquiring the first sequence image and the second sequence image, the image acquiring module 10 may store them in the image classification system and call them up when image recognition and classification are required.
It should be noted that the first sequence image and the second sequence image are detection images of the same person to be detected, and when storing the detection images, the identification codes (unique identification codes such as an identification number, an identification card number, and a medical diagnosis card number) of the person to be detected may be used for association storage. Of course, time information such as date and scanning time can be added to distinguish the detected images scanned by the same patient at different times. And the first sequence image and the second sequence image which are respectively scanned by the same person to be detected at the same time are stored in a correlated manner for subsequent use.
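The associated storage described above amounts to keying each modality pair by a unique subject identifier plus the scan time, so the T2W and FLAIR images from the same session are retrieved together. A minimal sketch follows; all names (`SequenceStore`, the key format, the string volumes) are hypothetical illustrations, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SequenceStore:
    """Hypothetical association store: one (subject, scan-time) key per modality pair."""
    _store: dict = field(default_factory=dict)

    def save(self, subject_id: str, scan_time: str, t2w, flair) -> None:
        # The two sequence images scanned in the same session share one key.
        self._store[(subject_id, scan_time)] = {"T2W": t2w, "FLAIR": flair}

    def load(self, subject_id: str, scan_time: str) -> dict:
        return self._store[(subject_id, scan_time)]

store = SequenceStore()
store.save("patient-001", "2021-12-31T10:00", t2w="t2w_volume", flair="flair_volume")
pair = store.load("patient-001", "2021-12-31T10:00")
```

Keying on both identifier and timestamp distinguishes scans of the same subject taken at different times, as the text requires.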
The image fusion module 20 is configured to fuse the first sequence image and the second sequence image to obtain a third sequence image.
In the embodiment of the invention, the first sequence image and the second sequence image are images obtained by scanning the preset part of the person to be detected through different modalities, so that different scanning characteristics can be presented by the first sequence image and the second sequence image.
In a specific application, the image fusion module 20 fuses the first sequence image and the second sequence image through the physical correlation between them: since the two sequence images have a certain physical correlation, they are fused by a generalized multiplicative combination, yielding a physically combined third sequence image:
f(x, y) = A · x^a · y^b
wherein f(x, y) denotes the combined third sequence image, x the first sequence image, y the second sequence image, A an arbitrary scale factor, and a and b two positive exponents (a > 0, b > 0) satisfying a + b = 3.
In the embodiment of the invention, to determine the optimal parameter combination for combining the first sequence image and the second sequence image, the scale factor A is first set to 1; then, with cross-validation accuracy as the index, the optimal exponents a and b are determined during training of the prediction classification model, so that high detection accuracy is achieved. For the training process of the prediction classification model, refer to the description of the prediction classification module 30; it is not repeated here.
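As an illustration of this fusion step, the sketch below applies the multiplicative rule f(x, y) = A · x^a · y^b over a few candidate (a, b) pairs on the constraint a + b = 3. The NumPy volumes and the candidate grid are illustrative assumptions; in the patent the best pair is the one maximizing cross-validation accuracy during training:

```python
import numpy as np

def fuse(x: np.ndarray, y: np.ndarray, a: float, b: float, A: float = 1.0) -> np.ndarray:
    """Generalized multiplicative fusion f(x, y) = A * x**a * y**b with a + b = 3."""
    assert a > 0 and b > 0 and abs(a + b - 3.0) < 1e-9
    return A * np.power(x, a) * np.power(y, b)

# Two toy volumes standing in for the T2W and FLAIR sequence images.
rng = np.random.default_rng(0)
x = rng.random((4, 4, 4))   # first sequence image
y = rng.random((4, 4, 4))   # second sequence image

# Candidate exponent pairs on the constraint a + b = 3 (A fixed to 1, as in the text).
candidates = [(a, 3.0 - a) for a in (0.5, 1.0, 1.5, 2.0, 2.5)]
fused = {ab: fuse(x, y, *ab) for ab in candidates}
```

Each fused volume has the same shape as its inputs, so it can be fed directly to the 3-D classification model.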
The prediction classification module 30 is configured to perform classification and recognition on the third sequence images based on the trained prediction classification model, and determine a classification result corresponding to the third sequence images.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a prediction classification model according to an embodiment of the present invention. As shown in fig. 2, in the embodiment of the present invention, the prediction classification model includes a first three-dimensional convolutional layer (Conv3D1), a mobile inverted bottleneck three-dimensional convolution block (MBConv3D), a second three-dimensional convolutional layer (Conv3D2), a global average pooling layer (GA Pooling), and a fully connected layer (FC).
In the embodiment of the invention, the third sequence image is input into the trained predictive classification model for feature extraction and classification prediction, so that a classification result corresponding to the third sequence image can be obtained, wherein the classification result comprises that the person to be detected is a TSC patient and the person to be detected is not the TSC patient.
Here, the first sequence image and the second sequence image may be three-dimensional images, and the third sequence image obtained by fusion may also be a three-dimensional image.
It should be noted that the prediction classification module 30 may further include a model construction unit and a model training unit.
The model building unit is used for building a prediction classification model.
The model training unit is used for training the constructed prediction classification model based on training data to obtain the trained prediction classification model.
In an embodiment of the present invention, the prediction classification model constructed by the model construction unit may be an EfficientNet3D model. EfficientNet3D is an image classification network that not only improves classification accuracy but also markedly reduces the number of network parameters, making it suitable for training and classification on small data sets.
Specifically, the prediction classification model may have the structure shown in fig. 2; that is, the constructed prediction classification model includes a first three-dimensional convolutional layer (Conv3D1), a mobile inverted bottleneck three-dimensional convolution block (MBConv3D), a second three-dimensional convolutional layer (Conv3D2), a global average pooling layer (GA Pooling), and a fully connected layer (FC).
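The layer sequence above can be sketched roughly in PyTorch as follows. This is a minimal approximation under stated assumptions (a single MBConv3D block with expansion factor 4 and SiLU activations, illustrative channel counts), not the exact EfficientNet3D, which stacks many such blocks with compound scaling:

```python
import torch
import torch.nn as nn

class MBConv3D(nn.Module):
    """Simplified mobile inverted bottleneck 3-D block: 1x1x1 expand,
    3x3x3 depthwise conv, 1x1x1 project, with a residual connection."""
    def __init__(self, channels: int, expand: int = 4):
        super().__init__()
        hidden = channels * expand
        self.block = nn.Sequential(
            nn.Conv3d(channels, hidden, 1, bias=False),                          # expand
            nn.BatchNorm3d(hidden), nn.SiLU(),
            nn.Conv3d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),  # depthwise
            nn.BatchNorm3d(hidden), nn.SiLU(),
            nn.Conv3d(hidden, channels, 1, bias=False),                          # project
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # inverted residual

class PredictClassifier(nn.Module):
    """Conv3D1 -> MBConv3D -> Conv3D2 -> global average pooling -> FC."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv3d(1, 16, 3, stride=2, padding=1)  # first 3-D conv layer
        self.mbconv = MBConv3D(16)
        self.head_conv = nn.Conv3d(16, 64, 1)                 # second 3-D conv layer
        self.pool = nn.AdaptiveAvgPool3d(1)                   # global average pooling
        self.fc = nn.Linear(64, num_classes)                  # fully connected layer

    def forward(self, x):
        x = self.pool(self.head_conv(self.mbconv(self.stem(x))))
        return self.fc(x.flatten(1))

model = PredictClassifier()
logits = model(torch.randn(2, 1, 16, 16, 16))  # (batch, channel, D, H, W)
```

The two output logits correspond to the two classes named later in the text: TSC patient and non-TSC.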
In the embodiment of the present invention, the model training unit may perform pre-training based on a visual database (ImageNet) to obtain a pre-trained predictive classification model, and then adjust parameters of the pre-trained predictive classification model based on a training data set to obtain a trained predictive classification model.
In a particular application, five-fold cross-validation may be employed to evaluate the model.
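Five-fold cross-validation simply partitions the sample indices into five folds and rotates which fold serves as the validation set. A minimal from-scratch sketch (in practice a library routine such as scikit-learn's KFold would typically be used; the shuffling seed is an illustrative assumption):

```python
import numpy as np

def five_fold_indices(n_samples: int, seed: int = 0):
    """Yield (train_idx, val_idx) pairs for 5-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)        # shuffle once, then split
    folds = np.array_split(idx, 5)
    for k in range(5):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, val

splits = list(five_fold_indices(20))
```

Each sample appears in exactly one validation fold, so the five validation accuracies can be averaged into a single model score.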
Specifically, the Adam algorithm is adopted for model training, the learning rate is set to 0.001, the number of epochs is set to 100, and the loss function is the cross-entropy loss:

L = -(1/n) · Σ_{i=1}^{n} [ y_i · log(p_i) + (1 − y_i) · log(1 − p_i) ]

wherein y_i is the label of the i-th sequence image in the training data set, p_i is the probability the model predicts for that image, and n is the number of sequence images in the training data.
The loss function is made to converge by adjusting the parameters and weights of the prediction classification model; the model parameters and weights at that point are taken as the parameters and weights of the prediction classification model, yielding the trained prediction classification model.
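The cross-entropy loss named above can be computed directly. The sketch below assumes the standard binary form with predicted probabilities p_i (an assumption, since the patent's own formula image is not reproduced in the text):

```python
import numpy as np

def binary_cross_entropy(y_true: np.ndarray, y_prob: np.ndarray) -> float:
    """L = -(1/n) * sum(y_i*log(p_i) + (1-y_i)*log(1-p_i))."""
    eps = 1e-12                        # guard against log(0)
    p = np.clip(y_prob, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y = np.array([1, 0, 1, 1])             # 1 = TSC patient, 0 = normal subject
p = np.array([0.9, 0.1, 0.8, 0.7])     # illustrative predicted probabilities
loss = binary_cross_entropy(y, p)
```

Minimizing this quantity with Adam (learning rate 0.001, 100 epochs, as stated above) drives p_i toward the true labels y_i.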
In an embodiment of the present invention, the model training unit is further configured to obtain a training data set.
In an embodiment of the present invention, the training data set comprises sequence images of patients (TSC patients whose tuber lesions are visible on MRI) and sequence images of normal subjects; in either case these may be T2W sequence images, FLAIR sequence images, or sequence images obtained by fusing T2W and FLAIR sequence images.
In this case, the T2W and FLAIR sequence images may be acquired first, after which the skull, which is irrelevant to the lesions, is removed from the sequence images with the deep learning brain-extraction tool HD-BET, facilitating lesion detection and classification. Because sequence images of TSC patients are difficult to collect, the embodiment of the present invention uses three-dimensional sequence images (three-dimensional T2W and three-dimensional FLAIR sequence images) and expands the training data set in various ways, such as cropping, to increase its data volume. Specifically, after the three-dimensional sequence images are acquired, the data set is expanded by image cropping, rotation, flipping, and the like, which effectively improves the robustness of the trained prediction classification model.
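The cropping, rotation, and flipping augmentations described above can be sketched as follows; the crop fraction (3/4) and the 90° in-plane rotation are illustrative assumptions, not parameters stated in the patent:

```python
import numpy as np

def augment(volume: np.ndarray, seed: int = 0) -> list:
    """Expand one 3-D sequence image into cropped, rotated, and flipped variants."""
    rng = np.random.default_rng(seed)
    d, h, w = volume.shape
    i, j, k = (int(rng.integers(0, s // 4 + 1)) for s in (d, h, w))
    crop = volume[i:i + 3 * d // 4, j:j + 3 * h // 4, k:k + 3 * w // 4]  # random crop
    rot = np.rot90(volume, k=1, axes=(1, 2))   # 90-degree in-plane rotation
    flip = np.flip(volume, axis=2)             # left-right flip
    return [crop, rot, flip]

vol = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)
variants = augment(vol)
```

Applying such transforms to every training volume multiplies the effective data set size, which is what makes a small TSC cohort usable for training.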
Thus, by exploiting the physical characteristics of magnetic resonance sequence images of different modalities, the image classification system provided by the embodiment of the invention improves the visual salience of the lesion while acquiring sequence images in only two different modalities and fusing them; feature extraction and classification prediction by the prediction classification model then realize the diagnosis of tuberous sclerosis, effectively improving diagnostic efficiency as well as classification accuracy.
Fig. 3 is a schematic structural diagram of a terminal device according to another embodiment of the present invention. As shown in fig. 3, the terminal device 3 provided in this embodiment includes: a processor 30, a memory 31, and a computer program 32, such as an image classification program, stored in the memory 31 and executable on the processor 30. The processor 30, when executing the computer program 32, implements the method steps shown in fig. 4:
s11: a first sequence of images and a second sequence of images are acquired.
S12: and fusing the first sequence image and the second sequence image to obtain a third sequence image.
S13: and carrying out classification recognition on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
In an embodiment of the present invention, the processor 30 further implements the following steps when executing the computer program 32:
constructing a prediction classification model;
and training the constructed prediction classification model based on the training data to obtain the trained prediction classification model.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The processor 30, when executing the computer program 32, implements the functions of each module/unit in the image classification system embodiments described above, such as the functions of the modules 10-30 shown in fig. 1.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 32 in the terminal device 3. For example, the computer program 32 may be divided into a first obtaining unit and a second obtaining unit, and the specific functions of each unit refer to the related description in the embodiment corresponding to fig. 1, which is not described herein again.
The terminal device may include, but is not limited to, a processor 30, a memory 31. It will be understood by those skilled in the art that fig. 3 is only an example of the terminal device 3, and does not constitute a limitation to the terminal device 3, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device may also include an input-output device, a network access device, a bus, etc.
The Processor 30 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 31 may be an internal storage unit of the terminal device 3, such as a hard disk or a memory of the terminal device 3. The memory 31 may also be an external storage device of the terminal device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the terminal device 3. The memory 31 is used for storing the computer program and other programs and data required by the terminal device. The memory 31 may also be used to temporarily store data that has been output or is to be output.
An embodiment of the invention also provides a computer-readable storage medium. Referring to fig. 5, which is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention, a computer program 51 is stored in the computer-readable storage medium 5, and the computer program 51, when executed by a processor, implements the following steps:
acquiring a first sequence of images and a second sequence of images;
fusing the first sequence image and the second sequence image to obtain a third sequence image;
and carrying out classification and identification on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
An embodiment of the invention further provides a computer program product which, when run on a terminal device, causes the terminal device to carry out the following steps:
acquiring a first sequence of images and a second sequence of images;
fusing the first sequence image and the second sequence image to obtain a third sequence image;
and carrying out classification and identification on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
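The three steps above can be sketched end-to-end. The patent does not fix a concrete fusion rule, so a voxel-wise weighted average of two co-registered, intensity-normalized volumes is assumed here purely for illustration; the function name `fuse_sequences` and the weight `alpha` are hypothetical:

```python
import numpy as np

def fuse_sequences(seq_a, seq_b, alpha=0.5):
    """Voxel-wise weighted fusion of two co-registered MR sequence volumes.

    Each volume is first rescaled to [0, 1]; `alpha` weights the first
    sequence against the second (an assumption, not the patent's rule).
    """
    a = (seq_a - seq_a.min()) / (np.ptp(seq_a) + 1e-8)
    b = (seq_b - seq_b.min()) / (np.ptp(seq_b) + 1e-8)
    return alpha * a + (1.0 - alpha) * b

# Two hypothetical 3-D sequences (e.g. volumes from two MR modalities)
rng = np.random.default_rng(0)
first = rng.random((8, 32, 32))
second = rng.random((8, 32, 32))
fused = fuse_sequences(first, second)
print(fused.shape)  # (8, 32, 32)
```

The fused volume keeps the shape of its inputs, so it can be fed directly to a three-dimensional classification network in the final step.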
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is merely used as an example, and in practical applications, the foregoing function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the terminal device is divided into different functional units or modules to perform all or part of the above-described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and parts that are not described or illustrated in a certain embodiment may refer to the description of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An image classification system, comprising:
the image acquisition module is used for acquiring a first sequence image and a second sequence image, wherein the first sequence image and the second sequence image are magnetic resonance sequence images acquired by different modalities;
the image fusion module is used for fusing the first sequence image and the second sequence image to obtain a third sequence image;
and the prediction classification module is used for carrying out classification recognition on the third sequence images based on the trained prediction classification model and determining a classification result corresponding to the third sequence images.
2. The image classification system of claim 1, wherein the image fusion module fuses the first sequence of images and the second sequence of images by a physical association of the first sequence of images and the second sequence of images.
3. The image classification system of claim 1, wherein the prediction classification model includes a first three-dimensional convolution layer, a mobile inverted bottleneck (MBConv) three-dimensional convolution block, a second three-dimensional convolution layer, a global average pooling layer, and a fully connected layer.
4. The image classification system of claim 1, wherein the prediction classification module comprises:
the model construction unit is used for constructing a prediction classification model;
and the model training unit is used for training the constructed prediction classification model based on the training data to obtain the trained prediction classification model.
5. The image classification system of claim 4, wherein the model training unit is specifically configured to: and pre-training the constructed prediction classification model based on the visual database to obtain a pre-trained prediction classification model, and adjusting parameters of the pre-trained prediction classification model based on the training data set to obtain the trained prediction classification model.
6. The image classification system of claim 4, wherein the model training unit is further configured to obtain a training data set, the training data set including sequence images of patients and sequence images of normal subjects.
7. The image classification system according to any of claims 1 to 6, characterized in that the first and second sequence of images are both three-dimensional images.
8. A terminal device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a first sequence of images and a second sequence of images; the first sequence image and the second sequence image are magnetic resonance sequence images acquired by different modalities;
fusing the first sequence image and the second sequence image to obtain a third sequence image;
and carrying out classification recognition on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
9. The terminal device of claim 8, wherein the processor, when executing the computer program, further performs the steps of:
constructing a prediction classification model;
and training the prediction classification model based on training data to obtain the trained prediction classification model.
10. A computer-readable storage medium storing a computer program, the computer program when executed by a processor implementing the steps of:
acquiring a first sequence of images and a second sequence of images; the first sequence image and the second sequence image are magnetic resonance sequence images acquired by different modalities;
fusing the first sequence image and the second sequence image to obtain a third sequence image;
and carrying out classification recognition on the third sequence images based on the trained prediction classification model, and determining a classification result corresponding to the third sequence images.
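The model layout recited in claim 3 — a first three-dimensional convolution layer, a mobile inverted bottleneck (MBConv) block in 3-D (rendered by the machine translation as a "moving flipped residual neck" block), a second three-dimensional convolution layer, global average pooling, and a fully connected layer — can be sketched in PyTorch. Channel widths, kernel sizes, and the expansion ratio below are illustrative assumptions, not values fixed by the patent:

```python
import torch
import torch.nn as nn

class MBConv3d(nn.Module):
    """Mobile inverted bottleneck in 3-D: expand -> depthwise -> project, with residual."""
    def __init__(self, ch, expand=4):
        super().__init__()
        mid = ch * expand
        self.block = nn.Sequential(
            nn.Conv3d(ch, mid, 1, bias=False),   # 1x1x1 expansion
            nn.BatchNorm3d(mid), nn.SiLU(),
            nn.Conv3d(mid, mid, 3, padding=1, groups=mid, bias=False),  # depthwise 3x3x3
            nn.BatchNorm3d(mid), nn.SiLU(),
            nn.Conv3d(mid, ch, 1, bias=False),   # 1x1x1 projection
            nn.BatchNorm3d(ch),
        )
    def forward(self, x):
        return x + self.block(x)                 # inverted residual connection

class Classifier3d(nn.Module):
    def __init__(self, in_ch=1, width=16, n_classes=2):
        super().__init__()
        self.stem = nn.Conv3d(in_ch, width, 3, stride=2, padding=1)  # first 3-D conv layer
        self.mbconv = MBConv3d(width)                                # MBConv 3-D block
        self.head_conv = nn.Conv3d(width, 4 * width, 1)              # second 3-D conv layer
        self.pool = nn.AdaptiveAvgPool3d(1)                          # global average pooling
        self.fc = nn.Linear(4 * width, n_classes)                    # fully connected layer
    def forward(self, x):
        x = self.head_conv(self.mbconv(self.stem(x)))
        x = self.pool(x).flatten(1)
        return self.fc(x)

# One fused 3-D volume as input: (batch, channels, depth, height, width)
logits = Classifier3d()(torch.randn(1, 1, 16, 32, 32))
print(logits.shape)  # torch.Size([1, 2])
```

A single MBConv block is shown for brevity; an EfficientNet-style network such as the 3-D EfficientNet in the cited DriftNet reference would stack several such blocks with varying widths and strides.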
CN202111677051.4A 2021-12-31 2021-12-31 Image classification system and terminal equipment Pending CN114332947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111677051.4A CN114332947A (en) 2021-12-31 2021-12-31 Image classification system and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111677051.4A CN114332947A (en) 2021-12-31 2021-12-31 Image classification system and terminal equipment

Publications (1)

Publication Number Publication Date
CN114332947A (en) 2022-04-12

Family

ID=81022968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111677051.4A Pending CN114332947A (en) 2021-12-31 2021-12-31 Image classification system and terminal equipment

Country Status (1)

Country Link
CN (1) CN114332947A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117349714A (en) * 2023-12-06 2024-01-05 中南大学 Classification method, system, equipment and medium for medical image of Alzheimer disease

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190683A (en) * 2018-08-14 2019-01-11 电子科技大学 A kind of classification method based on attention mechanism and bimodal image
CN110544252A (en) * 2019-09-05 2019-12-06 重庆邮电大学 parkinson's disease auxiliary diagnosis system based on multi-mode magnetic resonance brain image
CN110992338A (en) * 2019-11-28 2020-04-10 华中科技大学 Primary stove transfer auxiliary diagnosis system
CN111856362A (en) * 2019-04-24 2020-10-30 深圳先进技术研究院 Magnetic resonance imaging method, device, system and storage medium
CN112465058A (en) * 2020-12-07 2021-03-09 中国计量大学 Multi-modal medical image classification method under improved GoogLeNet neural network
CN113030817A (en) * 2021-03-02 2021-06-25 深圳市儿童医院 Magnetic resonance imaging method, equipment and storage medium
CN113191393A (en) * 2021-04-07 2021-07-30 山东师范大学 Contrast-enhanced energy spectrum mammography classification method and system based on multi-modal fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALAM NOOR ET AL.: "DriftNet: Aggressive Driving Behavior Classification using 3D EfficientNet Architecture", 《ARXIV》 *
XIYUE WANG ET AL.: "Automatic Glioma Grading Based on Two-Stage Networks by Integrating Pathology and MRI Images", 《BRAINLES 2020》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117349714A (en) * 2023-12-06 2024-01-05 中南大学 Classification method, system, equipment and medium for medical image of Alzheimer disease
CN117349714B (en) * 2023-12-06 2024-02-13 中南大学 Classification method, system, equipment and medium for medical image of Alzheimer disease

Similar Documents

Publication Publication Date Title
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
Pezzotti et al. An adaptive intelligence algorithm for undersampled knee MRI reconstruction
US9980704B2 (en) Non-invasive image analysis techniques for diagnosing diseases
CN109741346A (en) Area-of-interest exacting method, device, equipment and storage medium
CN114376558B (en) Brain atlas individuation method and system based on magnetic resonance and twin map neural network
CN110506278A (en) Target detection in latent space
CN110444277B (en) Multi-mode brain MRI image bidirectional conversion method based on multi-generation and multi-confrontation
US10481233B2 (en) Edema invariant tractography
US11139068B2 (en) Methods, systems, and computer readable media for smart image protocoling
US20130141092A1 (en) Method of optimizing magnetic resonance scanning parameters
Sperber et al. A network underlying human higher-order motor control: Insights from machine learning-based lesion-behaviour mapping in apraxia of pantomime
CN110246109A (en) Merge analysis system, method, apparatus and the medium of CT images and customized information
DE102018210973A1 (en) Method for monitoring a patient during a medical imaging examination, in particular a magnetic resonance examination
CN110148195A (en) A kind of magnetic resonance image generation method, system, terminal and storage medium
CN116784820A (en) Brain function network construction method and system based on seed point connection
CN114332947A (en) Image classification system and terminal equipment
CN111951219B (en) Thyroid eye disease screening method, system and equipment based on orbit CT image
KR102103281B1 (en) Ai based assistance diagnosis system for diagnosing cerebrovascular disease
CN111540442A (en) Medical image diagnosis scheduling management system based on computer vision
CN116778021A (en) Medical image generation method, device, electronic equipment and storage medium
DE102008002951A1 (en) Method and device for evaluating images
US11756200B2 (en) Medical imaging with functional architecture tracking
Zhang et al. Medical image fusion based a densely connected convolutional networks
CN112750097B (en) Multi-modal medical image fusion based on multi-CNN combination and fuzzy neural network
CN107427255A (en) The method that the generation time point of infarct area is estimated based on brain image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220412
