CN115409843A - Brain nerve image feature extraction method based on scale equalization coupling convolution architecture - Google Patents

Brain nerve image feature extraction method based on scale equalization coupling convolution architecture

Info

Publication number
CN115409843A
Authority
CN
China
Prior art keywords
image
scale
brain
convolution
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211359224.2A
Other languages
Chinese (zh)
Other versions
CN115409843B (en)
Inventor
李奇
刘静远
武岩
宋雨
高宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202211359224.2A priority Critical patent/CN115409843B/en
Publication of CN115409843A publication Critical patent/CN115409843A/en
Application granted granted Critical
Publication of CN115409843B publication Critical patent/CN115409843B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V 10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/52 Scale-space analysis, e.g. wavelet analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention relates to a brain nerve image feature extraction method based on a scale equalization coupling convolution architecture, belonging to the technical field of image feature extraction and comprising the following steps: the sMRI image and the fMRI image are preprocessed in different ways to obtain a 3D brain image and a 2D brain function network image; a scale-equalizing pyramid convolution network based on dilated convolution performs convolution feature extraction on the 3D brain image and the 2D brain function network image respectively, yielding time-space scale-equalized features; random-matching coupling calculation is then performed on these features to obtain a coupling matrix, which is input into a classifier as the extracted fusion feature. The feature extraction method not only fully accounts for the multi-scale semantic relevance within a single modality but also extracts the coupling features between modalities, further improving the accuracy and robustness of the model; it is applicable to feature extraction for a variety of modalities and is highly extensible.

Description

Brain nerve image feature extraction method based on scale equalization coupling convolution architecture
Technical Field
The invention belongs to the technical field of image feature extraction, and particularly relates to a brain nerve image feature extraction method based on a scale equalization coupling convolution architecture.
Background
Deep learning far exceeds humans in integrating information and mining its internal associations, and has therefore been widely applied in the medical field. Because multi-modal data can characterize a patient's clinical status from multiple directions, it has been widely and effectively applied to the diagnosis of Alzheimer's Disease (AD). However, exploiting multi-modal data also poses great challenges: such data are difficult to obtain, making it hard to assemble a dataset large enough to train a model, and the parameter count of multi-modal models is huge, making training difficult and demanding very strong complex-feature extraction capability from the model. Observing atrophy of a patient's brain regions and changes in regional activity through a combined time-series and structural approach, using structural magnetic resonance imaging (sMRI) and functional magnetic resonance imaging (fMRI), is a common means of studying AD pathology; the two modalities complement each other and characterize the multi-scale features of AD in the brain from both temporal and spatial angles.
In early-stage AD patients, only subtle microstructural changes occur in the brain before brain tissue atrophies noticeably, so classifying AD brain nerve images is harder than classifying conventional images. Current research shows that convolution operations on individual scale features are performed independently, and the loss of semantic information caused by the lack of relevance between scales limits classification performance. Meanwhile, common multi-modal feature fusion strategies include parallel input, channel concatenation, and element-wise fusion at corresponding positions. However, these operations merely stack the data of the respective modalities together and do not mine the correlations between modalities, which is another major reason why the accuracy of AD classification with multi-modal data does not yet meet the requirements of clinical application.
At present, the multi-scale feature extraction methods used by AD multi-modal diagnosis models do not consider the semantic relevance between scales, so semantics across different scales are lost and the potential accuracy gain is not realized; at the same time, AD multi-modal data are merely stacked together, the coupling relations between modalities are not further mined, the differences between multi-modal features are not obvious, and the multi-modal diagnosis accuracy is therefore unsatisfactory. A feature extraction method is thus urgently needed that allows the model both to account for the multi-scale semantic relevance within each modality and to fully mine the coupling relations between modalities.
Disclosure of Invention
In order to solve the problems that the multi-scale feature extraction methods used by existing AD multi-modal diagnosis models do not consider the semantic relevance between scales and do not further mine the coupling relations between modalities, the invention provides a brain nerve image feature extraction method based on a scale equalization coupling convolution architecture.
To achieve this purpose, the invention adopts the following technical scheme:
a brain nerve image feature extraction method based on a scale equalization coupling convolution architecture comprises the following steps:
acquiring a cranial nerve image comprising an sMRI image and an fMRI image, and respectively performing different pre-processing on the sMRI image and the fMRI image to obtain a 3D brain image and a 2D brain function network image;
performing convolution feature extraction operation on the 3D brain image and the 2D brain function network image respectively by using a scale equalization pyramid convolution network based on expansion convolution to obtain time-space scale equalization features of the 3D brain image and the 2D brain function network image;
and step three, performing random matching coupling calculation on the time-space scale balance characteristics obtained in the step two to obtain a coupling matrix, wherein the coupling matrix is used as the extracted fusion characteristics to be input into a classifier.
Compared with the prior art, the invention has the following beneficial effects:
(1) Scale-equalizing convolution fully accounts for the semantic relevance among multiple scales during feature extraction, so a model adopting this feature extraction method can fully learn the information features of brain structure and time series, and the loss of relevance between convolution receptive fields is reduced;
(2) The feature coupling operation performs random-matching coupling calculation on the time-space scale-equalized features of the sMRI image and the fMRI image to obtain a coupling matrix, so data of different modalities can be mapped into the same semantic space and the perturbation of the features is enhanced; in addition, unmatched sMRI and fMRI data can be fused through the coupling features, which reduces the cost of data fusion, achieves data augmentation, and improves the generalization capability of the model.
Drawings
Fig. 1 is a schematic flow chart of the brain nerve image feature extraction method based on a scale equalization coupling convolution architecture according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In one embodiment, as shown in Fig. 1, the present invention provides a brain nerve image feature extraction method implemented on a scale equalization coupling convolution architecture, which mainly comprises a preprocessing step, a feature extraction step, and a feature coupling step, each described in detail below.
Step one: a brain nerve image obtained by magnetic resonance imaging, comprising an sMRI image and an fMRI image, is acquired; the sMRI image and the fMRI image are then preprocessed in different ways to remove interference factors, yielding a 3D brain image and a 2D brain function network image, respectively.
Further, preprocessing the sMRI image in step one includes the following steps:
Preprocessing of the sMRI images is done using SPM12 software. First, the sMRI image is imported into SPM12 and images with excessive head motion are screened out by Realign head-motion correction, the threshold for excessive head motion being 2 mm of translation and 2 degrees of rotation; the head-motion-corrected images are then normalized to MNI space; finally, after skull stripping and cerebellum removal, a 3D brain image of size 121 × 145 × 121 is obtained.
Further, preprocessing the fMRI image in step one includes the following steps:
Preprocessing of the fMRI images is also done using SPM12 software. First, the fMRI image is imported into SPM12 and a time-point removal operation is performed, for example removing the first 10 time points, which are unstable due to scanner factors; slice-timing correction is then applied so that the acquisition times of all slices within one scan period are consistent; head-motion correction follows, evaluating the subject's head motion and correcting the misalignment between images at different time points caused by head motion; the individual echo-planar images (EPI) are then registered directly to a standard EPI template; finally, a brain network is constructed by dividing the fMRI data into 90 ROI nodes with the AAL90 template to build the brain function network.
Step two: convolution feature extraction is performed on the 3D brain image and the 2D brain function network image respectively, using the dilated-convolution-based scale-equalizing pyramid convolution network, to obtain the time-space scale-equalized features of the 3D brain image and the 2D brain function network image.
The method uses convolution kernels of different sizes to perform convolution feature extraction, so that time-space features of the brain nerve images are extracted under different receptive fields. Meanwhile, to account for the feature relevance across different receptive fields, the invention follows the idea of scale equalization and performs weighted summation over the features of adjacent receptive-field scales, so that the feature output by each scale incorporates the semantic relevance of its adjacent scales, completing the time-space scale-equalized feature extraction.
When the scale-equalizing pyramid convolution network performs convolution feature extraction, multi-scale features are not extracted with the traditional 3D convolution strategy of 3 × 3 × 3, 5 × 5 × 5, and 7 × 7 × 7 kernels, because that strategy greatly increases the number of parameters of the whole network and makes model training difficult. To reduce the number of parameters of the whole model, the invention uses dilated convolution instead of the traditional convolution strategy. The advantage of dilated convolution is that the receptive field of a convolution kernel can be enlarged while the number of parameters stays unchanged, so each convolution output covers a larger range of information. Using a 3 × 3 × 3 dilated convolution kernel with dilation = 3, a receptive field corresponding to a 7 × 7 × 7 kernel is obtained with only 7.9% of the parameters.
The equivalent receptive field RF of a dilated convolution kernel is calculated as follows:

RF = k + (k − 1) × (d − 1)

where k denotes the size of the convolution kernel and d denotes the dilation (expansion) coefficient.
After the convolution strategy of each scale is determined, the multi-scale features need to be further fused. The specific process is as follows:
order to
Figure DEST_PATH_IMAGE005
Representing a data set comprising N samples,
Figure 968455DEST_PATH_IMAGE006
indicating the label to which the data corresponds.
The second step specifically comprises the following steps:
First, to obtain multi-scale input data, each input x fed into the dilated-convolution-based scale-equalizing pyramid convolution network is divided into different scales s (s = 0, 1, …, S − 1). Then, according to the divided scales, the corresponding equalized dilated convolution strategy is selected and the output features of the different scales are computed; the output feature of each scale equals the weighted sum of the convolution output of that scale and the convolution outputs of its adjacent scales, with the following formula:
F_s = w_{s-1}·f_{s-1}(x) + w_s·f_s(x) + w_{s+1}·f_{s+1}(x)

where w_j denotes the weight of scale j and f_j(·) denotes the convolution operation of scale j; that is, in the formula the output feature of every scale equals the weighted sum of its own convolution output and the convolution outputs of its adjacent scales.
The sum of the output features of all scales is the overall multi-scale output feature F, calculated as follows:

F = Σ_s F_s
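A minimal PyTorch sketch of this scale-equalized fusion is given below. The three dilation rates, the learnable scalar weights, and the module name are illustrative assumptions rather than the patent's exact configuration; the point is that each scale's output is the weighted sum of its own dilated convolution and those of its adjacent scales, and the per-scale outputs are summed into the overall feature F.

```python
import torch
import torch.nn as nn

class ScaleEqualizedConv3d(nn.Module):
    """Sketch of scale-equalized multi-scale fusion with dilated 3D convolutions.

    Each scale applies a 3x3x3 convolution with its own dilation rate; the output
    feature F_s of scale s is the weighted sum of the convolution outputs f_j(x)
    of scale s and its adjacent scales, and the overall feature is F = sum_s F_s.
    """

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 3)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv3d(in_ch, out_ch, kernel_size=3, dilation=d, padding=d) for d in dilations]
        )
        self.weights = nn.Parameter(torch.ones(len(dilations)))  # one weight w_j per scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = [conv(x) for conv in self.convs]                  # f_j(x), one per scale
        per_scale = []
        for s in range(len(f)):
            neighbours = [j for j in (s - 1, s, s + 1) if 0 <= j < len(f)]
            per_scale.append(sum(self.weights[j] * f[j] for j in neighbours))  # F_s
        return sum(per_scale)                                 # F = sum_s F_s

# Example: a single-channel 3D brain image patch.
block = ScaleEqualizedConv3d(1, 8)
x = torch.randn(1, 1, 32, 32, 32)
print(block(x).shape)  # torch.Size([1, 8, 32, 32, 32])
```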
and finally, obtaining the time-space scale balance characteristics of the input data after fusing the multi-scale characteristics by using the characteristic extraction network.
Specifically, the overall multi-scale output feature F is input into the feature extraction network for further feature extraction; for example, the temporal and spatial features can be extracted with ResNet and the resulting time-space features reshaped into a 64 × 32 feature map. The feature extraction network can be implemented with an existing ResNet, or ResNet can be replaced with either of the two residual structures ResNeXt or Res2Net.
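For the 2D fMRI branch, the reshaping step can be pictured with the hedged sketch below. It assumes the 2048-dimensional pooled feature of a standard torchvision ResNet-50 is what gets reshaped (64 × 32 = 2048 makes this a natural reading, but the patent does not state it), and it applies only to the 2D branch; the 3D sMRI branch would need a 3D residual network instead.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50(weights=None)     # standard 2D ResNet-50 trunk
backbone.fc = nn.Identity()           # keep the 2048-dimensional pooled feature
backbone.eval()

x = torch.randn(1, 3, 224, 224)       # brain function network image resized to 3 x 224 x 224 (assumption)
feat = backbone(x)                    # shape (1, 2048)
feat_map = feat.view(-1, 64, 32)      # reshape into the 64 x 32 feature map
print(feat_map.shape)                 # torch.Size([1, 64, 32])
```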
Step three: random-matching coupling calculation is performed on the time-space scale-equalized features obtained in step two to obtain a coupling matrix, and the coupling matrix is input into a classifier as the extracted fusion feature.
After the feature extraction operation, the time-space scale-equalized features of the sMRI image and the fMRI image are obtained. To let the model fully learn the temporal and spatial features while also accounting for the coupling relation between them, the coupling matrix obtained by random-matching coupling calculation on the two sets of time-space scale-equalized features is input into a classifier as the fusion feature; the classifier classifies the fusion feature and outputs the classification result.
For the calculation of the coupling matrix, cosine similarity can be chosen to compute the coupling matrix of the time-space scale-equalized features; the cosine similarity can also be replaced by any one of the Spearman correlation coefficient, the Kendall rank correlation coefficient, and the Pearson correlation coefficient.
When cosine similarity is chosen to calculate the coupling matrix of the time-space scale-equalized features, the calculation formula is as follows:

C(i, j) = Σ_k A(i, k)·B(j, k) / ( √(Σ_k A(i, k)²) · √(Σ_k B(j, k)²) ),  k = 1, …, n

where A and B respectively denote the feature maps of sMRI and fMRI after scale-equalizing pyramid convolution, and n denotes the number of columns of the feature maps.
The time-space scale-equalized features are fused by random matching, so that every structural feature is paired with a time-series feature and all possible structure-time-series pairings are taken into account. At the same time the data volume is amplified, so that the label domain of the coupled data domain can be fully characterized; for example, data acquired from the same person in different years can be taken into the coupled data domain, considering as many time-space feature combinations as possible. By computing the time-space scale-equalized feature coupling matrix and projecting it into the label semantic space, the problem that fusion is difficult because the data dimensions differ is solved.
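As an illustration of the random-matching idea (the exact pairing rule, e.g. whether pairing is restricted to samples of the same class, is not spelled out here and is left as an assumption), the sketch below pairs each sMRI feature map with several randomly drawn fMRI feature maps and computes one coupling matrix per pair, which amplifies the number of fused samples.

```python
import numpy as np

def random_matching_fusion(smri_feats, fmri_feats, pairs_per_sample=3, seed=0):
    """Randomly pair sMRI and fMRI feature maps and compute one coupling matrix per pair.

    smri_feats, fmri_feats: lists of 64 x 32 feature maps; the scans need not come
    from matched acquisitions, which is what lets unpaired data be fused and the
    fused data volume grow.
    """
    rng = np.random.default_rng(seed)
    fused = []
    for a in smri_feats:
        for idx in rng.integers(0, len(fmri_feats), size=pairs_per_sample):
            b = fmri_feats[idx]
            # cosine-similarity coupling matrix, as in the previous sketch
            num = a @ b.T
            den = np.linalg.norm(a, axis=1, keepdims=True) * np.linalg.norm(b, axis=1, keepdims=True).T
            fused.append(num / (den + 1e-12))
    return fused  # list of 64 x 64 coupling matrices fed to the classifier

smri_feats = [np.random.randn(64, 32) for _ in range(5)]
fmri_feats = [np.random.randn(64, 32) for _ in range(4)]
print(len(random_matching_fusion(smri_feats, fmri_feats)))  # 15 fused samples
```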
The feature extraction method of the invention uses time-space scale-equalizing pyramid convolution to learn the semantic relations among the multi-scale features of MRI brain nerve images and uses the coupling relations among modal features to fuse the multi-modal data more efficiently, with the following beneficial effects:
(1) Scale-equalizing convolution fully accounts for the semantic relevance among multiple scales during feature extraction, so a model adopting this feature extraction method can fully learn the information features of brain structure and time series, and the loss of relevance between convolution receptive fields is reduced;
(2) The feature coupling operation performs random-matching coupling calculation on the time-space scale-equalized features of the sMRI image and the fMRI image to obtain a coupling matrix, so data of different modalities can be mapped into the same semantic space and the perturbation of the features is enhanced; in addition, unmatched sMRI and fMRI data can be fused through the coupling features, which reduces the cost of data fusion, achieves data augmentation, and improves the generalization capability of the model.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and these changes and modifications are all within the scope of the invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (6)

1. A brain nerve image feature extraction method based on a scale equalization coupling convolution architecture, characterized by comprising the following steps:
step one, acquiring a brain nerve image comprising an sMRI image and an fMRI image, and preprocessing the sMRI image and the fMRI image in different ways to obtain a 3D brain image and a 2D brain function network image;
step two, performing convolution feature extraction on the 3D brain image and the 2D brain function network image respectively by using a scale-equalizing pyramid convolution network based on dilated convolution, to obtain time-space scale-equalized features of the 3D brain image and the 2D brain function network image;
and step three, performing random-matching coupling calculation on the time-space scale-equalized features obtained in step two to obtain a coupling matrix, the coupling matrix being input into a classifier as the extracted fusion feature.
2. The brain nerve image feature extraction method based on a scale equalization coupling convolution architecture as claimed in claim 1, wherein preprocessing the sMRI image in step one comprises the following steps:
importing the sMRI image into SPM12 software, screening out images with excessive head motion by head-motion correction, normalizing the head-motion-corrected images to MNI space, and finally obtaining a 3D brain image after skull stripping and cerebellum removal.
3. The brain nerve image feature extraction method based on a scale equalization coupling convolution architecture as claimed in claim 1, wherein preprocessing the fMRI image in step one comprises the following steps:
importing the fMRI image into SPM12 software; first performing a time-point removal operation; then performing slice-timing correction to ensure that the acquisition times of all slices within one scan period are consistent; then performing head-motion correction, evaluating the subject's head motion and correcting the misalignment between images at different time points caused by head motion; then registering the individual echo-planar images directly to a standard EPI template; and finally constructing a brain network by dividing the fMRI data into 90 ROI nodes with the AAL90 template to build the brain function network.
4. The brain nerve image feature extraction method based on a scale equalization coupling convolution architecture as claimed in claim 1, wherein step two comprises the following steps:
dividing each input datum fed into the dilated-convolution-based scale-equalizing pyramid convolution network into different scales;
selecting a corresponding equalized dilated convolution strategy according to the divided scales and computing the output features of the different scales, wherein the output feature of each scale equals the weighted sum of the output feature of that scale and the output features of its adjacent scales, and the sum of the output features of all scales is the overall multi-scale output feature;
and performing feature extraction on the overall multi-scale output feature with a feature extraction network to finally obtain the time-space scale-equalized features of the input data.
5. The brain nerve image feature extraction method based on a scale equalization coupling convolution architecture as claimed in claim 4, wherein the feature extraction network adopts a ResNet, ResNeXt, or Res2Net residual network.
6. The brain nerve image feature extraction method based on a scale equalization coupling convolution architecture as claimed in claim 1, wherein in step three the coupling matrix is calculated using any one of cosine similarity, the Spearman correlation coefficient, the Kendall rank correlation coefficient, and the Pearson correlation coefficient.
CN202211359224.2A 2022-11-02 2022-11-02 Brain nerve image feature extraction method based on scale equalization coupling convolution architecture Active CN115409843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211359224.2A CN115409843B (en) 2022-11-02 2022-11-02 Brain nerve image feature extraction method based on scale equalization coupling convolution architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211359224.2A CN115409843B (en) 2022-11-02 2022-11-02 Brain nerve image feature extraction method based on scale equalization coupling convolution architecture

Publications (2)

Publication Number Publication Date
CN115409843A (en) 2022-11-29
CN115409843B CN115409843B (en) 2023-04-07

Family

ID=84169248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211359224.2A Active CN115409843B (en) 2022-11-02 2022-11-02 Brain nerve image feature extraction method based on scale equalization coupling convolution architecture

Country Status (1)

Country Link
CN (1) CN115409843B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315425A (en) * 2023-10-12 2023-12-29 无锡市第五人民医院 Fusion method and system of multi-mode magnetic resonance images

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346530A (en) * 2014-10-29 2015-02-11 中国科学院深圳先进技术研究院 Method and system for extracting abnormal parameters of brain
CN110992351A (en) * 2019-12-12 2020-04-10 南京邮电大学 sMRI image classification method and device based on multi-input convolutional neural network
CN111402129A (en) * 2020-02-21 2020-07-10 西安交通大学 Binocular stereo matching method based on joint up-sampling convolutional neural network
CN112164082A (en) * 2020-10-09 2021-01-01 深圳市铱硙医疗科技有限公司 Method for segmenting multi-modal MR brain image based on 3D convolutional neural network
CN112837274A (en) * 2021-01-13 2021-05-25 南京工业大学 Classification and identification method based on multi-mode multi-site data fusion
CN113040715A (en) * 2021-03-09 2021-06-29 北京工业大学 Human brain function network classification method based on convolutional neural network
CN114119702A (en) * 2021-11-30 2022-03-01 广州科技职业技术大学 Stereo matching method based on convolution network and related device
CN114242236A (en) * 2021-12-18 2022-03-25 深圳先进技术研究院 Structure-function brain network bidirectional mapping model construction method and brain network bidirectional mapping model
CN115191946A (en) * 2022-07-18 2022-10-18 山西白求恩医院(山西医学科学院、华中科技大学同济医学院附属同济医院山西医院、山西医科大学第三医院、山西医科大学第三临床医学院) Brain network blind source separation method based on multi-scale convolution self-encoder

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346530A (en) * 2014-10-29 2015-02-11 中国科学院深圳先进技术研究院 Method and system for extracting abnormal parameters of brain
CN110992351A (en) * 2019-12-12 2020-04-10 南京邮电大学 sMRI image classification method and device based on multi-input convolutional neural network
CN111402129A (en) * 2020-02-21 2020-07-10 西安交通大学 Binocular stereo matching method based on joint up-sampling convolutional neural network
CN112164082A (en) * 2020-10-09 2021-01-01 深圳市铱硙医疗科技有限公司 Method for segmenting multi-modal MR brain image based on 3D convolutional neural network
CN112837274A (en) * 2021-01-13 2021-05-25 南京工业大学 Classification and identification method based on multi-mode multi-site data fusion
CN113040715A (en) * 2021-03-09 2021-06-29 北京工业大学 Human brain function network classification method based on convolutional neural network
CN114119702A (en) * 2021-11-30 2022-03-01 广州科技职业技术大学 Stereo matching method based on convolution network and related device
CN114242236A (en) * 2021-12-18 2022-03-25 深圳先进技术研究院 Structure-function brain network bidirectional mapping model construction method and brain network bidirectional mapping model
CN115191946A (en) * 2022-07-18 2022-10-18 山西白求恩医院(山西医学科学院、华中科技大学同济医学院附属同济医院山西医院、山西医科大学第三医院、山西医科大学第三临床医学院) Brain network blind source separation method based on multi-scale convolution self-encoder

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XINJIANG WANG et al.: "Scale-Equalizing Pyramid Convolution for Object Detection", arXiv *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315425A (en) * 2023-10-12 2023-12-29 无锡市第五人民医院 Fusion method and system of multi-mode magnetic resonance images
CN117315425B (en) * 2023-10-12 2024-03-26 无锡市第五人民医院 Fusion method and system of multi-mode magnetic resonance images

Also Published As

Publication number Publication date
CN115409843B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
Cheng et al. Classification of MR brain images by combination of multi-CNNs for AD diagnosis
WO2023077603A1 (en) Prediction system, method and apparatus for abnormal brain connectivity, and readable storage medium
CN113616184B (en) Brain network modeling and individual prediction method based on multi-mode magnetic resonance image
CN112488976B (en) Multi-modal medical image fusion method based on DARTS network
US11615535B2 (en) Systems and methods for image processing
CN112102266A (en) Attention mechanism-based cerebral infarction medical image classification model training method
CN110175998A (en) Breast cancer image-recognizing method, device and medium based on multiple dimensioned deep learning
CN111597946A (en) Processing method of image generator, image generation method and device
Qin et al. Biomechanics-informed neural networks for myocardial motion tracking in MRI
CN113688862B (en) Brain image classification method based on semi-supervised federal learning and terminal equipment
CN115272295A (en) Dynamic brain function network analysis method and system based on time domain-space domain combined state
CN115409843B (en) Brain nerve image feature extraction method based on scale equalization coupling convolution architecture
CN117218453B (en) Incomplete multi-mode medical image learning method
CN115937129B (en) Method and device for processing left and right half brain relations based on multi-mode magnetic resonance image
CN113822323A (en) Brain scanning image identification processing method, device, equipment and storage medium
CN116128876B (en) Medical image classification method and system based on heterogeneous domain
Shah et al. EMED-UNet: an efficient multi-encoder-decoder based UNet for medical image segmentation
Chen et al. Image-level supervised segmentation for human organs with confidence cues
Yang et al. Hierarchical progressive network for multimodal medical image fusion in healthcare systems
CN114266738A (en) Longitudinal analysis method and system for mild brain injury magnetic resonance image data
CN113392938A (en) Classification model training method, Alzheimer disease classification method and device
Karani Tackling Distribution Shifts in Machine Learning-Based Medical Image Analysis
Dheepa et al. An Efficient Encoder-Decoder CNN for Brain Tumor Segmentation in MRI Images
Ye et al. Discriminative multi-task feature selection for multi-modality based AD/MCI classification

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant