CN116994060A - Brain texture analysis method based on LBP extraction and TCNN neural network - Google Patents

Brain texture analysis method based on LBP extraction and TCNN neural network

Info

Publication number
CN116994060A
Authority
CN
China
Prior art keywords
brain
texture
sampling point
magnetic resonance
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311032394.4A
Other languages
Chinese (zh)
Inventor
张小瑞
卢培森
孙伟
张小娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202311032394.4A
Publication of CN116994060A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a brain texture analysis method based on LBP extraction and a TCNN neural network, which comprises: preprocessing MRI images with MIPAV software, registering the preprocessed images onto the standard MNI brain template space, and normalizing them with Min-Max scaling; extracting texture features from the MRI images with a circular LBP method, computing gray values at non-integer sampling points by bilinear interpolation; and inputting the feature-extracted brain textures into a TCNN neural network composed of a Transformer and a CNN to classify the textures. By adopting the new TCNN neural network architecture for texture classification, the method improves the accuracy of brain texture feature extraction and classification, and has practical significance and good prospects.

Description

Brain texture analysis method based on LBP extraction and TCNN neural network
Technical Field
The invention relates to the field of image processing, and in particular to a brain texture analysis method based on LBP extraction and a TCNN neural network.
Background
Texture is an inherent property of object surfaces, and regions in an image often exhibit texture properties. In recent years, with the continuous development of neuroimaging technology and deepening research in brain science, brain texture analysis has become a popular field. Brain texture refers to the internal texture features of brain tissue, including cortical surface morphology and the gray matter-white matter interface. Brain texture analysis aims to explore the differences between the texture features of different brain structures, thereby providing basic support for biological and neuroscience research.
However, conventional brain texture analysis methods suffer from problems such as low processing efficiency, high computational complexity, and incomplete feature extraction. In recent years, deep-learning-based methods have become a research hotspot. Convolutional neural networks (CNNs), an important deep learning method, have achieved significant success in image recognition and classification and have gradually been applied to brain texture analysis; however, the pooling layers of CNNs lose a large amount of valuable information and ignore the correlation between local and global features, reducing detection accuracy. The multi-head attention mechanism of the Transformer gives it excellent global modeling ability and strong multi-modal fusion capability, but it requires large datasets and substantial computing resources and time, so using a Transformer alone cannot achieve the expected effect; a new model architecture is therefore needed to solve these problems. In addition, the local binary pattern (Local Binary Pattern, LBP) feature is widely used in image processing as a common texture descriptor. LBP features are simple to compute and robust to image noise and brightness variation, and thus have application prospects in brain texture analysis; however, the window size of the standard LBP method is fixed (3×3) and independent of image content, so errors can occur when extracting texture primitive features, making it difficult to adapt to textures of different roughness and scale.
Disclosure of Invention
The purpose of the invention is to provide a brain texture analysis method based on LBP extraction and TCNN neural network classification: the existing LBP method is improved for feature extraction, and a new neural network architecture, TCNN, is adopted for texture classification, improving the accuracy of brain texture feature extraction and classification, with practical significance and good prospects.
To achieve this, the invention provides a brain texture analysis method based on LBP extraction and a TCNN neural network, comprising the following steps S1-S5, which complete texture extraction and classification in brain structure magnetic resonance images:
step S1: acquiring brain structure magnetic resonance images (MRI), preprocessing them with MIPAV software, registering the preprocessed brain structure magnetic resonance images onto the standard MNI brain template space to unify their coordinate spaces, and performing normalization;
step S2: constructing a brain texture feature extraction model and extracting texture features from the brain structure magnetic resonance image obtained in step S1 with a circular LBP method; because the sampling points are distributed on a circle, their coordinates are not guaranteed to be integers, so sampling points with integer coordinates are substituted directly into the formula for calculation, while for sampling points with non-integer coordinates the coordinates are rounded up and down and the gray value is calculated by bilinear interpolation;
step S3: forming a TCNN neural network based on a Transformer network and a CNN neural network, and constructing a brain texture classification model that takes the texture-feature-extracted brain structure magnetic resonance image as input and the texture type corresponding to the textures in the image as output, completing texture classification;
step S4: training the brain texture classification model, calculating its loss function from the original brain structure magnetic resonance image and the texture-feature-extracted image, and updating the weight parameters with a back-propagation algorithm;
step S5: repeating step S4 until a preset number of training iterations is reached, completing the training of the brain texture classification model, and applying the trained brain texture classification model to complete texture extraction and classification in brain structure magnetic resonance images.
As a preferred technical scheme of the invention: the specific process of step S1 includes: preprocessing the acquired brain structure magnetic resonance images, including skull stripping, correction, filtering, and image enhancement; after preprocessing, registering the brain structure magnetic resonance images onto the standard MNI brain template space, adjusting and optimizing by selecting a suitable registration algorithm, tuning registration parameters, and adding registration constraints; and, after registration, applying Min-Max normalization to the pixel values of sampling points in the registered brain structure magnetic resonance images, where Min-Max normalization is a linear transformation of the original data that maps the values into [0,1].
As a preferred technical scheme of the invention: the specific steps of step S2 are as follows:
step S2.1: for each sampling point in the brain structure magnetic resonance image obtained in step S1, dividing a circular neighborhood with the sampling point as the central sampling point and R as the radius, selecting P equally spaced points on the circle, and taking the line from the central sampling point to each equally spaced point as a binary coding track, where each sampling point on the track is given by the following formula:
$$x_p = x_c + R\cos\!\left(\frac{2\pi p}{P}\right), \qquad y_p = y_c - R\sin\!\left(\frac{2\pi p}{P}\right)$$

where p represents the p-th sampling point, x_p and y_p are the abscissa and ordinate of the p-th sampling point, and x_c and y_c are the abscissa and ordinate of the central sampling point c;
step S2.2: for sampling points with non-integer coordinates, selecting the two points p_0 and p_1 nearest to the sampling point p on its track, rounding the abscissa and ordinate of p_0 and p_1 down and up, denoted x_0, x_1, y_0, y_1, to obtain four coordinates (x_0, y_0), (x_1, y_0), (x_0, y_1), (x_1, y_1), and then calculating according to the following formula:
$$f(x_p, y_p) \approx f(x_0, y_0)(x_1 - x_p)(y_1 - y_p) + f(x_1, y_0)(x_p - x_0)(y_1 - y_p) + f(x_0, y_1)(x_1 - x_p)(y_p - y_0) + f(x_1, y_1)(x_p - x_0)(y_p - y_0)$$

where f(x_p, y_p) represents the pixel value of the sampling point p;
step S2.3: taking the gray value of the central sampling point as a threshold and comparing the gray value of each sampling point on the track with it to obtain a binary code: a sampling point on the track whose gray value is greater than that of the central sampling point is coded as 1, otherwise as 0; arranging the binary codes in clockwise order to obtain a binary number; and counting the frequency histogram of the binary numbers of all sampling points on the track to obtain the LBP feature vector.
As a preferred technical scheme of the invention: in step S3, a brain texture classification model is constructed based on a transducer network and a CNN neural network, an image is divided into a plurality of patches, i is represented as any one position in the patches, j is represented as any position in a 3×3 neighborhood centered on i, and the attention module introduces a deep convolution represented by the following formula:
$$Y_i = \sum_{j \in \mathcal{L}(i)} w_{i-j} \odot X_j$$

where Y_i is the output at position i, \mathcal{L}(i) is the local neighborhood of position i, w_{i-j} is the weight matrix between position i and position j, and X_j is the input at position j; in contrast, self-attention allows the receptive field to extend beyond the local neighborhood and computes the weights based on pairwise similarities;
the softmax function of the brain texture classification model is as follows:
$$Y_j = \sum_{n \in \mathcal{G}} \frac{\exp\!\left(X_j^{\mathrm{T}} X_n\right)}{\sum_{l \in \mathcal{G}} \exp\!\left(X_j^{\mathrm{T}} X_l\right)}\, X_n$$

where Y_j represents the output at position j, n and l each index a position in the global space \mathcal{G}, X_j is the input at position j, T represents the matrix transpose, X_n is the input at position n, and X_l is the input at position l;
each patch is compared with the other patches in the same image to generate an adaptive attention matrix, and a global static convolution kernel is added to the adaptive attention matrix after or before the softmax normalization, specifically as follows:
$$Y_j^{\mathrm{post}} = \sum_{n \in \mathcal{G}} \left( \frac{\exp\!\left(X_j^{\mathrm{T}} X_n\right)}{\sum_{l \in \mathcal{G}} \exp\!\left(X_j^{\mathrm{T}} X_l\right)} + w_{j-n} \right) X_n$$

where Y_j^{post} represents the output when the global static kernel is added after the softmax normalization, n and l each index a position in the global space \mathcal{G}, T represents the matrix transpose, X_j, X_n and X_l are the inputs at positions j, n and l, and w_{j-n} is the weight matrix between position j and position n.
As a preferred technical scheme of the invention: the back-propagation algorithm of the brain texture classification model in step S4 propagates from the output layer to the input layer.
The beneficial effects are that: compared with the prior art, the advantages of the invention include:
1. The original LBP algorithm is improved: the original 3×3 window neighborhood is extended to an arbitrary neighborhood, and the square neighborhood is replaced by a circular one. The improved LBP method allows any number of sampling points in a circular neighborhood of radius R, so texture extraction can be more accurate.
2. A novel neural network architecture, TCNN, integrating CNNs and Transformers is proposed. It classifies using both convolution operations and a self-attention mechanism, so that local and global features are both taken into account without an excessive amount of computation, which enhances the training effect.
Drawings
Fig. 1 is a flowchart of a brain texture analysis method based on LBP extraction and TCNN neural network according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only intended to illustrate the technical solution of the invention more clearly and are not intended to limit its scope of protection.
Referring to fig. 1, the brain texture analysis method based on LBP extraction and TCNN neural network provided by the embodiment of the invention comprises steps S1-S5, completing texture extraction and classification in brain structure magnetic resonance images:

step S1: acquiring brain structure magnetic resonance images (MRI), preprocessing them with MIPAV software, registering the preprocessed brain structure magnetic resonance images onto the standard MNI brain template space to unify their coordinate spaces, and performing normalization;

The specific process of step S1 includes: preprocessing the acquired brain structure magnetic resonance images, including skull stripping, correction, filtering, and image enhancement; after preprocessing, registering the brain structure magnetic resonance images onto the standard MNI brain template space, adjusting and optimizing by selecting a suitable registration algorithm, tuning registration parameters, and adding registration constraints, so that the coordinate space is fully consistent with the standard brain template; after registration, applying Min-Max normalization to the pixel values of sampling points in the registered images; and, after normalization, checking and verifying the processed brain structure magnetic resonance images to ensure image quality and accuracy. The Min-Max normalization formula is:

$$ND = \frac{OD - MinA}{MaxA - MinA}$$

where ND represents the result of data normalization, OD represents the original data, MinA represents the minimum of all data of attribute A, and MaxA represents the maximum of all data of attribute A. Min-Max normalization maps the pixel values of pixels in the brain structure magnetic resonance image into [0,1], avoiding the influence of brightness differences between images on the texture features;
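As an illustration only, a minimal sketch of this Min-Max step in Python/NumPy is given below; the function name min_max_normalize and the use of NumPy are assumptions made for the example, not part of the patent:

```python
import numpy as np

def min_max_normalize(image: np.ndarray) -> np.ndarray:
    """Linearly map image intensities into [0, 1]: ND = (OD - MinA) / (MaxA - MinA)."""
    min_a = float(image.min())  # MinA: minimum intensity over the image
    max_a = float(image.max())  # MaxA: maximum intensity over the image
    if max_a == min_a:          # guard against a constant image, where the scaling is undefined
        return np.zeros_like(image, dtype=np.float64)
    return (image.astype(np.float64) - min_a) / (max_a - min_a)
```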
step S2: constructing a brain texture feature extraction model, extracting texture features from the brain structure magnetic resonance image obtained in step S1 with a circular LBP method, calculating its gray values by bilinear interpolation, and obtaining the feature vector corresponding to the image from the gray values, completing texture feature extraction;
the specific steps of step S2 are as follows:
step S2.1: for each sampling point in the brain structure magnetic resonance image obtained in step S1, dividing a circular neighborhood with the sampling point as the central sampling point and R as the radius, selecting P equally spaced points on the circle, and taking the line from the central sampling point to each equally spaced point as a binary coding track, where each sampling point on the track is given by the following formula:
$$x_p = x_c + R\cos\!\left(\frac{2\pi p}{P}\right), \qquad y_p = y_c - R\sin\!\left(\frac{2\pi p}{P}\right)$$

where p represents the p-th sampling point, x_p and y_p are the abscissa and ordinate of the p-th sampling point, and x_c and y_c are the abscissa and ordinate of the central sampling point c;
step S2.2: when using the circular LBP algorithm, since the sampling points are distributed on a circle, the coordinates of each sampling point are not guaranteed to be integers, and for sampling points with non-integer coordinates bilinear interpolation is needed: if a sampling point p on the track does not fall on an integer position, its gray value is calculated by bilinear interpolation from the pixel gray values of the two nearest integer positions on its track, denoted p_0 and p_1; the abscissa and ordinate of p_0 and p_1 are rounded down and up, denoted x_0, x_1, y_0, y_1, giving four coordinates (x_0, y_0), (x_1, y_0), (x_0, y_1), (x_1, y_1), which are then used in the following formula:
$$f(x_p, y_p) \approx f(x_0, y_0)(x_1 - x_p)(y_1 - y_p) + f(x_1, y_0)(x_p - x_0)(y_1 - y_p) + f(x_0, y_1)(x_1 - x_p)(y_p - y_0) + f(x_1, y_1)(x_p - x_0)(y_p - y_0)$$

where f(x_p, y_p) represents the pixel value of the sampling point p;
step S2.3: taking the gray value of the central sampling point as a threshold and comparing the gray value of each sampling point on the track with it to obtain a binary code: a sampling point on the track whose gray value is greater than that of the central sampling point is coded as 1, otherwise as 0; arranging the binary codes in clockwise order to obtain a binary number; counting the frequency histogram of the binary numbers of all sampling points on the track to obtain the LBP feature vector; and, for different combinations of radius and number of neighborhood points, using a statistical learning algorithm to perform classification or regression tasks and evaluate the performance of the improved LBP method.
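To make steps S2.1-S2.3 concrete, here is a compact sketch in Python/NumPy under stated assumptions (grayscale 2-D input, code bits taken clockwise starting at p = 0); the helper names bilinear and circular_lbp are hypothetical and chosen for the example, not the patent's reference implementation:

```python
import numpy as np

def bilinear(img, x, y):
    """Gray value at a non-integer position via bilinear interpolation (step S2.2)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))  # coordinates rounded down
    x1, y1 = x0 + 1, y0 + 1                      # coordinates rounded up
    dx, dy = x - x0, y - y0
    return (img[y0, x0] * (1 - dx) * (1 - dy) + img[y0, x1] * dx * (1 - dy)
            + img[y1, x0] * (1 - dx) * dy + img[y1, x1] * dx * dy)

def circular_lbp(img, radius=1.0, points=8):
    """Per-pixel circular LBP code and its frequency histogram (steps S2.1-S2.3)."""
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.int64)
    m = int(np.ceil(radius)) + 1                 # margin so all sampling points stay in bounds
    for yc in range(m, h - m):
        for xc in range(m, w - m):
            center = img[yc, xc]                 # threshold: gray value of the central point
            code = 0
            for p in range(points):
                # sampling point on the circle: x_p = x_c + R cos(2*pi*p/P), y_p = y_c - R sin(2*pi*p/P)
                xp = xc + radius * np.cos(2.0 * np.pi * p / points)
                yp = yc - radius * np.sin(2.0 * np.pi * p / points)
                if xp % 1 == 0 and yp % 1 == 0:  # integer coordinates: read the pixel directly
                    g = img[int(yp), int(xp)]
                else:                            # non-integer coordinates: interpolate
                    g = bilinear(img, xp, yp)
                code = (code << 1) | int(g > center)  # 1 if brighter than the center, else 0
            codes[yc, xc] = code
    hist = np.bincount(codes[m:h - m, m:w - m].ravel(), minlength=2 ** points)
    return codes, hist / hist.sum()              # LBP feature vector: normalized histogram
```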
Step S3: forming a TCNN neural network based on a Transformer network and a CNN neural network, and constructing a brain texture classification model that takes the texture-feature-extracted brain structure magnetic resonance image as input and the texture type corresponding to the textures in the image as output, completing texture classification;
The new dataset formed by the brain structure magnetic resonance images is matched to the image data format expected by the brain texture classification model, and is preprocessed and normalized so that it has the same attributes and characteristics as the original training dataset. The parameters of the pre-trained brain texture classification model are initialized by random initialization; after these steps, the model is fine-tuned on the new dataset, its parameters are updated with a back-propagation algorithm, and the loss function is minimized. The fine-tuning process runs for 30 iterations and records the corresponding evaluation accuracy to ensure the reliability of the experiments. Then, after the LBP features of the brain structure magnetic resonance image are extracted, new patch embeddings are generated and input into the TCNN neural network that integrates the Transformer and the CNN.
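A minimal sketch of such a fine-tuning loop is shown below, assuming PyTorch; the model object, data loader, and learning rate are placeholders chosen for illustration rather than the patent's specification:

```python
import torch
import torch.nn as nn

def fine_tune(model: nn.Module, loader, epochs: int = 30, lr: float = 1e-4):
    """Fine-tune a classifier on a new dataset, minimizing cross-entropy."""
    criterion = nn.CrossEntropyLoss()                  # loss between network output and true label
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for epoch in range(epochs):                        # 30 fine-tuning iterations, as in the text
        correct, total = 0, 0
        for patches, labels in loader:                 # LBP patch embeddings and texture labels
            optimizer.zero_grad()
            logits = model(patches)
            loss = criterion(logits, labels)
            loss.backward()                            # back-propagation of the loss
            optimizer.step()                           # weight-parameter update
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.numel()
        print(f"epoch {epoch + 1}: training accuracy {correct / total:.4f}")
```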
In step S3, a brain texture classification model is constructed based on a Transformer network and a CNN neural network; the image is divided into several patches, i denotes any position in a patch, and j denotes any position in the 3×3 neighborhood centered on i; the attention module introduces a depthwise convolution represented by the following formula:
$$Y_i = \sum_{j \in \mathcal{L}(i)} w_{i-j} \odot X_j$$

where Y_i is the output at position i, \mathcal{L}(i) is the local neighborhood of position i, w_{i-j} is the weight matrix between position i and position j, and X_j is the input at position j; in contrast, self-attention allows the receptive field to extend beyond the local neighborhood and computes the weights based on pairwise similarities;
the softmax function of the brain texture classification model is as follows:
$$Y_j = \sum_{n \in \mathcal{G}} \frac{\exp\!\left(X_j^{\mathrm{T}} X_n\right)}{\sum_{l \in \mathcal{G}} \exp\!\left(X_j^{\mathrm{T}} X_l\right)}\, X_n$$

where Y_j represents the output at position j, n and l each index a position in the global space \mathcal{G}, X_j is the input at position j, T represents the matrix transpose, X_n is the input at position n, and X_l is the input at position l;
each patch is compared with the other patches in the same image to generate an adaptive attention matrix, and a global static convolution kernel is added to the adaptive attention matrix after or before the softmax normalization, specifically as follows:
$$Y_j^{\mathrm{post}} = \sum_{n \in \mathcal{G}} \left( \frac{\exp\!\left(X_j^{\mathrm{T}} X_n\right)}{\sum_{l \in \mathcal{G}} \exp\!\left(X_j^{\mathrm{T}} X_l\right)} + w_{j-n} \right) X_n$$

where Y_j^{post} represents the output when the global static kernel is added after the softmax normalization, n and l each index a position in the global space \mathcal{G}, T represents the matrix transpose, X_j, X_n and X_l are the inputs at positions j, n and l, and w_{j-n} is the weight matrix between position j and position n.
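To illustrate the formulas above, the following toy sketch in Python/NumPy computes the post-normalization variant for a single head without batching; the function names and the random static kernel are assumptions made for the example:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract the row maximum for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def post_norm_attention(X, W):
    """Y_j^post = sum_n (softmax_n(X_j^T X_n) + w_{j-n}) X_n.

    X: (N, d) patch inputs; W: (N, N) static kernel, W[j, n] playing the role of w_{j-n}.
    """
    A = softmax(X @ X.T, axis=-1)            # adaptive attention matrix, row j normalized over n
    return (A + W) @ X                       # add the global static kernel after softmax, then mix

# toy usage: 4 patches of dimension 3
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
W = 0.01 * rng.standard_normal((4, 4))      # static, input-independent kernel
print(post_norm_attention(X, W).shape)      # (4, 3)
```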
Step S4: training the brain texture classification model, calculating its loss function from the original brain structure magnetic resonance image and the texture-feature-extracted image, and updating the weight parameters with a back-propagation algorithm;
The specific process of step S4 is: calculating the output of the neural network, comparing it with the true labels, and computing the loss function; computing the derivatives of the loss function with respect to the network parameters, transferring them from the output layer to the input layer by the chain rule, and computing the gradient of each parameter; updating the neural network parameters by gradient descent or another optimization algorithm according to the computed gradients; and repeating these steps until the specified stopping condition is reached. The propagation rule for the multilayer partial derivatives is as follows:

$$\delta^{(l)} = \left( \left(W^{(l+1)}\right)^{\mathrm{T}} \delta^{(l+1)} \right) \odot \sigma'\!\left(z^{(l)}\right), \qquad \frac{\partial L}{\partial W^{(l)}} = \delta^{(l)} \left(a^{(l-1)}\right)^{\mathrm{T}}$$

where \delta^{(l)} denotes the error term of layer l, W^{(l)} the weight matrix of layer l, z^{(l)} and a^{(l)} the pre-activation and activation of layer l, \sigma' the derivative of the activation function, and \odot elementwise multiplication.
in the step S4, the back propagation algorithm of the brain texture classification model is transmitted from an output layer to an input layer, the partial derivative of the current layer can be obtained by only circularly and iteratively calculating the value of each node of each layer, so that the gradient of the weight matrix W of each layer is obtained, and then the network parameters are iteratively optimized through the gradient descent algorithm.
Step S5: repeating step S4 until the preset number of training iterations is reached, completing the training of the brain texture classification model, and applying the trained brain texture classification model to complete texture extraction and classification in brain structure magnetic resonance images. The specific process is as follows: inputting a newly acquired brain structure magnetic resonance image into the brain texture feature extraction model to obtain the corresponding texture features, and inputting those texture features into the trained brain texture classification model to obtain the corresponding classification result. When the model needs to be verified, several similar brain structure magnetic resonance images to be verified are input into the neural network and the classification results are output; if the classification accuracy is better than before, the model is shown to extract brain texture features more accurately and to improve the accuracy of brain texture classification.
The embodiments of the invention have been described in detail with reference to the drawings, but the invention is not limited to the above embodiments; various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the invention.

Claims (5)

1. A brain texture analysis method based on LBP extraction and TCNN neural network, characterized by comprising the following steps S1-S5, completing texture extraction and classification in brain structure magnetic resonance images:
step S1: acquiring brain structure magnetic resonance images, preprocessing them with MIPAV software, registering the preprocessed brain structure magnetic resonance images onto the standard MNI brain template space to unify their coordinate spaces, and performing Min-Max normalization;
step S2: constructing a brain texture feature extraction model and extracting texture features from the brain structure magnetic resonance image obtained in step S1 with a circular LBP method; because the sampling points are distributed on a circle, their coordinates are not guaranteed to be integers, so sampling points with integer coordinates are substituted directly into the formula for calculation, while for sampling points with non-integer coordinates the coordinates are rounded up and down and the gray value is calculated by bilinear interpolation;
step S3: constructing a brain texture classification model based on a TCNN neural network, taking the texture-feature-extracted brain structure magnetic resonance image as input and the texture type corresponding to the textures in the image as output, completing texture classification;
step S4: training the brain texture classification model, calculating its loss function from the original brain structure magnetic resonance image and the texture-feature-extracted image, and updating the weight parameters with a back-propagation algorithm;
step S5: repeating step S4 until a preset number of training iterations is reached, completing the training of the brain texture classification model, and applying the trained brain texture classification model to complete texture extraction and classification in brain structure magnetic resonance images.
2. The brain texture analysis method based on LBP extraction and TCNN neural network according to claim 1, wherein the specific process of step S1 includes: preprocessing the acquired brain structure magnetic resonance images, including skull stripping, correction, filtering, and image enhancement; after preprocessing, registering the brain structure magnetic resonance images onto the standard MNI brain template space, adjusting and optimizing by selecting a suitable registration algorithm, tuning registration parameters, and adding registration constraints; and, after registration, applying Min-Max normalization to the pixel values of sampling points in the registered brain structure magnetic resonance images, where Min-Max normalization is a linear transformation of the original data that maps the values into [0,1].
3. The brain texture analysis method based on LBP extraction and TCNN neural network according to claim 1, wherein the specific steps of step S2 are as follows:
step S2.1: for each sampling point in the brain structure magnetic resonance image obtained in step S1, dividing a circular neighborhood with the sampling point as the central sampling point and R as the radius, selecting P equally spaced points on the circle, and taking the line from the central sampling point to each equally spaced point as a binary coding track, where each sampling point on the track is given by the following formula:
$$x_p = x_c + R\cos\!\left(\frac{2\pi p}{P}\right), \qquad y_p = y_c - R\sin\!\left(\frac{2\pi p}{P}\right)$$

where p represents the p-th sampling point, x_p and y_p are the abscissa and ordinate of the p-th sampling point, and x_c and y_c are the abscissa and ordinate of the central sampling point c;
step S2.2: for sampling points with non-integer coordinates, selecting the two points p_0 and p_1 nearest to the sampling point p on its track, rounding the abscissa and ordinate of p_0 and p_1 down and up, denoted x_0, x_1, y_0, y_1, to obtain four coordinates (x_0, y_0), (x_1, y_0), (x_0, y_1), (x_1, y_1), and then calculating according to the following formula:
$$f(x_p, y_p) \approx f(x_0, y_0)(x_1 - x_p)(y_1 - y_p) + f(x_1, y_0)(x_p - x_0)(y_1 - y_p) + f(x_0, y_1)(x_1 - x_p)(y_p - y_0) + f(x_1, y_1)(x_p - x_0)(y_p - y_0)$$

where f(x_p, y_p) represents the pixel value of the sampling point p;
step S2.3: taking the gray value of the central sampling point as a threshold and comparing the gray value of each sampling point on the track with it to obtain a binary code: a sampling point on the track whose gray value is greater than that of the central sampling point is coded as 1, otherwise as 0; arranging the binary codes in clockwise order to obtain a binary number; and counting the frequency histogram of the binary numbers of all sampling points on the track to obtain the LBP feature vector.
4. The brain texture analysis method based on LBP extraction and TCNN neural network according to claim 1, wherein in step S3 a brain texture classification model is constructed based on a Transformer network and a CNN neural network; the image is divided into several patches, i denotes any position in a patch, and j denotes any position in the 3×3 neighborhood centered on i; and the attention module introduces a depthwise convolution represented as:
$$Y_i = \sum_{j \in \mathcal{L}(i)} w_{i-j} \odot X_j$$

where Y_i is the output at position i, \mathcal{L}(i) is the local neighborhood of position i, w_{i-j} is the weight matrix between position i and position j, and X_j is the input at position j; in contrast, self-attention allows the receptive field to extend beyond the local neighborhood and computes the weights based on pairwise similarities;
the softmax function of the brain texture classification model is as follows:
$$Y_j = \sum_{n \in \mathcal{G}} \frac{\exp\!\left(X_j^{\mathrm{T}} X_n\right)}{\sum_{l \in \mathcal{G}} \exp\!\left(X_j^{\mathrm{T}} X_l\right)}\, X_n$$

where Y_j represents the output at position j, n and l each index a position in the global space \mathcal{G}, X_j is the input at position j, T represents the matrix transpose, X_n is the input at position n, and X_l is the input at position l;
each patch is compared with the other patches in the same image to generate an adaptive attention matrix, and a global static convolution kernel is added to the adaptive attention matrix after or before the softmax normalization, specifically as follows:
$$Y_j^{\mathrm{post}} = \sum_{n \in \mathcal{G}} \left( \frac{\exp\!\left(X_j^{\mathrm{T}} X_n\right)}{\sum_{l \in \mathcal{G}} \exp\!\left(X_j^{\mathrm{T}} X_l\right)} + w_{j-n} \right) X_n$$

where Y_j^{post} represents the output when the global static kernel is added after the softmax normalization, n and l each index a position in the global space \mathcal{G}, T represents the matrix transpose, X_j, X_n and X_l are the inputs at positions j, n and l, and w_{j-n} is the weight matrix between position j and position n.
5. The brain texture analysis method based on LBP extraction and TCNN neural network according to claim 1, wherein the back-propagation algorithm of the brain texture classification model in step S4 propagates from the output layer to the input layer.
CN202311032394.4A (priority date 2023-08-16, filing date 2023-08-16) · Brain texture analysis method based on LBP extraction and TCNN neural network · Pending · CN116994060A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311032394.4A · Priority date: 2023-08-16 · Filing date: 2023-08-16 · Brain texture analysis method based on LBP extraction and TCNN neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311032394.4A · Priority date: 2023-08-16 · Filing date: 2023-08-16 · Brain texture analysis method based on LBP extraction and TCNN neural network

Publications (1)

Publication Number Publication Date
CN116994060A (en) · Publication date: 2023-11-03

Family

ID=88524698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311032394.4A · Brain texture analysis method based on LBP extraction and TCNN neural network · Priority date: 2023-08-16 · Filing date: 2023-08-16

Country Status (1)

Country Link
CN (1) CN116994060A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409030A (en) * 2023-12-14 2024-01-16 齐鲁工业大学(山东省科学院) OCTA image blood vessel segmentation method and system based on dynamic tubular convolution
CN117409030B (en) * 2023-12-14 2024-03-22 齐鲁工业大学(山东省科学院) OCTA image blood vessel segmentation method and system based on dynamic tubular convolution


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination