CN116309250A - Image feature extraction and classification method based on muscle ultrasound - Google Patents


Info

Publication number
CN116309250A
CN116309250A (application CN202211093348.0A)
Authority
CN
China
Prior art keywords: image, muscle, feature, features, classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211093348.0A
Other languages
Chinese (zh)
Inventor
周永进
邓妙琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202211093348.0A priority Critical patent/CN116309250A/en
Priority to PCT/CN2022/137851 priority patent/WO2024051015A1/en
Publication of CN116309250A publication Critical patent/CN116309250A/en
Pending legal-status Critical Current

Classifications

    • G06T7/0012 — Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • A61B5/165 — Evaluating the state of mind, e.g. depression, anxiety (A61B5/00 Measuring for diagnostic purposes)
    • A61B5/4519 — Muscles (A61B5/45 Evaluating or diagnosing the musculoskeletal system)
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V10/54 — Extraction of image or video features relating to texture
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 — Image or video recognition or understanding using neural networks
    • G16H50/20 — ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T2207/10132 — Ultrasound image (indexing scheme: image acquisition modality)


Abstract

The invention discloses an image feature extraction and classification method based on muscle ultrasound, comprising: taking muscle ultrasound images as a training data set; extracting first image features from the training data set; performing feature selection and feature dimension reduction on the first image features to obtain second image features; training a classifier on the second image features and performing model validation to obtain a classification model for muscle ultrasound images; and acquiring muscle ultrasound images to be classified as a test data set, extracting third image features from the test data set, and inputting the third image features into the classification model to obtain a classification result. The invention can acquire muscle ultrasound images at low cost, collect a wide range of muscle structure and function indices, and construct a classification model by combining radiomics with artificial-intelligence methods such as machine learning and deep learning, thereby enabling classification and evaluation of individual muscle ultrasound images and large-scale early disease analysis and risk assessment at the individual level.

Description

Image feature extraction and classification method based on muscle ultrasound
Technical Field
The invention relates to the field of medical image recognition, in particular to an image feature extraction and classification method based on muscle ultrasound.
Background
Alzheimer's disease (AD) is a neurodegenerative disease characterized by insidious onset and progressive impairment of behavioral and cognitive functions, occurring mainly in people aged 65 and older. As populations age, the number of AD patients increases year by year. Because onset is insidious, incidence is high, and the pathophysiological changes are complex, timely intervention in the early stage of the disease helps delay and control its progression.
Neuroimaging detection is mainly based on structural and functional neuroimaging technologies such as CT, MRI, fMRI, and PET, which noninvasively detect structural and functional changes of the brain from outside the body. CT and MRI can quantitatively calculate morphological changes such as cortical thickness and volume based on voxel-based morphometry; fMRI and PET can provide anatomical and physiological information about the brain and detect regions of altered metabolic activity and AD-related protein aggregates in the patient's brain. These technologies play an important role in finding early AD patients to a certain extent, but the examinations and equipment are expensive, making them unsuitable for early large-scale screening.
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
The invention aims to solve the technical problem that detection methods in the prior art are expensive and therefore unsuitable for early large-scale screening.
The technical scheme adopted for solving the technical problems is as follows:
In a first aspect, the present invention provides an image feature extraction and classification method based on muscle ultrasound, wherein the method comprises:
acquiring a muscle ultrasound image, taking the muscle ultrasound image as a training data set, and extracting first image features from the training data set;
performing feature selection and feature dimension reduction on the first image features to obtain second image features;
training a machine learning model and a deep learning model based on the second image features to obtain a classifier, and performing model validation on the classifier to obtain a classification model for muscle ultrasound images;
and acquiring a muscle ultrasound image to be classified, taking it as a test data set, extracting third image features from the test data set, and inputting the third image features into the classification model to obtain a classification result.
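As an illustrative sketch only (not the patent's implementation), the four claimed steps map naturally onto a scikit-learn pipeline; the synthetic data, the PCA dimensionality, and the SVM classifier below are all assumptions:

```python
# Sketch of the claimed pipeline: features -> selection/reduction -> classifier.
# Data and model choices are illustrative, not specified by the patent.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))          # stand-in for first image features
y = rng.integers(0, 2, size=120)        # stand-in labels (e.g. risk group)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("reduce", PCA(n_components=10)),   # feature dimension reduction
    ("clf", SVC()),                     # one possible classifier
]).fit(X_train, y_train)

pred = model.predict(X_test)            # classification result for test set
```
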
In one implementation, the acquiring a muscle ultrasound image and taking it as a training data set includes:
setting the detection mode of the ultrasound imaging system to musculoskeletal mode;
placing the long axis of the ultrasound probe parallel to the long axis of the muscle, and keeping the probe at the first detection position by placing a marker;
based on the detection mode, acquiring a muscle ultrasound image of the first detection position with a real-time B-mode ultrasound imaging device while the subject is at rest;
and taking the muscle ultrasound image of the first detection position as the training data set.
In one implementation, the extracting the first image features from the training data set includes:
performing a normalized Radon transform on the training data set to obtain a Radon transform matrix;
computing the gradient of the Radon transform matrix and performing edge enhancement to obtain a Radon transform gradient matrix;
performing binarization and clustering on the Radon transform gradient matrix to obtain deep fascia feature points, muscle bundle feature points, and superficial fascia feature points;
precisely partitioning the deep fascia, muscle bundle, and superficial fascia feature points and applying the inverse Radon transform to obtain muscle thickness, muscle fiber length, and pennation angle features;
obtaining the muscle morphology features from the muscle thickness, muscle fiber length, and pennation angle features;
obtaining the average frequency analysis feature of the training data set; wherein the average frequency analysis feature is calculated as

MFAF = (Σ_{i=1}^{n} I_i · f_i) / (Σ_{i=1}^{n} I_i)

where n, I, and f are the length, power, and frequency of the power density spectrum, respectively;
and obtaining the first image feature based on the muscle morphology feature and the average frequency analysis feature.
In one implementation, the extracting the first image features from the training data set includes:
extracting first-order statistical features from the training data set based on the pixel gray-level distribution; wherein the first-order statistical features include integrated spectral density, mean, standard deviation, variance, skewness, kurtosis, and energy;
extracting Haralick features from the training data set based on the gray-level co-occurrence matrix; wherein the Haralick features include contrast, correlation, energy, entropy, homogeneity, and symmetry;
extracting Galloway features from the training data set based on the gray-level run-length matrix; wherein the Galloway features include short-run emphasis, long-run emphasis, gray-level non-uniformity, run-length non-uniformity, and run percentage;
extracting local binary pattern (LBP) features from the training data set based on the comparison of each local-region center pixel with its neighborhood; wherein the LBP features include energy and entropy, with LBP_energy = Σ_i f_i² and LBP_entropy = −Σ_i f_i log₂(f_i), where f_i is the frequency of the i-th bin of the LBP histogram over the local image region;
obtaining the image texture analysis features from the first-order statistical, Haralick, Galloway, and local binary pattern features;
and obtaining the first image feature based on the image texture analysis features.
In one implementation, the performing feature selection and feature dimension reduction on the first image features to obtain second image features includes:
performing feature selection on the first image features with a t-test to obtain screened first image features;
and performing feature dimension reduction on the screened first image features with one or more of principal component analysis (PCA), linear discriminant analysis (LDA), t-SNE, multidimensional scaling (MDS), isometric mapping (Isomap), locally linear embedding (LLE), Laplacian eigenmaps, and neighborhood components analysis (NCA) to obtain the second image features.
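A minimal sketch of this selection-then-reduction step, assuming scipy/scikit-learn and synthetic features; the 0.05 significance threshold and component count are illustrative choices, not specified by the claim:

```python
# t-test feature screening followed by PCA dimension reduction (sketch).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 30))           # first image features (synthetic)
y = rng.integers(0, 2, size=60)         # two classes
X[y == 1, :5] += 2.0                    # make the first 5 features discriminative

# Per-feature t-test between the two classes; keep features with p < 0.05
_, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
X_sel = X[:, p < 0.05]                  # screened first image features

# PCA on the screened features yields the second image features
X2 = PCA(n_components=min(5, X_sel.shape[1])).fit_transform(X_sel)
```
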
In one implementation, the training a machine learning model and a deep learning model based on the second image features to obtain a classifier includes:
training the machine learning model and the deep learning model to be trained based on the second image features to obtain training results;
and evaluating the training results with a first performance index to obtain the classifier; wherein the first performance index includes accuracy, precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC).
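The five listed performance indices can be computed with scikit-learn's metric functions; the toy labels and scores below are invented for illustration:

```python
# Computing the first performance index: accuracy, precision, recall, F1, AUC.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                    # ground-truth labels
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                    # hard predictions
y_score = [0.1, 0.6, 0.8, 0.9, 0.4, 0.2, 0.7, 0.3]    # classifier scores

acc  = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred)
rec  = recall_score(y_true, y_pred)
f1   = f1_score(y_true, y_pred)
auc  = roc_auc_score(y_true, y_score)   # AUC uses scores, not hard labels
```
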
In one implementation, the performing model validation on the classifier to obtain the classification model of the muscle ultrasound image further includes:
acquiring a validation data set, inputting it into the classifier, and performing model validation with 10-fold cross-validation to obtain a validation accuracy;
and tuning the classifier according to the validation accuracy to obtain the classification model of the muscle ultrasound image.
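A sketch of the 10-fold cross-validation step with scikit-learn; the logistic-regression classifier and synthetic data are stand-ins, not the patent's models:

```python
# 10-fold cross-validation to estimate validation accuracy (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))
# Labels driven mostly by feature 0, with a little noise -> learnable task
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)

scores = cross_val_score(LogisticRegression(), X, y, cv=10)  # one score per fold
mean_acc = scores.mean()                # validation accuracy used for tuning
```
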
In a second aspect, an embodiment of the present invention further provides an image feature extraction and classification device based on muscle ultrasound, the device comprising:
a first image feature acquisition module for acquiring a muscle ultrasound image, taking it as a training data set, and extracting first image features from the training data set;
a second image feature acquisition module for performing feature selection and feature dimension reduction on the first image features to obtain second image features;
a classification model acquisition module for training a machine learning model and a deep learning model based on the second image features to obtain a classifier, and performing model validation on the classifier to obtain a classification model of the muscle ultrasound image;
and a classification module for acquiring a muscle ultrasound image to be classified, taking it as a test data set, extracting third image features from the test data set, and inputting the third image features into the classification model to obtain a classification result.
In a third aspect, an embodiment of the present invention further provides an intelligent terminal comprising a memory, a processor, and a muscle-ultrasound-based image feature extraction and classification program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the muscle-ultrasound-based image feature extraction and classification method described in any one of the above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a muscle-ultrasound-based image feature extraction and classification program which, when executed by a processor, implements the steps of the muscle-ultrasound-based image feature extraction and classification method described in any one of the above.
Beneficial effects: compared with the prior art, the invention provides an image feature extraction and classification method based on muscle ultrasound. First, muscle ultrasound images are taken as a training data set; using images acquired by widely available, low-cost B-mode ultrasound imaging equipment significantly reduces acquisition cost. First image features are extracted from the training data set, and feature selection and feature dimension reduction are applied to obtain second image features, avoiding possible redundancy, high linear correlation among features, and model overfitting. A machine learning model and a deep learning model are then trained on the second image features to obtain multiple classifiers, and through model validation the best-performing classifier is selected as the classification model for muscle ultrasound images. Finally, third image features are extracted from the muscle ultrasound image to be classified and input into the classification model for classification, realizing early risk assessment at the individual level.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and those skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a flow chart of an image feature extraction and classification method based on muscle ultrasound according to an embodiment of the present invention.
Fig. 2 is an original gastrocnemius ultrasound image provided by an embodiment of the present invention.
Fig. 3 is an example of gastrocnemius ultrasound image feature detection provided by an embodiment of the present invention.
Fig. 4 is a flowchart of morphological parameter extraction according to an embodiment of the present invention.
Fig. 5 is a schematic block diagram of an image feature extraction and classification device based on muscle ultrasound according to an embodiment of the present invention.
Fig. 6 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Sarcopenia, a common aging phenotype, is defined as a loss of muscle structure and function, and is closely associated with Alzheimer's Disease (AD), mild cognitive impairment, and decline in cognitive ability. Furthermore, in many elderly people, impaired motor function precedes and predicts reduced cognitive ability, mild cognitive impairment and AD.
Taking AD as an example, there are currently three clinical diagnostic approaches: the first is neuropsychological testing based on multiple psychobehavioral symptom rating scales; the second is biochemical testing of cerebrospinal fluid, blood, and urine samples; and the third is neuroimaging based on CT, MRI, and PET technologies.
Neuropsychological testing uses multiple psychobehavioral symptom rating scales to detect cognitive decline and assess its severity and dementia. Detecting cognitive decline in this way provides an objective basis for screening and diagnosis and also aids the differential diagnosis of dementia. Commonly used scales can be grouped by clinical use and purpose into cognitive impairment screening scales, cognitive function assessment scales, activities-of-daily-living scales, psychobehavioral symptom scales, general function scales, dementia grading scales, and scales for differential and exclusion diagnosis. Examples include the most widely used cognitive screening instruments: the Mini-Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA), and the clock drawing test (CDT). These help identify early AD patients, but most have unsatisfactory sensitivity and specificity for early AD diagnosis, the tests are time-consuming, and some are susceptible to interference from education, culture, and other factors.
Biochemical detection of related biomarkers mainly uses cerebrospinal fluid, peripheral blood, urine, and similar samples, targeting potential biomarkers of internationally accepted AD molecular pathogenic mechanisms such as abnormal amyloid-β (Aβ) deposition, tau hyperphosphorylation, immune-inflammatory response, mitochondrial dysfunction, and oxidative stress. At present, detection of AD-related biomarkers relies mainly on cerebrospinal fluid examination. Although the core biomarkers T-tau, P-tau, and Aβ42 have shown high sensitivity and specificity in identifying AD patients, cerebrospinal fluid collection is an invasive sampling modality that may cause discomfort or side effects in some subjects; cerebrospinal fluid biomarkers are highly variable, easily affected by transport, experimental methods, and reference values, making clinical cut-off values difficult to determine; and the procedure is difficult to popularize and unsuitable for large-scale early screening. Besides cerebrospinal fluid, biomarkers can also be detected in peripheral blood and urine. Although these two samples are extremely simple to collect, blood composition is complex, with high levels of blood proteins and compounds that interfere with measurement, while potential AD biomarkers are present in blood at low concentrations and are affected by the peripheral circulation. Urine, as a potential source of brain disease biomarkers, can accumulate various disease-related metabolic changes and has the advantages of noninvasive, convenient, and repeatable collection; however, urine components metabolize rapidly and are unstable, and the related detection techniques place high demands on technicians and equipment, limiting clinical popularization and application.
Neuroimaging detection is mainly based on structural and functional neuroimaging technologies such as CT, MRI, fMRI, and PET, which noninvasively detect structural and functional changes of the brain from outside the body. CT and MRI can quantitatively calculate morphological changes such as cortical thickness and volume based on voxel-based morphometry; fMRI and PET can provide anatomical and physiological information about the brain and detect regions of altered metabolic activity and AD-related protein aggregates in the patient's brain. Neuroimaging plays an important role in the early and differential diagnosis of AD patients to a certain extent, but the examinations and equipment are expensive, making it unsuitable for early large-scale screening.
Therefore, to solve the above problems, this embodiment provides an image feature extraction and classification method based on muscle ultrasound. First, a muscle ultrasound image is acquired and used as a training data set; assessing individual risk with muscle ultrasound images obtained from widely available, low-cost B-mode ultrasound imaging equipment enables noninvasive, convenient, and inexpensive early large-scale disease screening and risk assessment. First image features are extracted from the training data set, and feature selection and feature dimension reduction yield second image features, avoiding possible redundancy, high linear correlation among features, and model overfitting. A machine learning model and a deep learning model are then trained on the second image features to obtain multiple classifiers, and model validation selects the best-performing classifier as the classification model for muscle ultrasound images. Finally, third image features are extracted from the muscle ultrasound image to be classified and input into the classification model, so that a classification result is obtained through a noninvasive and convenient muscle ultrasound image classification method, realizing early risk assessment at the individual level.
Exemplary method
The embodiment provides an image feature extraction and classification method based on muscle ultrasound. In specific implementation, as shown in fig. 1, the method includes the following steps:
step S100, acquiring a muscle ultrasonic image, taking the muscle ultrasonic image as a training data set, and extracting first image features from the training data set;
in one implementation, the step S100 specifically includes the following steps:
step S101, setting a detection mode of an ultrasonic imaging system as a musculoskeletal detection mode;
step S102, placing a long axis of an ultrasonic probe parallel to the long axis direction of the muscle, and keeping the ultrasonic probe placed at the first detection position by setting a mark;
step S103, acquiring a muscle ultrasonic image of the first detection position by using a real-time B-mode ultrasonic imaging device under the static state of the subject based on the detection mode;
and step S104, taking the muscle ultrasonic image of the first detection position as the training data set.
B-mode ultrasound imaging devices are the most widely used and simplest ultrasound devices in clinical practice; they can detect fine muscle structure and allow visualization and quantification of muscle architecture. Assessing individual risk with muscle ultrasound images obtained from such widely available, low-cost equipment enables noninvasive, convenient, and inexpensive early large-scale disease screening and risk assessment.
Specifically, with the subject at rest, a muscle ultrasound image is acquired with a real-time B-mode ultrasound imaging device. The ultrasound imaging system is set to musculoskeletal mode, and the long axis of the ultrasound probe is placed parallel to the long axis of the muscle at the muscle belly or another specific position, i.e., the first detection position in this embodiment. An appropriate amount of ultrasound coupling gel is applied to ensure acoustic coupling between the probe and the skin. The probe may be adjusted to optimize the contrast of the muscle bundles in the image, and the position may be marked to ensure that the probe is placed in the same position each time. An example gastrocnemius ultrasound image is shown in Fig. 2.
It should be noted that, in addition to acquiring muscle ultrasound images with the subject at rest, images can also be acquired during the dynamic structural changes produced by muscle stretching; and besides B-mode devices, the acquisition equipment may be a shear-wave elastography device or another ultrasound imaging modality.
Step S105, performing a normalized Radon transform on the training data set to obtain a Radon transform matrix;
Step S106, computing the gradient of the Radon transform matrix and performing edge enhancement to obtain a Radon transform gradient matrix;
Step S107, performing binarization and clustering on the Radon transform gradient matrix to obtain deep fascia feature points, muscle bundle feature points, and superficial fascia feature points;
Step S108, precisely partitioning the deep fascia, muscle bundle, and superficial fascia feature points and applying the inverse Radon transform to obtain muscle thickness, muscle fiber length, and pennation angle features;
Step S109, obtaining the muscle morphology features from the muscle thickness, muscle fiber length, and pennation angle features;
Specifically, the muscle morphology features include muscle thickness, muscle fiber length, pennation angle, and the like. This embodiment estimates the morphological parameters of the muscle with a feature detection scheme based on the Radon transform gradient matrix. As shown in Fig. 3, the muscle layers of the gastrocnemius include the superficial fascia, the muscle bundle region, and the deep fascia; a muscle fiber line L1 and a deep fascia line L2 can be drawn according to the arrangement direction of the muscle fibers and the direction of the deep fascia. The image is normalized and Radon-transformed, the gradient of the transform matrix is computed to highlight the edge feature points of the muscle structural elements, the deep and superficial fascia edge points and the muscle bundle edge points are then precisely separated, and finally the feature points are inverse-Radon-transformed to accurately localize the muscle structural elements, from which the morphological parameters are calculated. A flow chart of the muscle morphology feature extraction is shown in Fig. 4.
Step S110, acquiring the average frequency analysis feature of the training data set; wherein the average frequency analysis feature is calculated as MFAF = ∑_{i=1}^{n} f_i I_i / ∑_{i=1}^{n} I_i, where n, I, and f are the length, power, and frequency of the power density spectrum, respectively;
Step S111, obtaining the first image feature based on the muscle morphological feature and the average frequency analysis feature.
In particular, the average frequency analysis feature (mean frequency analysis feature, MFAF) is an effective parameter related to muscle mass that is expected to describe differences in skeletal muscle structure, and it is not significantly affected by the configuration of different ultrasound devices. It is calculated as:
MFAF = ∑_{i=1}^{n} f_i I_i / ∑_{i=1}^{n} I_i,
where n, I, and f are the length, power, and frequency, respectively, of the power density spectrum.
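The intensity-weighted mean frequency above can be computed, for example, along a single ultrasound scan line. A minimal numpy sketch follows; the function name and the plain FFT periodogram used to estimate the power density spectrum are assumptions, not the patent's implementation:

```python
import numpy as np

def mean_frequency_feature(signal, fs):
    """MFAF = sum(f_i * I_i) / sum(I_i), with I_i the power density
    spectrum (estimated here with a simple FFT periodogram)."""
    power = np.abs(np.fft.rfft(signal)) ** 2          # I_i: power per bin
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # f_i: bin frequencies
    return float(np.sum(freqs * power) / np.sum(power))
```

For a pure 50 Hz tone the power concentrates in one bin, so the weighted mean returns approximately 50 Hz.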
Step S112, extracting first-order statistical features from the training data set based on pixel gray distribution calculation; wherein the first order statistical features include integrated spectral density, mean, standard deviation, variance, skewness, kurtosis, and energy;
step S113, extracting Haralick features from the training data set based on gray level co-occurrence matrix calculation; wherein the Haralick features include contrast, correlation, energy, entropy, homogeneity and symmetry;
Step S114, extracting Galloway features from the training data set based on gray level run length matrix calculation; wherein the Galloway features include short run emphasis, long run emphasis, gray level non-uniformity, run length non-uniformity, and run percentage;
Step S115, extracting local binary pattern features from the training data set based on the comparison of the central pixel of a local image region with its neighborhood; wherein the local binary pattern features include energy and entropy, the energy being LBP_energy = ∑_i f_i², the entropy being LBP_entropy = −∑_i f_i² log₂(f_i), where f_i represents the frequency of the i-th block in the local image region;
step S116, obtaining the image texture analysis feature according to the first-order statistical feature, the Haralick feature, the Galaway feature and the local binary pattern feature;
step S117, obtaining the first image feature based on the image texture analysis feature.
The image texture analysis features mainly comprise first-order statistical features and higher-order texture features. On the one hand, the first-order statistical features can effectively and quantitatively describe the ultrasound echo intensity of skeletal muscle; moreover, since the ultrasound echo intensity of skeletal muscle differs across ages and populations, these features can also provide structural information related to muscle state, supplying effective information for muscle damage assessment. On the other hand, higher-order texture features, such as Haralick features, Galloway features, and local binary pattern (Local Binary Pattern, LBP) features, perform better than first-order statistical features in fine-grained tasks such as muscle gender identification.
Specifically, the first-order statistical features are computed directly from the pixel gray-level distribution of the original image and include integrated spectral density, mean, standard deviation, variance, skewness, kurtosis, energy, and the like. The Haralick features are calculated from the gray level co-occurrence matrix, a matrix function of pixel distance and direction that records the correlation between two gray levels at a given spatial distance d and direction θ; studying the spatial correlation of gray levels in this way is a common method for describing image texture. Typical Haralick features include contrast, correlation, energy, entropy, homogeneity, symmetry, and the like, and each Haralick feature is usually computed in the four directions 0°, 45°, 90°, and 135°. The Galloway features are calculated from the gray level run length matrix, which represents the regularity of texture variation in an image and whose size is determined by the number of gray levels and the size of the image; the Galloway features comprise texture statistics such as short run emphasis, long run emphasis, gray level non-uniformity, run length non-uniformity, and run percentage, each likewise computed in the four directions 0°, 45°, 90°, and 135°. The LBP features are obtained by comparing the central pixel of a local image region with its neighborhood; they mainly describe the local texture of the image, have notable advantages such as rotation invariance and gray-scale invariance, and include energy and entropy, calculated as follows:
LBP_energy = ∑_i f_i²,
LBP_entropy = −∑_i f_i² log₂(f_i),
where f_i represents the frequency of the i-th block in the local image region.
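The LBP energy and entropy just defined can be sketched directly. The 8-neighbour LBP coding below is a minimal illustrative stand-in; the block partitioning and parameters of the actual embodiment may differ:

```python
import numpy as np

def lbp_energy_entropy(img):
    """Compute LBP codes for interior pixels, form the normalised code
    frequencies f_i, then apply the energy/entropy formulas above."""
    c = img[1:-1, 1:-1]                       # central pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    f = hist / hist.sum()                     # f_i: code frequencies
    energy = float(np.sum(f ** 2))            # LBP_energy
    nz = f[f > 0]
    entropy = float(-np.sum(nz ** 2 * np.log2(nz)))  # LBP_entropy (as given)
    return energy, entropy
```

On a uniform region every pixel receives the same code, so the frequency distribution collapses to a single bin: energy 1 and entropy 0.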
It should be noted that, when extracting muscle morphological features such as pennation angle, muscle thickness, muscle fiber angle, fascicle length, and physiological cross-sectional area, the structural elements of muscle tissue can be located not only with the Radon transform gradient matrix feature detection method described above, but also with methods based on the Hough transform, deep learning, and the like, so as to realize automatic measurement of the morphological parameters.
Step S200, performing feature selection and feature dimension reduction on the first image features to obtain second image features.
Specifically, with the development of machine learning and pattern recognition, the number of features has increased sharply, and in medical image analysis in particular, conventional algorithms often encounter the curse of dimensionality: features may be redundant, and high linear correlation easily causes model overfitting. All features therefore need to be selected and reduced in dimension to screen out a subset of related features most sensitive to disease. Reducing the dimensionality of the data accelerates the learning process, effectively improves the efficiency of data analysis, and improves the performance of the classifier model.
In one implementation manner, the step S200 specifically includes the following steps:
step S201, performing feature selection on the first image features by adopting a t-test method to obtain screened first image features;
Step S202, performing feature dimension reduction on the screened first image features by adopting principal component analysis, linear discriminant analysis, t-SNE, multidimensional scaling, isometric mapping, locally linear embedding, Laplacian eigenmaps and neighborhood components analysis to obtain the second image features.
Specifically, the feature selection part adopts the t-test method among the Filter methods: the discriminability of each feature is measured, and the most discriminative feature subset is then selected by ranking. Feature dimension reduction is performed on the selected first image features by adopting methods such as principal component analysis, linear discriminant analysis, t-SNE, multidimensional scaling, isometric mapping, locally linear embedding, Laplacian eigenmaps and neighborhood components analysis, to obtain the second image features.
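A minimal sketch of this select-then-reduce pipeline follows, using a two-sample Welch t statistic for the ranking step and an SVD-based PCA for the reduction step. The function name and all parameters are illustrative assumptions, not the patent's configuration:

```python
import numpy as np

def ttest_select_then_pca(X, y, k_keep=10, n_components=2):
    """Rank features by |t| between the two classes, keep the top-k,
    centre the kept features, and project onto the leading principal
    components obtained from the SVD. Illustrative sketch only."""
    a, b = X[y == 0], X[y == 1]
    # Welch t statistic per feature column
    t = (a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    keep = np.argsort(-np.abs(t))[:k_keep]      # most discriminative subset
    Xs = X[:, keep]
    Xc = Xs - Xs.mean(0)                        # centre before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # low-dimensional features
```

In practice the t-test would also apply a significance threshold rather than a fixed top-k, but the ranking idea is the same.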
Step S300, training a machine learning model and a deep learning model based on the second image features to obtain a classifier, and performing model verification on the classifier to obtain a classification model of the muscle ultrasound image;
Specifically, machine learning algorithms can be classified into supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, and the like, depending on the learning manner. Commonly used machine learning algorithms include support vector machines, decision trees, random forests, logistic regression, the Gaussian naive Bayes classifier, the Bernoulli naive Bayes classifier, artificial neural networks, and so on. With the development of big data and deep learning, algorithms based on deep learning can overcome the limitations of traditional shallow machine learning on complex tasks and improve discrimination capability. A machine learning model and a deep learning model are trained based on the second image features to obtain a classifier, and model validation is performed on the classifier to finally obtain the classification model of the muscle ultrasound image.
In one implementation manner, the step S300 specifically includes the following steps:
step 301, training a machine learning model to be trained and a deep learning model based on the second image features to obtain training results;
Step S302, evaluating the training result through a first performance index to obtain the classifier; the first performance index comprises accuracy, precision, recall, F1 score and area under the receiver operating characteristic curve.
Step S303, acquiring a validation data set, inputting the validation data set into the classifier, and performing model validation using 10-fold cross-validation to obtain the validation accuracy;
Step S304, adjusting the classifier according to the validation accuracy to obtain the classification model of the muscle ultrasound image.
Specifically, in this embodiment, current mainstream machine learning and deep learning models are trained, and the classifier with the best performance is selected by evaluating performance indexes such as accuracy, precision, recall, F1 score, and area under the receiver operating characteristic curve.
In particular, cross-validation is a statistical technique for evaluating and comparing model performance. It trains on a subset of the data set and then evaluates the model on the complementary subset not used for training, ensuring that the model has correctly captured patterns from the data rather than noise. The most classical method is k-fold cross-validation: the data set is divided into k subsets, each subset is used once as the test set while the remaining subsets serve as the training set, the procedure is repeated k times, and the average value is taken as the score. This embodiment adopts the 10-fold cross-validation most commonly used in machine learning to build the model and validate the model parameters, obtaining the validation accuracy; the classifier is then adjusted according to the validation accuracy to obtain the classification model of the muscle ultrasound image.
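The 10-fold procedure can be sketched as follows; the nearest-centroid classifier used here is only an illustrative stand-in for the trained machine learning or deep learning model, and the function name is an assumption:

```python
import numpy as np

def kfold_accuracy(X, y, k=10, seed=0):
    """Shuffle indices, split into k folds, train on k-1 folds and test
    on the held-out fold each round, and average the fold accuracies."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # stand-in model: one centroid per class from the training folds
        centroids = {c: X[train][y[train] == c].mean(0)
                     for c in np.unique(y[train])}
        pred = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                for x in X[test]]
        scores.append(np.mean(np.array(pred) == y[test]))
    return float(np.mean(scores))   # validation accuracy
```

On well-separated classes the averaged fold accuracy approaches 1, and in the embodiment this score would drive the classifier adjustment of step S304.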
Step 400, acquiring a muscle ultrasonic image to be classified, taking the muscle ultrasonic image to be classified as a test data set, extracting a third image feature from the test data set, and inputting the third image feature into a classification model of the muscle ultrasonic image for classification, so as to obtain a classification result.
Specifically, the third image features of the muscle ultrasound image to be classified are extracted and input into the trained classification model of the muscle ultrasound image, so that a prediction result or category is obtained, realizing early risk assessment at the individual level.
Exemplary apparatus
The embodiment also provides an image feature extraction and classification device based on muscle ultrasound, which comprises:
a first image feature acquiring module 10, configured to acquire a muscle ultrasound image, and extract a first image feature from a training dataset by using the muscle ultrasound image as the training dataset;
a second image feature obtaining module 20, configured to perform feature selection and feature dimension reduction on the first image feature to obtain a second image feature;
a classification model acquisition module 30, configured to train a machine learning model and a deep learning model based on the second image features, obtain a classifier, and perform model verification on the classifier to obtain a classification model of the muscle ultrasound image;
The classifying module 40 is configured to obtain a muscle ultrasound image to be classified, take the muscle ultrasound image to be classified as a test data set, extract a third image feature from the test data set, and input the third image feature into a classification model of the muscle ultrasound image for classification, thereby obtaining a classification result.
In one implementation, the first image feature acquisition module 10 includes:
a detection mode setting unit for setting the detection mode of the ultrasonic imaging system as a musculoskeletal detection mode;
an ultrasonic probe placement unit for placing the long axis of the ultrasonic probe parallel to the long-axis direction of the muscle, and keeping the ultrasonic probe at the first detection position by means of a marker;
a muscle ultrasound image acquisition unit for acquiring a muscle ultrasound image of the first detection position using a real-time B-mode ultrasound imaging apparatus in a static state of the subject based on the detection mode;
and the training data set acquisition unit is used for taking the muscle ultrasonic image of the first detection position as the training data set.
a Radon transform matrix acquisition unit for performing a normalized Radon transform on the training data set to obtain a Radon transform matrix;
a Radon transform gradient matrix acquisition unit for taking the gradient of the Radon transform matrix and performing edge enhancement to obtain a Radon transform gradient matrix;
a clustering unit for performing binarization and clustering on the Radon transform gradient matrix to obtain deep fascia feature points, muscle fascicle feature points and superficial fascia feature points;
an inverse Radon transform unit for accurately dividing the deep fascia feature points, the muscle fascicle feature points and the superficial fascia feature points and applying the inverse Radon transform to obtain muscle thickness, muscle fiber length and pennation angle features;
a muscle morphological feature acquisition unit, configured to obtain the muscle morphological feature according to the muscle thickness, muscle fiber length and pennation angle features;
an average frequency analysis feature acquisition unit configured to acquire the average frequency analysis feature of the training data set; wherein the average frequency analysis feature is calculated as MFAF = ∑_{i=1}^{n} f_i I_i / ∑_{i=1}^{n} I_i, where n, I, and f are the length, power, and frequency of the power density spectrum, respectively;
a first-order statistical feature extraction unit for extracting first-order statistical features from the training dataset based on pixel gray distribution calculation; wherein the first order statistical features include integrated spectral density, mean, standard deviation, variance, skewness, kurtosis, and energy;
a Haralick feature extraction unit for extracting Haralick features from the training data set based on gray level co-occurrence matrix calculation; wherein the Haralick features include contrast, correlation, energy, entropy, homogeneity and symmetry;
a Galloway feature extraction unit configured to extract Galloway features from the training dataset based on gray level run length matrix calculation; wherein the Galloway features include short run emphasis, long run emphasis, gray level non-uniformity, run length non-uniformity, and run percentage;
a local binary pattern feature extraction unit for extracting local binary pattern features from the training data set based on the comparison of the central pixel of a local image region with its neighborhood; wherein the local binary pattern features include energy and entropy, the energy being LBP_energy = ∑_i f_i², the entropy being LBP_entropy = −∑_i f_i² log₂(f_i), where f_i represents the frequency of the i-th block in the local image region;
an image texture analysis feature acquisition unit for obtaining the image texture analysis feature according to the first-order statistical feature, the Haralick feature, the Galloway feature and the local binary pattern feature;
and the first image feature acquisition unit is used for acquiring the first image feature based on the muscle morphological feature, the average frequency analysis feature and the image texture analysis feature.
In one implementation, the second image feature acquisition module 20 includes:
the feature screening unit is used for carrying out feature selection on the first image features by adopting a t-test method to obtain screened first image features;
a feature dimension reduction unit for performing feature dimension reduction on the screened first image features by adopting principal component analysis, linear discriminant analysis, t-SNE, multidimensional scaling, isometric mapping, locally linear embedding, Laplacian eigenmaps and neighborhood components analysis to obtain the second image features.
In one implementation, the classification model acquisition module 30 includes:
the model training unit is used for training the machine learning model and the deep learning model to be trained based on the second image characteristics to obtain training results;
a classifier acquisition unit for evaluating the training result through a first performance index to obtain the classifier; the first performance index comprises accuracy, precision, recall, F1 score and area under the receiver operating characteristic curve.
a validation accuracy acquisition unit for acquiring a validation data set, inputting the validation data set into the classifier, and performing model validation using 10-fold cross-validation to obtain the validation accuracy;
and a classification model acquisition unit for adjusting the classifier according to the validation accuracy to obtain the classification model of the muscle ultrasound image.
Based on the above embodiment, the present invention further provides an intelligent terminal, and a functional block diagram thereof may be shown in fig. 5. The intelligent terminal comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. The processor of the intelligent terminal is used for providing computing and control capabilities. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the intelligent terminal is used for communicating with an external terminal through network connection. The computer program, when executed by a processor, implements a method for extracting and classifying image features based on muscle ultrasound. The display screen of the intelligent terminal can be a liquid crystal display screen or an electronic ink display screen, and a temperature sensor of the intelligent terminal is arranged in the intelligent terminal in advance and used for detecting the running temperature of internal equipment.
It will be appreciated by those skilled in the art that the schematic block diagram shown in fig. 5 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the intelligent terminal to which the present inventive arrangements are applied, and that a particular intelligent terminal may include more or less components than those shown, or may combine some of the components, or may have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
In summary, the invention discloses an image feature extraction and classification method based on muscle ultrasound, which comprises: acquiring a muscle ultrasound image and using it as a training data set; extracting first image features from the training data set; performing feature selection and feature dimension reduction on the first image features to obtain second image features; training a classifier based on the second image features and performing model validation to obtain a classification model of the muscle ultrasound image; and acquiring a muscle ultrasound image to be classified as a test data set, extracting third image features from the test data set, and inputting the third image features into the classification model for classification to obtain a classification result. The invention can collect muscle ultrasound images at low cost, gather a wider range of muscle structure and function indexes, and construct a classification model using radiomics combined with artificial intelligence means such as machine learning and deep learning, thereby realizing classification and evaluation of individual muscle ultrasound images.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image feature extraction and classification method based on muscle ultrasound, which is characterized by comprising the following steps:
acquiring a muscle ultrasonic image, taking the muscle ultrasonic image as a training data set, and extracting first image features from the training data set;
performing feature selection and feature dimension reduction on the first image features to obtain second image features;
training a machine learning model and a deep learning model based on the second image features to obtain a classifier, and performing model verification on the classifier to obtain a classification model of the muscle ultrasonic image;
and acquiring a muscle ultrasonic image to be classified, taking the muscle ultrasonic image to be classified as a test data set, extracting a third image feature from the test data set, and inputting the third image feature into a classification model of the muscle ultrasonic image for classification to obtain a classification result.
2. The method for extracting and classifying image features based on muscle ultrasound according to claim 1, wherein the acquiring a muscle ultrasound image, using the muscle ultrasound image as a training data set, comprises:
setting a detection mode of an ultrasonic imaging system as a musculoskeletal detection mode;
Placing the long axis of the ultrasonic probe parallel to the long-axis direction of the muscle, and keeping the ultrasonic probe at the first detection position by means of a marker;
based on the detection mode, acquiring a muscle ultrasonic image of the first detection position by using a real-time B-mode ultrasonic imaging device under the static state of the subject;
and taking the muscle ultrasonic image of the first detection position as the training data set.
3. The method of claim 1, wherein the extracting the first image feature from the training dataset comprises:
performing a normalized Radon transform on the training data set to obtain a Radon transform matrix;
taking the gradient of the Radon transform matrix and performing edge enhancement to obtain a Radon transform gradient matrix;
performing binarization and clustering on the Radon transform gradient matrix to obtain deep fascia feature points, muscle fascicle feature points and superficial fascia feature points;
accurately dividing the deep fascia feature points, the muscle fascicle feature points and the superficial fascia feature points and applying the inverse Radon transform to obtain muscle thickness, muscle fiber length and pennation angle features;
obtaining the muscle morphological features from the muscle thickness, muscle fiber length and pennation angle features;
acquiring the average frequency analysis feature of the training dataset; wherein the average frequency analysis feature is calculated as MFAF = ∑_{i=1}^{n} f_i I_i / ∑_{i=1}^{n} I_i, where n, I, and f are the length, power, and frequency of the power density spectrum, respectively;
the first image feature is derived based on the muscle morphology feature and the average frequency analysis feature.
4. The method of claim 1, wherein the extracting the first image feature from the training dataset comprises:
extracting first-order statistical features from the training dataset based on pixel gray distribution calculations; wherein the first order statistical features include integrated spectral density, mean, standard deviation, variance, skewness, kurtosis, and energy;
extracting Haralick features from the training dataset based on gray level co-occurrence matrix calculation; wherein the Haralick features include contrast, correlation, energy, entropy, homogeneity and symmetry;
extracting Galloway features from the training dataset based on gray level run length matrix calculation; wherein the Galloway features include short run emphasis, long run emphasis, gray level non-uniformity, run length non-uniformity, and run percentage;
extracting local binary pattern features from the training dataset based on a comparison of the central pixel of a local image region with its neighborhood; wherein the local binary pattern features include energy and entropy, the energy being LBP_energy = ∑_i f_i², the entropy being LBP_entropy = −∑_i f_i² log₂(f_i), where f_i represents the frequency of the i-th block in the local image region;
obtaining the image texture analysis feature according to the first-order statistical feature, the Haralick feature, the Galloway feature and the local binary pattern feature;
and obtaining the first image characteristic based on the image texture analysis characteristic.
5. The method for extracting and classifying image features based on muscle ultrasound according to claim 1, wherein the performing feature selection and feature dimension reduction on the first image features to obtain second image features comprises:
performing feature selection on the first image features by adopting a t-test method to obtain screened first image features;
and performing feature dimension reduction on the screened first image features by adopting principal component analysis, linear discriminant analysis, t-SNE, multidimensional scaling, isometric mapping, locally linear embedding, Laplacian eigenmaps and neighborhood components analysis to obtain the second image features.
6. The method for extracting and classifying image features based on muscle ultrasound according to claim 1, wherein training a machine learning model and a deep learning model based on the second image features to obtain a classifier comprises:
training a machine learning model to be trained and a deep learning model based on the second image features to obtain training results;
evaluating the training result through a first performance index to obtain the classifier; the first performance index comprises accuracy, precision, recall, F1 score and area under the receiver operating characteristic curve.
7. The method for extracting and classifying image features based on muscle ultrasound according to claim 1, wherein the performing model verification on the classifier obtains a classification model of the muscle ultrasound image, further comprising:
acquiring a validation data set, inputting the validation data set into the classifier, and performing model validation using 10-fold cross-validation to obtain the validation accuracy;
and adjusting the classifier according to the validation accuracy to obtain the classification model of the muscle ultrasound image.
8. An image feature extraction and classification device based on muscle ultrasound, the device comprising:
The first image feature acquisition module is used for acquiring a muscle ultrasonic image, taking the muscle ultrasonic image as a training data set and extracting first image features from the training data set;
the second image feature acquisition module is used for carrying out feature selection and feature dimension reduction on the first image features to obtain second image features;
the classification model acquisition module is used for training a machine learning model and a deep learning model based on the second image features to obtain a classifier, and performing model verification on the classifier to obtain a classification model of the muscle ultrasonic image;
the classifying module is used for acquiring a muscle ultrasonic image to be classified, taking the muscle ultrasonic image to be classified as a test data set, extracting a third image feature from the test data set, and inputting the third image feature into a classifying model of the muscle ultrasonic image for classification to obtain a classifying result.
9. An intelligent terminal, characterized in that the intelligent terminal comprises a memory, a processor and a muscle ultrasound based image feature extraction and classification program stored in the memory and operable on the processor, wherein the processor implements the steps of the muscle ultrasound based image feature extraction and classification method according to any one of claims 1-7 when executing the muscle ultrasound based image feature extraction and classification program.
10. A computer readable storage medium, wherein the computer readable storage medium has stored thereon a muscle ultrasound based image feature extraction and classification program, which when executed by a processor, implements the steps of the muscle ultrasound based image feature extraction and classification method of any of claims 1-7.
CN202211093348.0A 2022-09-08 2022-09-08 Image feature extraction and classification method based on muscle ultrasound Pending CN116309250A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211093348.0A CN116309250A (en) 2022-09-08 2022-09-08 Image feature extraction and classification method based on muscle ultrasound
PCT/CN2022/137851 WO2024051015A1 (en) 2022-09-08 2022-12-09 Image feature extraction and classification method based on muscle ultrasound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211093348.0A CN116309250A (en) 2022-09-08 2022-09-08 Image feature extraction and classification method based on muscle ultrasound

Publications (1)

Publication Number Publication Date
CN116309250A true CN116309250A (en) 2023-06-23

Family

ID=86796492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211093348.0A Pending CN116309250A (en) 2022-09-08 2022-09-08 Image feature extraction and classification method based on muscle ultrasound

Country Status (2)

Country Link
CN (1) CN116309250A (en)
WO (1) WO2024051015A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345633A (en) * 2013-06-27 2013-10-09 山东大学 Structural nuclear magnetic resonance image processing method used for Alzheimer disease early detection
CN109087298B (en) * 2018-08-17 2020-07-28 电子科技大学 Alzheimer's disease MRI image classification method
JP6757378B2 (en) * 2018-08-28 2020-09-16 株式会社モルフォ Image identification device, image identification method and image identification program
CN110276772B (en) * 2019-05-10 2022-10-18 深圳大学 Automatic positioning method and system for structural elements in muscle tissue
CN111242174B (en) * 2019-12-31 2022-09-23 浙江大学 Liver cancer image feature extraction and pathological classification method based on imaging omics

Also Published As

Publication number Publication date
WO2024051015A1 (en) 2024-03-14

Similar Documents

Publication Publication Date Title
Aycheh et al. Biological brain age prediction using cortical thickness data: a large scale cohort study
Martí-Juan et al. A survey on machine and statistical learning for longitudinal analysis of neuroimaging data in Alzheimer’s disease
Wang et al. Classification of diffusion tensor metrics for the diagnosis of a myelopathic cord using machine learning
Mwangi et al. Visualization and unsupervised predictive clustering of high-dimensional multimodal neuroimaging data
Sharma et al. Brain tumor segmentation using DE embedded OTSU method and neural network
Klein et al. Early diagnosis of dementia based on intersubject whole-brain dissimilarities
Mamun et al. Deep Learning Based Model for Alzheimer's Disease Detection Using Brain MRI Images
CN112348785A (en) Epileptic focus positioning method and system
Crimi et al. MultiLink analysis: Brain network comparison via sparse connectivity analysis
CN116821753A (en) Machine learning-based community acquired pneumonia pathogen type prediction method
CN114842969A (en) Mild cognitive impairment assessment method based on key fiber bundles
Yang et al. Diagnosis of Parkinson’s disease based on 3D ResNet: The frontal lobe is crucial
Song et al. A novel computer-assisted diagnosis method of knee osteoarthritis based on multivariate information and deep learning model
Jiang et al. Transfer learning on T1-weighted images for brain age estimation
Jeong et al. Neonatal encephalopathy prediction of poor outcome with diffusion-weighted imaging connectome and fixel-based analysis
Çelebi et al. Leveraging Deep Learning for Enhanced Detection of Alzheimer's Disease Through Morphometric Analysis of Brain Images.
Wang et al. Adaptive Weights Integrated Convolutional Neural Network for Alzheimer's Disease Diagnosis
WO2024051015A1 (en) Image feature extraction and classification method based on muscle ultrasound
Sharma et al. Machine Learning of Diffusion Weighted Imaging for Prediction of Seizure Susceptibility Following Traumatic Brain Injury
SupriyaPatro et al. Lightweight 3d convolutional neural network for schizophrenia diagnosis using mri images and ensemble bagging classifier
Zhu Early diagnosis of Parkinson's Disease by analyzing magnetic resonance imaging brain scans and patient characteristic
Nisha et al. SGD-DABiLSTM based MRI Segmentation for Alzheimer’s disease Detection
CN114305387A (en) Magnetic resonance imaging-based method, equipment and medium for classifying small cerebral vascular lesion images
CN114847922A (en) Brain age prediction method based on automatic fiber bundle identification
CN113080929A (en) anti-NMDAR encephalitis image feature classification method based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination