CN112132788B - Bone age assessment method based on characteristic region grade identification

Bone age assessment method based on characteristic region grade identification

Info

Publication number
CN112132788B
CN112132788B CN202010890447.6A
Authority
CN
China
Prior art keywords
bone
attention
random
map
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010890447.6A
Other languages
Chinese (zh)
Other versions
CN112132788A (en)
Inventor
尹久
池凯凯
吴旻媛
张书彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010890447.6A
Publication of CN112132788A
Application granted
Publication of CN112132788B
Active legal status
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • G06T7/0016Biomedical image inspection using an image reference approach involving temporal comparison
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A bone age assessment method based on characteristic region grade identification segments the 14 specific bones used for bone age assessment from each hand radiograph; three data enhancement techniques are used to expand the data set and improve the generalization ability of the network; and a dual-attention convolution model is trained for each bone to obtain a bone maturity grade assessment model. Unlike traditional intelligent models based on the full palm, the method introduces an attention mechanism to jointly analyze the cropped local feature maps, further improving assessment accuracy. Test results are superior to automatic bone age assessment methods based on full palmar bone images.

Description

Bone age assessment method based on characteristic region grade identification
Technical Field
The invention relates to image recognition and deep learning technologies, and in particular to a bone age assessment method using a convolutional neural network classification model based on a dual-attention mechanism.
Background
Bone age assessment is important for understanding the growth and development of children. It is a medical examination used by pediatricians and pediatric endocrinologists to determine the difference between a child's bone age and chronological age. Bone age assessment can be used to diagnose and treat growth and endocrine disorders in children and adolescents, helps predict final adult height, and assists in planning surgical operations related to spinal correction, lower limb equalization, and the like. Beyond monitoring children's growth, it is also widely applied in sports, judicial identification, and other fields. In sports, bone age is mainly used to prevent age falsification, standardize competition order, determine athletes' developmental level, design scientific training regimens as an index for athlete selection, and identify sporting talent. In judicial identification, bone age is mainly used to determine the age of criminal suspects or deceased persons, providing a reference for criminal proceedings. The most common and widely accepted method of bone age assessment is manual assessment using an X-ray film of the left hand, including the wrist, palm, and fingers.
The two internationally dominant bone age assessment methods are the GP atlas method and the TW bone maturity scoring method. The GP atlas method estimates bone age by matching a child's X-ray image against a reference atlas; it is simple, but insufficiently reliable and highly subjective. The TW3 method is complex and very time-consuming, making large-scale application difficult. The bone age evaluation method suited to Chinese adolescents is the CHN method, a bone maturity scoring method with automatically weighted scores derived from discriminant analysis; it has good representativeness and timeliness, is more scientific and advanced, improves evaluation accuracy and consistency, reduces random error, is simpler and more efficient than the TW3 and GP atlas methods, and is the bone age evaluation standard referenced by the invention. However, conventional manual bone age assessment by doctors has two major drawbacks: 1) the evaluation is highly subjective; unless the assessor is an expert, different doctors evaluating the same X-ray film often disagree, and even the same doctor evaluating the same film at different times often produces different results; 2) bone age evaluation requires strong expertise and strict long-term training, and the evaluation process itself is time-consuming.
Recent research shows that convolutional neural networks (CNNs) based on deep learning have strong capability in image object detection and classification tasks and can improve performance on many detection tasks in biomedicine. In medical image analysis, CNNs have been successfully applied to many problems, such as detection and classification of interstitial lung disease, breast cancer detection, and detection and classification of lung nodules. Thus, deep learning methods, particularly CNNs, have been applied to many medical imaging analysis projects with favorable results.
Disclosure of Invention
In order to overcome the defects of existing bone age identification technology, the invention provides an intelligent bone age assessment method based on grade judgment over multiple feature regions of interest (ROIs). In particular, the method is an intelligent implementation and improvement of the CHN method. First, the probabilities of the two most likely grades output by the dual-attention CNN model are used to compute a weighted score for each bone; the bone age is then obtained from the total grade score of the 14 bones via the bone age lookup table, thereby improving the accuracy of bone age assessment.
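As a minimal sketch of this scoring step, the following Python fragment illustrates how the two most probable grades could be combined into a weighted bone score and summed over the 14 bones; the score vectors and the bone age lookup table (grade_scores, score_tables, bone_age_table) are illustrative placeholders, not the actual CHN standard values.

    import numpy as np

    def weighted_bone_score(probs: np.ndarray, grade_scores: np.ndarray) -> float:
        """Weight the maturity scores of the two most probable grades
        by their renormalized predicted probabilities."""
        top2 = np.argsort(probs)[-2:]                # two most likely grade indices
        p = probs[top2] / probs[top2].sum()          # renormalize their probabilities
        return float(np.dot(p, grade_scores[top2]))  # probability-weighted score

    def assess_bone_age(per_bone_probs, score_tables, bone_age_table):
        """Sum the weighted scores of all 14 bones, then look up the bone age.

        per_bone_probs : 14 probability vectors, one per bone classifier
        score_tables   : 14 per-grade score vectors (placeholder values)
        bone_age_table : (min_total_score, bone_age_years) rows, ascending
        """
        total = sum(weighted_bone_score(p, s)
                    for p, s in zip(per_bone_probs, score_tables))
        age = bone_age_table[0][1]
        for threshold, bone_age in bone_age_table:   # last threshold reached wins
            if total >= threshold:
                age = bone_age
        return total, age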
In order to solve the above problems, the invention provides the following technical solution:
A bone age assessment method based on feature region class identification, the method comprising the steps of:
1) The feature regions of interest (ROIs) of the 14 bones in each hand radiograph are automatically located and cropped using a Faster R-CNN method, and data enhancement is realized by combined operations of random rotation, random translation cropping, and random center cropping on the pictures;
2) A channel attention module is constructed: for the channel number, height, and width of an input feature map, an attention map is generated by a shared multi-layer perceptron after a double pooling operation, and the attention map is regarded as the response to a specific category;
3) A spatial attention module is constructed: a double pooling operation is performed on the input feature map to generate two-dimensional spatial descriptors, and a convolution operation with a 7×7 kernel is performed to generate a spatial attention map;
4) For feature regions with low classification accuracy and particularly unbalanced grade distributions, the neural network model is optimized using a Focal loss function.
Further, in the step 1), the cropping rule is:
each bone region is cropped with a suitable fixed-size box that contains the bone's ROI in every hand radiograph while including as little interfering content as possible.
In the step 1), the random rotation process is as follows:
the rotation angle is randomly selected from [-10°, 10°], with a step size of 1°.
In the step 1), the random translation cropping is as follows:
the random translation directions are up, down, left, and right; one direction is selected for each translation, and the translation range is adjusted according to the size of each bone.
In the step 1), the random center cropping is as follows:
crops are centered so that each bone's ROI remains fully contained in the cropped image, using a suitable number of crop boxes and a suitable crop size.
Still further, in the step 2), the feature maps after the data enhancement processing are passed through a channel attention module, which is constructed as follows:
Given an input feature map F ∈ R^(C×H×W), where C is the number of channels, H is the height, and W is the width of the feature map, average pooling and max pooling are both used to generate two different channel descriptors, F_avg^c and F_max^c. The two descriptors are then each passed through a shared multi-layer perceptron (MLP) to generate the channel attention map M_c ∈ R^(C×1×1). The hidden layer size is set to R^(C/r×1×1), where r is the reduction ratio, defaulting to r = 16. After the shared MLP layer, the two outputs are merged by summation to obtain the output features. The channel attention calculation formula is as follows:
M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))
where σ denotes the Sigmoid activation function, δ denotes the ReLU activation function, and W_0 ∈ R^(C/r×C) and W_1 ∈ R^(C×C/r) denote the weights of the first and second layers of the MLP, respectively.
Still further, in the step 3), the construction process for generating the spatial attention map from the input feature map is as follows:
For the input feature map F ∈ R^(C×H×W), an average pooling and a max pooling operation are first performed along the channel axis to generate two two-dimensional spatial descriptors, F_avg^s ∈ R^(1×H×W) and F_max^s ∈ R^(1×H×W). The features represented by the two spatial descriptors are concatenated, and a convolution with a 7×7 kernel is then applied to generate the spatial attention map M_s(F) ∈ R^(1×H×W). The spatial attention is calculated as follows:
M_s(F) = σ(f^(7×7)([AvgPool(F); MaxPool(F)]))
where σ denotes the Sigmoid activation function, f^(7×7) denotes a convolution operation with a 7×7 kernel, and C denotes the number of channels of the input feature map.
Preferably, in the step 4), the two-class Focal loss function is adapted into a suitable multi-class Focal loss function, as follows:
The two-class Focal loss function formula is:
FL(p_t) = -α_t(1 - p_t)^γ log(p_t)
where p_t is the predicted probability of the positive class, α_t ∈ [0, 1] is an introduced weighting factor, γ ≥ 0 is a tunable focusing parameter, and (1 - p_t)^γ is the newly added modulation factor;
Based on the multi-class cross entropy function, let M denote the number of sample classes; let y_ic take the value 0 or 1, taking 1 if the predicted category is the same as the category of sample i and 0 otherwise; and let p_ic denote the predicted probability that sample i belongs to class c. For the weighting factor α, one α_c ∈ [0, 1] is set for each class, and the focusing parameter γ is set uniformly; the multi-class Focal loss function is then expressed as follows:
FL = -∑_{c=1}^{M} α_c y_ic (1 - p_ic)^γ log(p_ic)
the beneficial effects of the invention are as follows: and a attention introducing mechanism is introduced to perform joint analysis on the cut local feature images, so that the accuracy of the evaluation is further improved. The test result is superior to the automatic bone age assessment method based on the full palmar bone image.
Drawings
FIG. 1 is a flow chart of a feature region based level determination method;
FIG. 2 is a channel attention structure;
fig. 3 is a spatial attention structure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
Referring to fig. 1 to 3, the bone age assessment method based on characteristic region grade identification uses a dual-attention convolutional neural network to efficiently and accurately assess bone maturity grades and determine bone age according to the CHN method.
The bone age assessment method based on characteristic region grade identification specifically comprises the following steps:
1) The feature regions of interest (ROIs) of the 14 bones in each hand radiograph are automatically located and cropped using a Faster R-CNN method, and data enhancement is realized by combined operations of random rotation, random translation cropping, and random center cropping on the pictures;
2) A channel attention module is constructed: for the channel number, height, and width of an input feature map, an attention map is generated by a shared multi-layer perceptron after a double pooling operation, and the attention map is regarded as the response to a specific category;
3) A spatial attention module is constructed: a double pooling operation is performed on the input feature map to generate two-dimensional spatial descriptors, and a convolution operation with a 7×7 kernel is performed to generate a spatial attention map;
4) For feature regions with low classification accuracy and particularly unbalanced grade distributions, the neural network model is optimized using a Focal loss function.
Further, in the step 1), the cropping rule is:
each bone region is cropped with a suitable fixed-size box that contains the bone's ROI in every hand radiograph while including as little interfering content as possible.
In the step 1), the random rotation process is as follows:
the rotation angle is randomly selected from [-10°, 10°], with a step size of 1°.
In the step 1), the random translation cropping is as follows:
the random translation directions are up, down, left, and right; one direction is selected for each translation, and the translation range is adjusted according to the size of each bone.
In the step 1), the random center cropping is as follows:
crops are centered so that each bone's ROI remains fully contained in the cropped image, using a suitable number of crop boxes and a suitable crop size.
Still further, in the step 2), the feature maps obtained after data enhancement are passed through a channel attention module to obtain the attention map reflecting the specific category; the channel attention structure is shown in fig. 2, and the channel attention module construction process is as follows:
Given an input feature map F ∈ R^(C×H×W), where C is the number of channels, H is the height, and W is the width of the feature map, average pooling and max pooling are both used to generate two different channel descriptors, F_avg^c and F_max^c. The two descriptors are then each passed through a shared multi-layer perceptron (MLP) to generate the channel attention map M_c ∈ R^(C×1×1). The hidden layer size is set to R^(C/r×1×1), where r is the reduction ratio, defaulting to r = 16. After the shared MLP layer, the two outputs are merged by summation to obtain the output features. The channel attention calculation formula is as follows:
M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))
where σ denotes the Sigmoid activation function, δ denotes the ReLU activation function, and W_0 ∈ R^(C/r×C) and W_1 ∈ R^(C×C/r) denote the weights of the first and second layers of the MLP, respectively.
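A minimal PyTorch sketch of this channel attention module; the class and variable names are my own, and the reduction ratio defaults to r = 16 as in the text.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Shared MLP over average- and max-pooled channel descriptors."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # shared MLP as 1x1 convolutions: W0 maps C -> C/r, W1 maps C/r -> C
            self.mlp = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1, bias=False),
                nn.ReLU(inplace=True),  # the delta activation
                nn.Conv2d(channels // reduction, channels, 1, bias=False),
            )
            self.avg_pool = nn.AdaptiveAvgPool2d(1)
            self.max_pool = nn.AdaptiveMaxPool2d(1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # M_c(F) = sigma(MLP(AvgPool(F)) + MLP(MaxPool(F)))
            attn = torch.sigmoid(self.mlp(self.avg_pool(x)) +
                                 self.mlp(self.max_pool(x)))
            return x * attn  # reweight each channel of the feature map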
Still further, in the step 3), a spatial attention map is generated from the input feature map; the spatial attention structure is shown in fig. 3, and the construction process is described as follows:
For the input feature map F ∈ R^(C×H×W), an average pooling and a max pooling operation are first performed along the channel axis to generate two two-dimensional spatial descriptors, F_avg^s ∈ R^(1×H×W) and F_max^s ∈ R^(1×H×W). The features represented by the two spatial descriptors are concatenated, and a convolution with a 7×7 kernel is then applied to generate the spatial attention map M_s(F) ∈ R^(1×H×W). The spatial attention is calculated as follows:
M_s(F) = σ(f^(7×7)([AvgPool(F); MaxPool(F)]))
where σ denotes the Sigmoid activation function, f^(7×7) denotes a convolution operation with a 7×7 kernel, and C denotes the number of channels of the input feature map.
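A matching sketch of the spatial attention module under the same assumptions; the 7×7 kernel and its padding follow the formula above.

    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """7x7 convolution over channel-wise average and max maps."""
        def __init__(self, kernel_size: int = 7):
            super().__init__()
            # two input channels: the avg-pooled and max-pooled descriptors
            self.conv = nn.Conv2d(2, 1, kernel_size,
                                  padding=kernel_size // 2, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            avg_map = torch.mean(x, dim=1, keepdim=True)    # F_avg^s, 1xHxW
            max_map, _ = torch.max(x, dim=1, keepdim=True)  # F_max^s, 1xHxW
            # M_s(F) = sigma(f^{7x7}([AvgPool(F); MaxPool(F)]))
            attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
            return x * attn  # reweight each spatial location

In a dual-attention block the two modules would typically be applied in sequence, e.g. x = SpatialAttention()(ChannelAttention(c)(x)); the patent text does not spell out the ordering.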
Preferably, in the step 4), the parameters of the two-class Focal loss function are adjusted to obtain a suitable multi-class Focal loss function, so as to optimize the network model; the procedure is as follows:
The two-class Focal loss function formula is:
FL(p_t) = -α_t(1 - p_t)^γ log(p_t)
where p_t is the predicted probability of the positive class; a common approach to the class imbalance problem is to introduce a weighting factor α_t ∈ [0, 1]; γ ≥ 0 is a tunable focusing parameter; and (1 - p_t)^γ is the newly added modulation factor;
Based on the multi-class cross entropy function, let M denote the number of sample classes; let y_ic take the value 0 or 1, taking 1 if the predicted category is the same as the category of sample i and 0 otherwise; and let p_ic denote the predicted probability that sample i belongs to class c. For the weighting factor α, one α_c ∈ [0, 1] is set for each class, and the focusing parameter γ is set uniformly; the multi-class Focal loss function is then expressed as follows:
FL = -∑_{c=1}^{M} α_c y_ic (1 - p_ic)^γ log(p_ic)
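A minimal multi-class Focal loss sketch consistent with these definitions; the per-class weights alpha and the focusing parameter gamma are hyperparameters, and the default gamma = 2.0 is an illustrative assumption rather than a value from the patent.

    import torch
    import torch.nn.functional as F

    def multiclass_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                              alpha: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
        """logits: (N, M) raw class scores; targets: (N,) class indices;
        alpha: (M,) per-class weights alpha_c in [0, 1]."""
        log_p = F.log_softmax(logits, dim=1)                       # log p_ic
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # true-class log prob
        pt = log_pt.exp()
        # FL = -alpha_c * (1 - p_ic)^gamma * log(p_ic), averaged over the batch
        loss = -alpha[targets] * (1.0 - pt) ** gamma * log_pt
        return loss.mean()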

Claims (6)

1. A bone age assessment method based on characteristic region class identification, the method comprising the steps of:
1) automatically locating and cropping the feature regions of interest (ROIs) of the 14 bones in each hand radiograph using a Faster R-CNN method, and realizing data enhancement by combined operations of random rotation, random translation cropping, and random center cropping on the pictures;
2) constructing a channel attention module: for the channel number, height, and width of an input feature map, generating an attention map by a shared multi-layer perceptron after a double pooling operation, the attention map being regarded as the response to a specific category;
the channel attention module construction process comprises the following steps:
given input feature F.epsilon.R C×H×W Where C is the number of channels of the feature map, H is the height of the feature map, W is the width of the feature map, and two different channel descriptors are generated using both the average pooling and the maximum pooling featuresThe two descriptors are then passed through a shared multi-layer perceptron model MLP to generate a channel attention weighting map M, respectively c ∈R C ×1×1 The hidden layer is scaled to +.>Where r is the reduction rate, defaulting to r=16; by sharing the MLP layer, useThe summation yields the output characteristics, and the channel attention calculation formula is as follows:
M c (F)=σ(MLP(AvgPool(F))+MLP(MaxPool(F)))
where sigma denotes the activation function Sigmoid, delta denotes the activation function Relu,and->Weights of the first layer and the second layer in the MLP are respectively represented;
3) constructing a spatial attention module: performing a double pooling operation on the input feature map to generate two-dimensional spatial descriptors, and performing a convolution operation with a 7×7 kernel to generate a spatial attention map;
the construction process of the spatial attention module comprises the following steps:
for the input feature map F ∈ R^(C×H×W), an average pooling and a max pooling operation are first performed along the channel axis to generate two two-dimensional spatial descriptors, F_avg^s ∈ R^(1×H×W) and F_max^s ∈ R^(1×H×W); the features represented by the two spatial descriptors are concatenated, and a convolution with a 7×7 kernel is then applied to generate the spatial attention map M_s(F) ∈ R^(1×H×W); the spatial attention is calculated as follows:
M_s(F) = σ(f^(7×7)([AvgPool(F); MaxPool(F)]))
where σ denotes the Sigmoid activation function, f^(7×7) denotes a convolution operation with a 7×7 kernel, and C denotes the number of channels of the input feature map;
4) for feature regions with low classification accuracy and particularly unbalanced grade distributions, optimizing the neural network model using a Focal loss function.
2. The method according to claim 1, wherein in step 1), the cropping rule is: each bone region is cropped with a suitable fixed-size box that contains the bone's ROI in every hand radiograph while including as little interfering content as possible.
3. The method according to claim 1 or 2, wherein in step 1), the random rotation process is: the rotation angle is randomly selected from [-10°, 10°], with a step size of 1°.
4. The method according to claim 1 or 2, wherein in step 1), the random translation cropping is: the random translation directions are up, down, left, and right; one direction is selected for each translation, and the translation range is adjusted according to the size of each bone.
5. The method according to claim 1 or 2, wherein in step 1), the random center cropping is: crops are centered so that each bone's ROI remains fully contained in the cropped image, using a suitable number of crop boxes and a suitable crop size.
6. The method according to claim 1 or 2, wherein in step 4), the Focal loss function is applied as follows: the two-class Focal loss function is adapted into a suitable multi-class Focal loss function, the two-class Focal loss function formula being:
FL(p_t) = -α_t(1 - p_t)^γ log(p_t)
where p_t is the predicted probability of the positive class, α_t ∈ [0, 1] is an introduced weighting factor, γ ≥ 0 is a tunable focusing parameter, and (1 - p_t)^γ is the newly added modulation factor;
based on the multi-class cross entropy function, let M denote the number of sample classes; let y_ic take the value 0 or 1, taking 1 if the predicted category is the same as the category of sample i and 0 otherwise; and let p_ic denote the predicted probability that sample i belongs to class c; for the weighting factor α, one α_c ∈ [0, 1] is set for each class, and the focusing parameter γ is set uniformly; the multi-class Focal loss function is then expressed as follows:
FL = -∑_{c=1}^{M} α_c y_ic (1 - p_ic)^γ log(p_ic)
CN202010890447.6A 2020-08-29 2020-08-29 Bone age assessment method based on characteristic region grade identification Active CN112132788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010890447.6A CN112132788B (en) 2020-08-29 2020-08-29 Bone age assessment method based on characteristic region grade identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010890447.6A CN112132788B (en) 2020-08-29 2020-08-29 Bone age assessment method based on characteristic region grade identification

Publications (2)

Publication Number Publication Date
CN112132788A CN112132788A (en) 2020-12-25
CN112132788B true CN112132788B (en) 2024-04-16

Family

ID=73848364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010890447.6A Active CN112132788B (en) 2020-08-29 2020-08-29 Bone age assessment method based on characteristic region grade identification

Country Status (1)

Country Link
CN (1) CN112132788B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396242B (en) * 2022-10-31 2023-04-07 江西神舟信息安全评估中心有限公司 Data identification method and network security vulnerability detection method
CN117252881B (en) * 2023-11-20 2024-01-26 四川大学 Bone age prediction method, system, equipment and medium based on hand X-ray image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503635A (en) * 2019-07-30 2019-11-26 浙江工业大学 A kind of hand bone X-ray bone age assessment method based on isomeric data converged network
WO2020062840A1 (en) * 2018-09-30 2020-04-02 杭州依图医疗技术有限公司 Method and device for detecting bone age
CN111161254A (en) * 2019-12-31 2020-05-15 上海体育科学研究所 Bone age prediction method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020062840A1 (en) * 2018-09-30 2020-04-02 杭州依图医疗技术有限公司 Method and device for detecting bone age
CN110503635A (en) * 2019-07-30 2019-11-26 浙江工业大学 A kind of hand bone X-ray bone age assessment method based on isomeric data converged network
CN111161254A (en) * 2019-12-31 2020-05-15 上海体育科学研究所 Bone age prediction method

Also Published As

Publication number Publication date
CN112132788A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
JP6999812B2 (en) Bone age evaluation and height prediction model establishment method, its system and its prediction method
CN107464250B (en) Automatic breast tumor segmentation method based on three-dimensional MRI (magnetic resonance imaging) image
TWI684997B (en) Establishing method of bone age assessment and height prediction model, bone age assessment and height prediction system, and bone age assessment and height prediction method
CN112132788B (en) Bone age assessment method based on characteristic region grade identification
CN109508644A (en) Facial paralysis grade assessment system based on the analysis of deep video data
CN110084803A (en) Eye fundus image method for evaluating quality based on human visual system
CN111008974A (en) Multi-model fusion femoral neck fracture region positioning and segmentation method and system
CN111161287A (en) Retinal vessel segmentation method based on symmetric bidirectional cascade network deep learning
CN114037011B (en) Automatic identification and cleaning method for tongue color noise labeling sample of traditional Chinese medicine
WO2022088729A1 (en) Point positioning method and related apparatus, and device, medium and computer program
CN113298780B (en) Deep learning-based bone age assessment method and system for children
CN114842238B (en) Identification method of embedded breast ultrasonic image
Rahman et al. HOG+ CNN Net: Diagnosing COVID-19 and pneumonia by deep neural network from chest X-Ray images
CN112820399A (en) Method and device for automatically diagnosing benign and malignant thyroid nodules
CN114821189A (en) Focus image classification and identification method based on fundus images
CN113657449A (en) Traditional Chinese medicine tongue picture greasy classification method containing noise labeling data
CN111062953A (en) Method for identifying parathyroid hyperplasia in ultrasonic image
Wang et al. A ResNet‐based approach for accurate radiographic diagnosis of knee osteoarthritis
CN114140437A (en) Fundus hard exudate segmentation method based on deep learning
CN113420793A (en) Improved convolutional neural network ResNeSt 50-based gastric ring cell carcinoma classification method
Lu et al. Data enhancement and deep learning for bone age assessment using the standards of skeletal maturity of hand and wrist for chinese
CN117557840A (en) Fundus lesion grading method based on small sample learning
CN113537375B (en) Diabetic retinopathy grading method based on multi-scale cascade
CN116129185A (en) Fuzzy classification method for tongue-like greasy feature of traditional Chinese medicine based on collaborative updating of data and model
CN109191425A (en) medical image analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant