CN109544518B - Method and system applied to bone maturity assessment - Google Patents

Method and system applied to bone maturity assessment

Info

Publication number
CN109544518B
CN109544518B (application CN201811319500.6A)
Authority
CN
China
Prior art keywords
bone
feature
image
channel
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811319500.6A
Other languages
Chinese (zh)
Other versions
CN109544518A (en)
Inventor
王翔宇
王书强
申妍燕
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201811319500.6A
Publication of CN109544518A
Application granted
Publication of CN109544518B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30008: Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of bone assessment, and in particular to a method and a system for bone maturity assessment. The invention extracts features from bone images using transfer learning and identifies bone maturity using a semi-supervised generative adversarial network based on bone feature generation and feature discrimination, thereby addressing problems such as the excessive parameter count and overfitting that arise when classifying high-resolution bone input images.

Description

Method and system applied to bone maturity assessment
Technical Field
The invention relates to the technical field of bone assessment, and in particular to a method and a system for bone maturity assessment.
Background
Skeletal growth stage classification is one of the important indexes of human growth maturity: measuring an adolescent's skeletal maturity makes it possible to predict growth and development potential, or to assist in predicting the timing of surgery for a specific disease and the disease's progression. In traditional bone maturity assessment, an expert compares the radiograph against a reference atlas, or scores the degree of development of each epiphysis and sums the scores to obtain a maturity estimate. These methods depend heavily on the experience of specialist physicians; manual reading varies greatly between physicians, and its time and labor costs are very high.
Auxiliary diagnosis models that evaluate the bone growth stage with existing deep learning techniques depend on large amounts of labeled training data and, in particular, on high-quality labels. Bone maturity assessment with deep learning needs many labeled bone image samples; the labeling work requires a team of specialist physicians and considerable labor cost, and label quality depends on how experienced that team is, so effective training samples are hard to obtain. In addition, labeling a huge data set consumes enormous time, labeling quality is easily limited by physicians' energy and stamina, and the labeling quality of a large number of bone samples is difficult to guarantee.
Disclosure of Invention
The invention mainly solves the technical problem of providing a method for bone maturity assessment that extracts features from bone images using transfer learning and identifies bone maturity using a semi-supervised generative adversarial network based on bone feature generation and feature discrimination, thereby addressing problems such as the excessive parameter count and overfitting that arise when classifying high-resolution bone input images; a system for bone maturity assessment is also provided.
In order to solve the technical problems, the invention adopts a technical scheme that: a method applied to bone maturity assessment is provided, wherein the method comprises the following steps:
step S1, establishing a dense connection network model with image feature extraction capability, pre-training the dense connection network model, and then performing module transfer on dense connection blocks in the dense connection network model to generate a bone feature extraction module capable of obtaining primary features of a bone image;
step S2, inputting one-dimensional noise with Gaussian distribution into a multi-channel feature generator, generating a bone feature map with the same size as the feature size extracted by the bone feature extraction module in each channel of the multi-channel feature generator, then performing cascade splicing on each bone feature map in each channel of the multi-channel feature generator, obtaining a bone feature map with the same dimension as the feature dimension extracted by the bone feature extraction module after splicing, and outputting the bone feature map;
and step S3, inputting the primary features of the bone image, obtained by passing the real image through the bone feature extraction module, together with the bone feature map generated by the multi-channel feature generator, into the capsule bone feature discriminator, thereby obtaining bone instantiation feature vectors that retain spatial position information; the category corresponding to the bone instantiation feature vectors then gives the bone maturity assessment result for the real image.
As an improvement of the present invention, the method further includes step S4: constructing a loss function for the bone feature map in the multi-channel feature generator, defining the loss function in the capsule bone feature discriminator using cross entropy, extracting the bone instantiation feature vector under this loss function, and obtaining the category corresponding to the bone instantiation feature vector, so as to obtain the bone maturity assessment result for the real image.
As a further improvement of the present invention, step S1 includes:
step S11, establishing a dense connection network model, and taking a training set image of a pre-training data set as an input image;
and step S12, performing feature extraction on the input image through a plurality of dense connecting blocks and transition layers, and outputting a prediction result.
As a further improvement of the present invention, step S1 further includes:
step S13, calculating a loss function by using the prediction result and the label of the real image of the pre-training data set, then reversely propagating and optimizing network parameters, and training the dense connection network model through the network parameters to enable the dense connection network model to have the image feature extraction capability;
and step S14, carrying out module migration on the pre-training data set to generate a bone feature extraction module capable of obtaining primary features of a bone image, and inputting a real image of a bone into the bone feature extraction module to obtain the primary features of the bone image.
As a further improvement of the present invention, step S2 includes:
step S21, inputting one-dimensional noise with Gaussian distribution into the multi-channel feature generator;
and step S22, for the feature generator in each channel of the multi-channel feature generator, passing the input noise through a plurality of deconvolution layers; the generated bone feature map is enlarged layer by layer through the deconvolutions, so that each generated bone feature map has the same feature dimensions as those extracted by the bone feature extraction module.
As a further improvement of the present invention, step S2 further includes:
step S23, performing cascade splicing of the bone feature maps generated by the feature generators in each channel of the multi-channel feature generator, the spliced result having the same dimensionality as the features extracted by the bone feature extraction module;
and step S24, inputting the spliced bone feature map into a capsule bone feature discriminator.
As a further improvement of the present invention, step S3 includes:
step S31, inputting the primary features of the bone image, obtained by passing the real image through the bone feature extraction module, and the bone feature map generated by the multi-channel feature generator into the capsule bone feature discriminator;
and step S32, extracting feature vectors from the primary features of the bone image and the bone feature map through a plurality of convolution layers.
As a further improvement of the present invention, step S3 further includes:
step S33, inputting the extracted feature vector into a capsule layer, thereby obtaining a bone instantiation feature vector retaining spatial position information;
step S34, obtaining the category corresponding to the bone instantiation feature vector, and then obtaining the bone maturity evaluation result of the real image.
A system for bone maturity assessment, comprising:
the pre-trained bone image feature extraction model is used for establishing a dense connection network model and pre-training it so that it acquires image feature extraction capability;
the bone feature extraction module is used for obtaining primary features of the bone image through the real image;
the multi-channel feature generator is used for generating a bone feature map with the same size as the feature size extracted by the bone feature extraction module, performing cascade splicing on the bone feature map, and obtaining the bone feature map with the same dimension as the feature dimension extracted by the bone feature extraction module after splicing;
and the capsule bone feature discriminator is used for obtaining a bone instantiation feature vector retaining the spatial position information.
As an improvement of the invention, the system further comprises a compensation module for constructing a loss function of the bone feature map in the multi-channel feature generator, the loss function then being defined by cross entropy in the capsule bone feature discriminator.
The invention has the following beneficial effects: compared with the prior art, features are extracted from the bone image using transfer learning, and bone maturity is identified using a semi-supervised generative adversarial network based on bone feature generation and feature discrimination, thereby addressing problems such as the excessive parameter count and overfitting that arise when classifying high-resolution bone input images.
Drawings
FIG. 1 is a block diagram of the steps of the method of the present invention applied to bone maturity assessment;
FIG. 2 is a first embodiment of the method for bone maturity assessment according to the present invention;
FIG. 3 is a block diagram of step S1 of the method of the present invention applied to bone maturity assessment;
FIG. 4 is a block diagram of step S2 of the method of the present invention applied to bone maturity assessment;
FIG. 5 is a block diagram of step S3 of the method of the present invention applied to bone maturity assessment;
FIG. 6 is a schematic view of the structure of the ulna and radius;
FIG. 7 is a schematic diagram of the maturity stages of the ulna and radius bones according to the present invention;
FIG. 8 is a block diagram of a system for bone maturity assessment according to the present invention;
FIG. 9 is a block diagram of the operating mechanism of the capsule layer of the capsule bone feature discriminator according to the second embodiment of the present invention;
fig. 10 is a block diagram of a routing algorithm of a capsule bone feature discriminator according to a second embodiment of the present invention.
Detailed Description
In existing methods that address bone growth stage evaluation with deep learning, most models extract bone image features with convolution and pooling operations. A convolutional neural network, however, cannot effectively detect the specific orientation of features when processing images, so spatial feature information is lost, and this loss of spatial transformation information degrades the performance of a bone maturity prediction model. In the task of classifying ulna and radius growth stages with a convolutional network, the network can be pushed to learn spatial variation only by applying various spatial transformations to the input bone data through data augmentation, so that positional changes such as rotation are recognized better; but a convolutional network inherently tends to lose the spatial hierarchy and orientation information of an image, so it is insensitive to spatial transformations of the same semantic content, which reduces classification accuracy.
The good performance of a deep learning bone maturity evaluation model depends on a large number of high-quality training samples; whether enough training data are available is an important factor in whether the model can achieve a satisfactory result. When a traditional convolutional neural network predicts the bone growth stage, its pooling operations easily lose spatial position information, so reaching high accuracy usually requires a training set rich enough for the network to learn diverse shape and pose characteristics. For a bone maturity assessment task, however, large numbers of image samples are hard to obtain for reasons such as patient privacy, so the limited number of training samples reduces the classification accuracy of a general convolutional network model on this task.
An existing automatic bone age identification system based on an improved residual network (Chinese patent application CN201711274742.3) uses a sliding window to crop the preprocessed bone image into input images. The multiple crops of the same image are each fed to a trained residual bone age classification network, which extracts features through several residual feature extraction modules. Finally, each cropped image is classified through a softmax function, yielding multiple classification results, and the most frequent result is taken as the bone age prediction for the original image.
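The final voting step of that prior-art scheme, taking the most frequent classification among the sliding-window crops as the bone age prediction, can be sketched as follows (the function name is illustrative, not from the patent):

```python
from collections import Counter

def majority_vote(window_predictions):
    """Return the most frequent class among per-window predictions,
    as in the sliding-window scheme described above."""
    return Counter(window_predictions).most_common(1)[0][0]

# e.g. five window crops classified individually
print(majority_vote([3, 5, 3, 3, 2]))  # 3
```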
Other solutions similar to the present invention include Chinese patent application CN201711125692.2, which provides an automated bone age assessment method comprising bone region-of-interest detection and bone region classification. It extracts the region of interest of the target bone by training an RPN and a Fast R-CNN, inputs the region-of-interest bone image obtained by target detection into a classification model, extracts and downsamples the input image features through convolution and pooling, and finally outputs the probability of the image belonging to each category and the final bone age prediction.
Further, Chinese patent application CN201711065065.4 proposes an X-ray bone age prediction method based on deep learning: a hand X-ray image is preprocessed and augmented, the hand bone region is segmented with a neural network, palm and background regions in the X-ray image are labeled as positive and negative sample blocks, the labeled samples are fed to the neural network for training, and the results are used as segmentation templates. After processing the segmentation template, small interfering regions around it are removed along with the muscle tissue of the palm region, yielding a denoised hand bone X-ray image. The denoised image is then classified by bone age with a neural network model such as GoogLeNet or AlexNet using transfer learning.
As shown in fig. 1, the present invention provides a method for bone maturity assessment, comprising the following steps:
step S1, establishing a dense connection network model with image feature extraction capability, pre-training the dense connection network model, and then performing module transfer on dense connection blocks in the dense connection network model to generate a bone feature extraction module capable of obtaining primary features of a bone image;
step S2, inputting one-dimensional noise with Gaussian distribution into a multi-channel feature generator, generating a bone feature map with the same size as the feature size extracted by the bone feature extraction module in each channel of the multi-channel feature generator, then performing cascade splicing on each bone feature map in each channel of the multi-channel feature generator, obtaining a bone feature map with the same dimension as the feature dimension extracted by the bone feature extraction module after splicing, and outputting the bone feature map;
and step S3, inputting the primary features of the bone image, obtained by passing the real image through the bone feature extraction module, together with the bone feature map generated by the multi-channel feature generator, into the capsule bone feature discriminator, thereby obtaining bone instantiation feature vectors that retain spatial position information; the category corresponding to the bone instantiation feature vectors then gives the bone maturity assessment result for the real image.
As shown in fig. 3, step S1 includes:
step S11, establishing a dense connection network model, and taking a training set image of a pre-training data set as an input image;
step S12, performing feature extraction on the input image through a plurality of dense connecting blocks and a transition layer, and outputting a prediction result;
step S13, calculating a loss function by using the prediction result and the label of the real image of the pre-training data set, then reversely propagating and optimizing network parameters, and training the dense connection network model through the network parameters to enable the dense connection network model to have the image feature extraction capability;
and step S14, carrying out module migration on the pre-training data set to generate a bone feature extraction module capable of obtaining primary features of a bone image, and inputting a real image of a bone into the bone feature extraction module to obtain the primary features of the bone image.
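As a hedged illustration of the dense-connection structure used in steps S11 to S14: in a DenseNet-style model, each layer's output feature maps are concatenated onto all preceding ones, and transition layers compress the channel count between blocks. The sketch below only tracks channel counts; the DenseNet-121-style configuration is an assumption, since the patent does not specify one.

```python
def densenet_channels(init_channels, block_layers, growth_rate, compression=0.5):
    """Track the channel count through dense blocks and transition layers.

    Inside a dense block, every layer contributes `growth_rate` new feature
    maps that are concatenated with all earlier ones; each transition layer
    then compresses the channel count by `compression`.
    """
    channels = init_channels
    trace = []
    for i, n_layers in enumerate(block_layers):
        channels += n_layers * growth_rate          # dense connectivity: concat adds k per layer
        trace.append(channels)
        if i < len(block_layers) - 1:
            channels = int(channels * compression)  # transition layer halves channels
            trace.append(channels)
    return trace

# DenseNet-121-style configuration: 64 initial channels, blocks of 6/12/24/16 layers
print(densenet_channels(64, [6, 12, 24, 16], 32))
# [256, 128, 512, 256, 1024, 512, 1024]
```

Module migration then keeps only the front blocks of this chain as the bone feature extraction module.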
In the present invention, as shown in fig. 4, step S2 includes:
step S21, inputting one-dimensional noise with Gaussian distribution into the multi-channel feature generator;
step S22, for the feature generator in each channel of the multi-channel feature generator, passing the input noise through a plurality of deconvolution layers; the generated bone feature map is enlarged layer by layer through the deconvolutions, so that each generated bone feature map has the same feature dimensions as those extracted by the bone feature extraction module;
step S23, performing cascade splicing of the bone feature maps generated by the feature generators in each channel of the multi-channel feature generator, the spliced result having the same dimensionality as the features extracted by the bone feature extraction module;
and step S24, inputting the spliced bone feature map into the capsule bone feature discriminator.
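The layer-by-layer enlargement in steps S21-S24 follows the standard transposed-convolution (deconvolution) output-size formula. The kernel/stride/padding values below are illustrative assumptions; the patent does not give concrete layer shapes.

```python
def deconv_out(size, kernel, stride, padding=0):
    """Spatial output size of a transposed convolution (deconvolution):
    out = (in - 1) * stride - 2 * padding + kernel."""
    return (size - 1) * stride - 2 * padding + kernel

# Enlarge a 1x1 noise seed step by step: 1 -> 4 -> 8 -> 16 -> 32
size = 1
sizes = []
for kernel, stride, padding in [(4, 1, 0), (4, 2, 1), (4, 2, 1), (4, 2, 1)]:
    size = deconv_out(size, kernel, stride, padding)
    sizes.append(size)
print(sizes)  # [4, 8, 16, 32]
```

Each channel's generator is stacked this way until the map matches the extractor's feature size.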
In the present invention, as shown in fig. 5, step S3 includes:
step S31, inputting the primary features of the bone image, obtained by passing the real image through the bone feature extraction module, and the bone feature map generated by the multi-channel feature generator into the capsule bone feature discriminator;
step S32, extracting feature vectors from the primary features of the bone image and the bone feature map through a plurality of convolution layers;
step S33, inputting the extracted feature vector into a capsule layer, thereby obtaining a bone instantiation feature vector retaining spatial position information;
step S34, obtaining the category corresponding to the bone instantiation feature vector, and then obtaining the bone maturity evaluation result of the real image.
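The capsule layer of steps S33 and S34 represents features as vectors whose length encodes presence probability and whose orientation encodes instantiation parameters such as pose. A minimal NumPy sketch of the standard capsule "squash" nonlinearity, taken from the capsule-network literature (the patent does not spell out its exact form):

```python
import numpy as np

def squash(v, eps=1e-8):
    """Scale a capsule vector so its length lies in [0, 1) while its
    orientation (the instantiation information) is preserved."""
    sq = np.sum(v * v, axis=-1, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

v = np.array([3.0, 4.0])      # length 5
s = squash(v)
print(np.linalg.norm(s))      # length shrunk below 1, direction unchanged
```

Because length stays below 1, it can be read directly as the probability that a given bone feature is present.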
The invention provides an embodiment I, as shown in fig. 2, a method for evaluating bone maturity of the embodiment, which comprises the following steps:
step S1, establishing a dense connection network model with image feature extraction capability, pre-training the dense connection network model, and then performing module transfer on dense connection blocks in the dense connection network model to generate a bone feature extraction module capable of obtaining primary features of a bone image;
step S2, inputting one-dimensional noise with Gaussian distribution into a multi-channel feature generator, generating a bone feature map with the same size as the feature size extracted by the bone feature extraction module in each channel of the multi-channel feature generator, then performing cascade splicing on each bone feature map in each channel of the multi-channel feature generator, obtaining a bone feature map with the same dimension as the feature dimension extracted by the bone feature extraction module after splicing, and outputting the bone feature map;
step S3, inputting the primary features of the bone image, obtained by passing the real image through the bone feature extraction module, together with the bone feature map generated by the multi-channel feature generator, into a capsule bone feature discriminator, thereby obtaining bone instantiation feature vectors that retain spatial position information; the category corresponding to the bone instantiation feature vectors then gives the bone maturity assessment result for the real image;
step S4, constructing a loss function of the bone feature map in the multi-channel feature generator, defining the loss function in the capsule bone feature discriminator by using cross entropy, extracting the bone instantiation feature vector in the loss function, and obtaining the category corresponding to the bone instantiation feature vector, so as to obtain the bone maturity evaluation result of the real image.
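One common way to realize the cross-entropy discriminator loss of step S4 in a semi-supervised GAN is to give the discriminator K real maturity classes plus one extra "generated" class. The concrete form below is an assumption, since the patent only names cross entropy; the class count is illustrative.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def discriminator_loss(logits, label):
    """Cross entropy over K real maturity classes plus one 'generated' class.
    `label == K` marks a sample produced by the feature generator."""
    return -np.log(softmax(logits)[label] + 1e-12)

K = 4                                            # e.g. 4 maturity stages (illustrative)
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.0])    # K + 1 discriminator outputs
print(discriminator_loss(logits, 0))             # loss for a real sample of class 0
```

Labeled samples are penalized on their true class, while generated features are penalized on class K, which is what lets unlabeled real samples still contribute to training.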
The second embodiment of the present invention is directed to a method for assessing the maturity of the ulna and radius. Bone maturity assessment is currently an important auxiliary index for identifying the degree of growth and development and related problems, and plays an important role in clinical medicine. In treating related diseases such as idiopathic scoliosis and associated bone deformities, bone maturity assessment can help physicians identify a patient's bone growth stage, so that they can determine the optimal treatment time and the clinical observation interval. For patients requiring surgical intervention, it helps the physician determine the appropriate time to perform or terminate a bracing procedure. For identifying bone maturity, the wrist and palm contain a large number of bones with clearly visible growth plates, carry rich information, and are very convenient to image; the palm is therefore generally used internationally for bone maturity assessment, as shown in fig. 7.
The current bone maturity stages of the radius are described in the following table:
description of the maturity stage of radius bone
[Table: description of the radius maturity stages; reproduced as image GDA0002622748940000091 in the original publication]
The current skeletal maturity stages of the ulna are described in the following table:
ulna skeletal maturity stage description
[Table: description of the ulna maturity stages; reproduced as images GDA0002622748940000092 and GDA0002622748940000101 in the original publication]
Bone maturity growth stage classification relates the growth peak and growth cessation stages of children and adolescents, is important for physicians' clinical decisions, and describes the bone morphology corresponding to each growth stage's maturity together with the subjects' relevant characteristics, age and degree of sexual development.
As shown in fig. 8, the second embodiment provides a system for bone maturity assessment, comprising:
the pre-trained bone image feature extraction model is used for establishing a dense connection network model and pre-training it so that it acquires image feature extraction capability;
the bone feature extraction module is used for obtaining primary features of the bone image through the real image;
the multi-channel feature generator is used for generating a bone feature map with the same size as the feature size extracted by the bone feature extraction module, performing cascade splicing on the bone feature map, and obtaining the bone feature map with the same dimension as the feature dimension extracted by the bone feature extraction module after splicing;
the capsule bone feature discriminator is used for obtaining bone instantiation feature vectors that retain spatial position information;
and the compensation module is used for constructing a loss function of the bone feature map in the multi-channel feature generator and then defining the loss function by using cross entropy in the capsule bone feature discriminator.
Compared with the prior art, the invention has the following advantages:
1. A semi-supervised learning method is introduced into the skeletal maturity classification task: only a small portion of the data set needs labels, which reduces the demand for labeled training samples, lightens the complex labeling workload that traditional algorithms impose on training data, shortens the work cycle of the bone maturity identification task, and improves its overall efficiency.
2. A capsule network structure is introduced into bone age identification, improving the bone feature discriminator's sensitivity to spatial information in bone features, so that deformations such as the angle and position of bone image features can be recognized and bone maturity identification accuracy is improved.
3. Compared with traditional bone maturity classification models, the capsule-based bone feature discriminator, through efficient image feature extraction and retention of spatial position change information, reduces the number of training samples a neural network needs for the bone maturity identification task, reaching correspondingly higher accuracy with relatively few samples.
4. The method applies feature extraction to the high-resolution bone image and feeds real samples into a semi-supervised generative adversarial network, while a multi-channel feature generator replaces the traditional GAN approach of generating images directly. This reduces the large parameter count that high-resolution images impose on a traditional network and, unlike common networks, requires no scaling of the original image, avoiding loss of input image features.
Meanwhile, the second embodiment provides a method for assessing bone maturity, as follows.
construction of pre-training bone image feature extraction module
Step a 1: establishing a dense connection network model, and taking a training set image of a pre-training data set as input;
step a 2: performing feature extraction on the input image through a plurality of dense connection blocks and transition layers (as shown in fig. 8: dense connection block 1, dense connection block 2, dense connection block 3, a convolution layer and a transition layer), and outputting a prediction result;
step a 3: calculating a loss function by using a prediction result of the dense connection network model and a real image label of a pre-training data set, minimizing the loss function, reversely propagating and optimizing network parameters, training a network, and enabling the dense connection network model to have good image feature extraction capability through pre-training;
step a 4: performing module migration on the parameters of the front dense connection blocks of the dense connection network model (dense connection block 1 and dense connection block 2) to serve as the bone feature extraction module; inputting a real ulna/radius image into this module yields the primary features of the ulna/radius image. The module filters out features that contribute little to maturity classification, as well as noise features, from the bone image sample, reducing the feature size fed to the feature discriminator.
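Step a4's module migration can be pictured as copying only the early dense blocks out of the pre-trained model. The dict-based sketch below is purely illustrative of the idea, not of any framework API; all names are hypothetical.

```python
def migrate_modules(pretrained, module_names):
    """Copy the selected pre-trained submodules (e.g. dense blocks 1 and 2)
    into a standalone feature-extraction module; later blocks and the
    classifier head are discarded."""
    return {name: pretrained[name] for name in module_names}

# hypothetical pre-trained parameter store
pretrained = {"dense_block_1": "params1", "dense_block_2": "params2",
              "dense_block_3": "params3", "classifier": "paramsC"}
extractor = migrate_modules(pretrained, ["dense_block_1", "dense_block_2"])
print(sorted(extractor))  # ['dense_block_1', 'dense_block_2']
```

In a real framework this corresponds to loading the early blocks' weights and dropping the rest of the network.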
(II) Generating image features with the multi-channel feature generator
Step b1: input one-dimensional random noise with a Gaussian distribution into the multi-channel feature generator;
step b2: for each channel of the multi-channel feature generator, pass the input random noise through a plurality of deconvolution layers, each using a ReLU activation function; after the multi-layer deconvolution the generated bone feature maps are enlarged layer by layer, and each generated bone feature map has the same size as the features extracted by the bone feature extraction module;
step b3: cascade-splice the bone feature maps generated by each channel's feature generator; the spliced result has the same dimensions as the features extracted by the bone feature extraction module;
step b4: input the features generated by the multi-channel feature generator into the capsule bone feature discriminator, which classifies the generated multi-channel bone features to obtain a discrimination result.
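A minimal sketch of the splicing in steps b1 to b4, under assumed sizes (a 28x28x64 extractor feature map split across 4 generator channels). Each channel's deconvolution stack is replaced by a single random projection, so the sketch only illustrates how the per-channel maps are cascade-spliced to match the extractor's feature dimensions; all names and sizes are illustrative assumptions.

```python
import numpy as np

def channel_generator(z, out_hw, out_c, rng):
    """Stand-in for one channel's deconvolution stack: maps 1-D Gaussian
    noise to an (H, W, C) feature map of the target size (ReLU output)."""
    h, w = out_hw
    proj = rng.standard_normal((z.size, h * w * out_c)) * 0.01
    return np.maximum(z @ proj, 0.0).reshape(h, w, out_c)

rng = np.random.default_rng(0)
target_hw, target_c, n_channels = (28, 28), 64, 4   # assumed extractor output: 28x28x64
per_channel_c = target_c // n_channels              # each generator fills 16 channels

z = rng.standard_normal(100)                        # one-dimensional Gaussian noise
maps = [channel_generator(z, target_hw, per_channel_c, rng) for _ in range(n_channels)]
fake_features = np.concatenate(maps, axis=-1)       # cascade splicing along channels
print(fake_features.shape)  # (28, 28, 64), matching the extractor's feature size
```

The spliced `fake_features` would then be fed to the capsule bone feature discriminator in place of real extracted features.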
(III) Classification and prediction with the capsule bone feature discriminator
Step c1: input into the capsule bone feature discriminator either the primary ulna/radius features obtained from a real image by the bone feature extraction module or the bone features generated by the multi-channel feature generator;
step c2: further extract the input low-level ulna/radius features through a plurality of convolution layers, each using a ReLU activation function;
step c3: input the ulna/radius features extracted in step c2 into the capsule layer to obtain ulna/radius feature vectors that retain spatial position information;
step c4: connect the ulna/radius instantiation feature vectors between layers by a routing algorithm; through the routing mechanism, construct the bone instantiation feature vectors that best represent the current input features, such as an instantiation vector representing bone width information or one representing bone texture information, and output them as input to the next layer;
step c5: after instantiation feature extraction through several capsule layers, a plurality of bone instantiation feature vectors is obtained. For the radius classification task, the lengths (moduli) of the first four vectors represent the probabilities that the current input belongs to each radius maturity stage, and the length of the remaining vector represents the probability that the currently input bone features are generated features. For the ulna classification task, the lengths of the first three vectors represent the probabilities that the current input belongs to each ulna maturity stage, and the length of the remaining vector again represents the probability that the input features are generated. The category corresponding to the bone instantiation vector with the greatest length is taken as the classification result for the current ulna/radius input features.
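The read-out rule of step c5 (capsule vector length read as class probability, longest stage capsule wins) can be sketched as follows. The 4+1 capsule layout matches the radius head described above; the 16-dimensional vectors and their values are illustrative toy data.

```python
import numpy as np

def capsule_lengths(capsules):
    """Length (L2 norm, the 'modulus') of each output capsule vector;
    after squashing, each length lies in [0, 1) and is read as a probability."""
    return np.linalg.norm(capsules, axis=-1)

# Assumed radius head: 4 maturity-stage capsules + 1 "generated feature" capsule,
# each a 16-D instantiation vector (toy values in place of real network outputs).
rng = np.random.default_rng(1)
caps = rng.standard_normal((5, 16)) * 0.1
lengths = capsule_lengths(caps)

stage_probs, fake_prob = lengths[:4], lengths[4]
predicted_stage = int(np.argmax(stage_probs))  # category of the longest stage capsule
print(predicted_stage, float(fake_prob))
```

For the ulna head the same code would use a 3+1 layout instead of 4+1.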
In the above steps, the inter-layer capsule computation described in step c4 proceeds as shown in fig. 9. An ulna/radius instantiation feature vector u_i from the upper capsule layer is input into this layer; multiplying u_i by an optimizable transformation matrix W_ij yields the prediction vector

    û_(j|i) = W_ij u_i

The vector weighted sum S_j is obtained by a linear combination of the prediction vectors, S_j = Σ_i c_ij û_(j|i), where the magnitude of each coefficient of the linear combination is decided by the coupling coefficient c_ij. After S_j is obtained, a compression (squashing) function limits the vector length to give the output vector

    v_j = (‖S_j‖² / (1 + ‖S_j‖²)) · (S_j / ‖S_j‖)

The coupling coefficients c_ij are determined from the logits b_ij (via a softmax over the output capsules); in the iterative routing operation, b_ij is continually updated, which in turn updates the coupling coefficients c_ij.
The above process is a mathematical description of the routing mechanism. The mathematical operations between the capsule network layers in our bone feature discriminator have corresponding instantiation meanings; an instantiated description of the linear combination and routing mechanisms between the capsule layers is shown in fig. 10.
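The routing mechanism of step c4 and fig. 9 corresponds to the standard dynamic routing-by-agreement procedure. A compact NumPy sketch under assumed layer sizes (8 input capsules routed to 5 output capsules, a fixed 3 routing iterations) is:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Compression function: preserves direction, maps length into [0, 1)."""
    sq = np.sum(s * s, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def softmax(b, axis):
    e = np.exp(b - b.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route(u, W, n_iters=3):
    """Dynamic routing between two capsule layers.
    u: (n_in, d_in) lower-layer capsules; W: (n_in, n_out, d_in, d_out)."""
    u_hat = np.einsum('id,iodk->iok', u, W)        # prediction vectors û_(j|i) = W_ij u_i
    b = np.zeros(u_hat.shape[:2])                  # routing logits b_ij, initialized to 0
    for _ in range(n_iters):
        c = softmax(b, axis=1)                     # coupling coefficients c_ij
        s = np.einsum('io,iok->ok', c, u_hat)      # weighted sum S_j = Σ_i c_ij û_(j|i)
        v = squash(s)                              # output vectors v_j
        b = b + np.einsum('iok,ok->io', u_hat, v)  # agreement update: b_ij += û_(j|i)·v_j
    return v

rng = np.random.default_rng(0)
u = rng.standard_normal((8, 4))               # 8 lower-layer capsules, 4-D each
W = rng.standard_normal((8, 5, 4, 16)) * 0.1  # route to 5 upper capsules, 16-D each
v = route(u, W)
print(v.shape)  # (5, 16); every output length is < 1 by construction of squash
```

The sizes, the zero initialization of b_ij and the 3 iterations are conventional choices assumed here, not values stated in the patent.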
(IV) Constructing the loss function
In the second embodiment, constructing the loss function makes the ulna/radius features produced by the bone feature generator approximate the feature distribution of real bone images, and constructing a multi-class loss function lets the capsule bone feature discriminator distinguish well the categories corresponding to the input ulna and radius features. The steps are as follows:
step d 1: for the training process of the bone feature discriminator, after the model predicts the category corresponding to the radius feature of the currently input ulna, the loss of the discriminator is calculated according to the prediction result and the bone maturity corresponding to the currently input feature, the loss of the discriminator is composed of 3 parts, and can be expressed as follows:
total loss of bone feature discriminator is LLoss of true tagged bone input features+LLoss of true unlabeled skeletal input features+LGenerating loss of skeletal featuresFor the loss function of the features of the labeled real bone image, taking the radial classification as an example, the feature discriminator carries out four classifications on the loss function, and each classification represents a radial maturity stage. We calculate their loss function by using the edge loss Margin loss. We predict the class probability for each bone maturity using the output of the first four capsule vectors. For the loss function of the features of the unlabeled real bone image, the input ulna/radius features can only know whether the labels of the ulna/radius features are real bone features or generated bone features, so that the probability of whether the current input belongs to the false bone features is judged by utilizing one output vector in the feature discriminator, the probability is closer to 0 to represent that the image is the real bone input features, and the probability is closer to 1 to represent that the image is the generated bone features. We use the cross entropy of the sigmoid function to define the loss function for unlabeled real samples. For the loss of the generated bone features, a loss function is constructed in the same way, and the loss function is defined in the cross entropy form of the sigmoid function.
Step d2: the multi-channel feature generator aims to generate bone features that imitate real ulna/radius features well enough to deceive the feature discriminator. Its loss consists of two parts in total and can be expressed as:

Total loss of multi-channel feature generator = L(generated-feature discrimination) + L(generated-feature multi-channel matching)

For the discrimination loss of the generated bone features: since the generated features should imitate the distribution of real ulna/radius features and deceive the feature discriminator, the discriminator's result on the generated features is compared with the real/generated label to obtain the loss, which for the generator is computed as sigmoid cross entropy. For the multi-channel matching loss of the generated features: as the feature discriminator extracts further features from its input bone feature information, the generator expects that when its generated bone features pass through the discrimination model, the intermediate results match those the discriminator extracts from real bone features. We use the squared difference between the discriminator's intermediate result on real features and its intermediate result on generated features as the loss function.
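A sketch of the two generator loss terms of step d2, with illustrative intermediate feature-map sizes. The exact reduction (sum vs. mean) and the logit convention (1 = generated) are assumptions not fixed by the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator_adv_loss(fake_logit):
    """Sigmoid cross entropy for the generator: generated features should look
    real, i.e. push the discriminator's 'generated' probability toward 0."""
    p = sigmoid(fake_logit)                 # discriminator's P(input is generated)
    return float(-np.log(1.0 - p + 1e-12))  # minimized when p -> 0

def feature_matching_loss(real_mid, fake_mid):
    """Squared difference between the discriminator's intermediate features
    on real vs. generated inputs, summed over all positions and channels."""
    return float(np.sum((real_mid - fake_mid) ** 2))

rng = np.random.default_rng(0)
real_mid = rng.standard_normal((8, 8, 32))                   # assumed intermediate maps
fake_mid = real_mid + 0.05 * rng.standard_normal((8, 8, 32)) # nearby generated maps

total = generator_adv_loss(fake_logit=0.4) + feature_matching_loss(real_mid, fake_mid)
print(total > 0.0)  # True: both terms are non-negative and positive here
```

The matching term vanishes exactly when the discriminator cannot tell the two feature sets apart at that intermediate layer, which is the generator's goal.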
In this embodiment, features of the high-resolution ulna/radius image are extracted via transfer learning, and ulna/radius maturity is identified by a semi-supervised generative adversarial network based on bone feature generation and feature discrimination, resolving problems such as excessive parameter count and overfitting in the classification of high-resolution bone input images. A multi-channel feature generator produces ulna and radius feature information maps, and a capsule bone feature discriminator distinguishes the extracted bone sample input features to predict the bone maturity stage. The capsule bone feature discriminator uses a capsule instantiation feature propagation mechanism: vector feature representations and a routing mechanism replace the convolution and pooling operations of a conventional generative adversarial discriminator model, and bone image instantiation vectors are extracted step by step by connecting capsules in different layers. The second embodiment adopts semi-supervised learning, weakening the network's demand for bone image annotations, and thereby realizes an automatic bone maturity assessment model with high accuracy and high robustness.
The second embodiment has the following advantages:
1. A semi-supervised generative adversarial dense connection network based on feature discrimination is applied to the bone maturity assessment task, and a multi-channel bone feature generation scheme is provided: a multi-channel bone feature generator imitates the features of real ulna/radius images to generate bone feature maps, and the loss function is computed as a superposition of multi-channel feature matching and discrimination losses, bringing the generated bone features closer to the distribution of real bone features. Adversarial training improves the feature recognition capability and noise resistance of the bone maturity discrimination model.
2. A semi-supervised generative adversarial dense connection network identifies bone maturity: using only a small fraction of labeled real ulna/radius images, adversarial learning between the bone feature generator and the capsule bone feature discriminator lets the discriminator fully learn the image features of each ulna/radius stage. This reduces the model's demand for labeled bone images, and high-accuracy bone maturity classification results are obtained with only a small number of real image labels in the data set.
3. Through transfer learning, a dense connection network is pre-trained on other data sets, and the module with feature extraction capability is migrated into the generative adversarial dense connection network to extract features from the real ulna/radius input image. This mitigates the excessive parameter count caused by feeding a high-resolution bone image directly into a generative adversarial network, reduces overfitting, provides a solution for high-resolution image input, and avoids the feature loss incurred when resizing the input image, retaining all features of the input bone image.
4. In the capsule bone feature discriminator, feature extraction by the capsule layers preserves the spatial position information of bone image features, and the routing mechanism combines and screens the instantiation features of different bones, improving the efficiency of feature extraction from ulna/radius images.
5. The capsule network optimizes the capsule bone feature discriminator so that it extracts feature vectors containing image spatial position information, replacing the traditional process of extracting bone image features with convolution operations. Having learned from a small number of ulna/radius features, the discriminator can recognize variants of the same or similar features, reducing the number of training samples needed for the bone maturity task.
6. The ulna/radius image enters the generative adversarial network in feature form: the multi-channel feature generator and the capsule bone feature discriminator take ulna/radius image features as their input and output basis. This improves the network's computational efficiency, strengthens the adversarial challenge that the multi-channel feature generator's output poses to the capsule bone feature discriminator, and thereby improves the discriminator's ability to discriminate bone features.
7. Based on the semi-supervised generative adversarial network with feature discrimination, a trained feature discrimination model is obtained through adversarial training; the feature extraction network and the trained discrimination model can be ported to a hardware platform to perform the ulna/radius bone maturity assessment task, which also facilitates subsequent system upgrades and updates.
The bone maturity assessment method can also be applied to analogous classification problems in other fields, such as other medical image classification tasks, i.e., any task that fits the standard deep learning classification setting; the only difference is that the corresponding training samples are substituted in the training stage for the system to learn from.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process modifications made using the present specification and drawings, whether applied directly or indirectly in other related technical fields, fall within the scope of the present invention.

Claims (10)

1. A method for bone maturity assessment, comprising the steps of:
step S1, establishing a dense connection network model with image feature extraction capability, pre-training the dense connection network model, and then performing module transfer on dense connection blocks in the dense connection network model to generate a bone feature extraction module capable of obtaining primary features of a bone image;
step S2, inputting one-dimensional noise with Gaussian distribution into a multi-channel feature generator, generating a bone feature map with the same size as the feature size extracted by the bone feature extraction module in each channel of the multi-channel feature generator, then performing cascade splicing on each bone feature map in each channel of the multi-channel feature generator, obtaining a bone feature map with the same dimension as the feature dimension extracted by the bone feature extraction module after splicing, and outputting the bone feature map;
and step S3, inputting both the primary features of the bone image obtained from the real image by the bone feature extraction module and the bone feature map generated by the multi-channel feature generator into the capsule bone feature discriminator, so as to obtain bone instantiation feature vectors retaining spatial position information and the category corresponding to those vectors, thereby yielding the bone maturity assessment result of the real image.
2. The method as claimed in claim 1, further comprising step S4: constructing a loss function for the bone feature map in the multi-channel feature generator and defining the loss function in the capsule bone feature discriminator using cross entropy; the bone instantiation feature vector is extracted under this loss function and its corresponding category obtained, so as to obtain the bone maturity assessment result of the real image.
3. The method for bone maturity assessment according to claim 1 or 2, wherein step S1 includes:
step S11, establishing a dense connection network model, and taking a training set image of a pre-training data set as an input image;
and step S12, performing feature extraction on the input image through a plurality of dense connecting blocks and transition layers, and outputting a prediction result.
4. The method for bone maturity assessment according to claim 3, wherein step S1 further includes:
step S13, calculating a loss function by using the prediction result and the label of the real image of the pre-training data set, then reversely propagating and optimizing network parameters, and training the dense connection network model through the network parameters to enable the dense connection network model to have the image feature extraction capability;
and step S14, carrying out module migration on the pre-training data set to generate a bone feature extraction module capable of obtaining primary features of a bone image, and inputting a real image of a bone into the bone feature extraction module to obtain the primary features of the bone image.
5. The method for bone maturity assessment according to claim 1 or 4, wherein step S2 includes:
step S21, inputting one-dimensional noise with Gaussian distribution into the multi-channel feature generator;
and step S22, for the feature generator in each channel in the multi-channel feature generator, the input noise passes through a plurality of deconvolution layers, and the generated bone feature graph is amplified layer by layer after deconvolution of the deconvolution layers, so that the feature dimension of each generated bone feature graph is the same as that extracted by the bone feature extraction module.
6. The method for bone maturity assessment according to claim 5, wherein step S2 further includes:
step S23, carrying out cascade splicing on the bone feature graphs generated by the feature generators in each channel of the multi-channel feature generator, and obtaining the bone feature graphs with the same dimensionality as the feature dimensionality extracted by the bone feature extraction module after splicing;
and step S24, inputting the spliced bone characteristic diagram into a capsule bone characteristic discriminator.
7. The method for bone maturity assessment according to claim 1 or 6, wherein step S3 includes:
step S31, inputting the primary feature of the bone image obtained by the bone feature extraction module of the real image and the bone feature map generated by the multi-channel feature generator into the capsule bone feature discriminator;
and step S32, extracting feature vectors from the primary features of the bone image and the bone feature map through a plurality of convolution layers.
8. The method for bone maturity assessment according to claim 7, wherein step S3 further includes:
step S33, inputting the extracted feature vector into a capsule layer, thereby obtaining a bone instantiation feature vector retaining spatial position information;
step S34, obtaining the category corresponding to the bone instantiation feature vector, and then obtaining the bone maturity evaluation result of the real image.
9. A system for bone maturity assessment, comprising:
the pre-training skeleton image feature extraction model is used for establishing a dense connection network model and pre-training the dense connection network model so as to generate image feature extraction capacity;
the bone feature extraction module is used for obtaining primary features of the bone image through the real image;
the multi-channel feature generator is used for generating a bone feature map with the same size as the feature size extracted by the bone feature extraction module, performing cascade splicing on the bone feature map, and obtaining the bone feature map with the same dimension as the feature dimension extracted by the bone feature extraction module after splicing;
and the capsule bone feature discriminator is used for obtaining a bone instantiation feature vector retaining the spatial position information.
10. The system of claim 9, further comprising a compensation module for constructing a loss function of the bone feature map in the multi-channel feature generator, and then defining the loss function by cross entropy in the capsule bone feature discriminator.
CN201811319500.6A 2018-11-07 2018-11-07 Method and system applied to bone maturity assessment Active CN109544518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811319500.6A CN109544518B (en) 2018-11-07 2018-11-07 Method and system applied to bone maturity assessment


Publications (2)

Publication Number Publication Date
CN109544518A CN109544518A (en) 2019-03-29
CN109544518B true CN109544518B (en) 2020-11-03

Family

ID=65844644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811319500.6A Active CN109544518B (en) 2018-11-07 2018-11-07 Method and system applied to bone maturity assessment

Country Status (1)

Country Link
CN (1) CN109544518B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009034A (en) * 2019-04-02 2019-07-12 中南大学 Air-conditioner set type identifier method
CN110009097B (en) * 2019-04-17 2023-04-07 电子科技大学 Capsule residual error neural network and image classification method of capsule residual error neural network
CN110022422B (en) * 2019-04-19 2020-02-07 吉林大学 Video frame sequence generation method based on dense connection network
CN110322432B (en) * 2019-05-27 2021-11-23 上海联影医疗科技股份有限公司 Medical image processing method, apparatus, computer device and readable storage medium
CN112365553B (en) * 2019-07-24 2022-05-20 北京新唐思创教育科技有限公司 Human body image generation model training, human body image generation method and related device
CN110648317B (en) * 2019-09-18 2023-06-30 上海交通大学 Quality classification method and system suitable for spine metastasis tumor bone
CN110879985B (en) * 2019-11-18 2022-11-11 西南交通大学 Anti-noise data face recognition model training method
CN111539963B (en) * 2020-04-01 2022-07-15 上海交通大学 Bone scanning image hot spot segmentation method, system, medium and device
CN111553412A (en) * 2020-04-27 2020-08-18 广州市妇女儿童医疗中心(广州市妇幼保健院、广州市儿童医院、广州市妇婴医院、广州市妇幼保健计划生育服务中心) Method, device and equipment for training precocious puberty classification model
CN112102285B (en) * 2020-09-14 2024-03-12 辽宁工程技术大学 Bone age detection method based on multi-modal countermeasure training

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004112580A2 (en) * 2003-06-19 2004-12-29 Compumed, Inc. Method and system for analyzing bone conditions using dicom compliant bone radiographic image
CN107767376A (en) * 2017-11-02 2018-03-06 西安邮电大学 X-ray film stone age Forecasting Methodology and system based on deep learning
CN108052940A (en) * 2017-12-17 2018-05-18 南京理工大学 SAR remote sensing images waterborne target detection methods based on deep learning
CN108510004A (en) * 2018-04-04 2018-09-07 深圳大学 A kind of cell sorting method and system based on depth residual error network
CN108564611A (en) * 2018-03-09 2018-09-21 天津大学 A kind of monocular image depth estimation method generating confrontation network based on condition


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on cancerous lesion recognition in medical images based on convolutional neural networks; Xue Dixiu; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20180215 (No. 2); I138-88 *
Research on key technologies of posture and gesture perception computing based on deep machine learning; Du Yu; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20170815 (No. 8); I138-74 *
Advantages and application prospects of deep learning in image recognition and bone age assessment; Hu Tinghong et al.; Journal of Forensic Medicine; 20171231; Vol. 33 (No. 6); pp. 629-639 *
Research on capsule network technology and its development trends; Zhu Yingzhao et al.; Guangdong Communication Technology; 20181031; pp. 51-54, 74 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant