CN107895367B - Bone age identification method and system and electronic equipment - Google Patents


Info

Publication number
CN107895367B
CN107895367B (application CN201711125692.2A)
Authority
CN
China
Prior art keywords
bone
target
picture
skeleton
bone age
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711125692.2A
Other languages
Chinese (zh)
Other versions
CN107895367A (en)
Inventor
Wang Shuqiang
Wang Yongcan
Hu Yong
Cao Song
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201711125692.2A
Publication of CN107895367A
Application granted
Publication of CN107895367B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/29 Graphical models, e.g. Bayesian networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/033 Recognition of patterns in medical or anatomical images of skeletal patterns

Abstract

The present disclosure relates to the field of bone maturity analysis technology, and in particular to a bone age identification method, system and electronic device. The bone age identification method comprises the following steps: step a: inputting a bone image into a target bone position detection model, and detecting the position coordinates of the target bone in the bone image through the target bone position detection model; step b: cropping out a target bone picture according to the position coordinates of the target bone; step c: inputting the target bone picture into a bone stage classification model for bone age identification. The method uses deep learning to classify and identify the ulna and radius pictures automatically, without manual intervention, which improves both the accuracy and the efficiency of bone age identification over the prior art; and because no manual acquisition is needed, the method saves time and labor and is efficient and fast.

Description

Bone age identification method and system and electronic equipment
Technical Field
The present disclosure relates to the field of bone maturity analysis technology, and in particular to a bone age identification method, system and electronic device.
Background
Bone maturity (bone age) analysis, as an important index of growth and development, plays an important role in medicine, sports, judicial expertise and other fields. In particular, in the clinical management of adolescent scoliosis and similar conditions, bone maturity analysis is performed to determine the growth peak period and the growth cessation period, which is important for setting the clinical observation interval and for starting and stopping brace treatment at the right time. Because the wrist contains many bones, carries a large amount of information, and is convenient to image, the wrist bones are generally used to evaluate bone maturity.
The internationally common skeletal maturity evaluation methods are the G-P atlas method and the TW scoring method, both developed from the skeletal development characteristics of European and American populations. Because skeletal development differs greatly between countries, these methods are not fully applicable to East Asian populations. China has successively established its own evaluation methods, such as the Li Guozhen method, the CHN method and the China-05 standard, but these methods lag behind to some extent because of the accelerated growth and development of children.
In 2013, Luk et al. of the University of Hong Kong proposed a bone maturity assessment criterion based on the classification of the distal radius and ulna. The study investigated, for each epiphyseal stage, the corresponding bone age and the development of sexual characteristics, standing height, sitting height, arm span, radius length and tibia length. It found that growth in standing height, sitting height and arm span peaked at stages R7 (mean age 11.4 years) and U5 (mean age 11.0 years), that long bone growth also peaked at R7 and U5, and that growth in height and arm span stopped at R10 (mean age 15.6 years) and U9 (mean age 17.3 years). However, these stages are classified manually by radiologists according to the criteria, which is time-consuming, labor-intensive and highly subjective.
Chinese patents CN103300872B and CN106340000A implement automatic bone age identification using computer graphics analysis and image feature extraction followed by a support vector machine classifier. However, steps such as locating the target bone and extracting features still require manual participation, so full automation is not achieved; the methods used are relatively traditional, and neither the accuracy nor the efficiency is ideal.
In medical diagnosis, accurate diagnosis often relies on high-quality medical images. With the continuous improvement of medical imaging technology in recent years, hospitals have acquired numerous high-end imaging devices that produce higher-quality medical images more quickly. However, interpreting and judging these images is generally done by doctors, which is time-consuming and labor-intensive and involves considerable subjectivity. Computer-aided detection is an important tool in clinical practice and research, and can perform automatic diagnosis using machine learning, image processing and related technologies. The effect of traditional methods is not ideal, but in recent years several studies using deep learning have obtained better results and demonstrated its superior performance in this area. Therefore, using deep learning to analyze radial and ulnar X-ray pictures, automatically classify them and evaluate bone maturity, and thereby identify the growth peak period and the growth cessation period, is of great significance for the clinical management of adolescent scoliosis and other patients.
Disclosure of Invention
The present application provides a bone age identification method, system and electronic device, which aim to solve, at least to some extent, one of the above technical problems in the prior art.
In order to solve the above problems, the present application provides the following technical solutions:
a bone age identification method, comprising:
step a: inputting a bone image into a target bone position detection model, and detecting the position coordinates of the target bone in the bone image through the target bone position detection model;
step b: cropping out a target bone picture according to the position coordinates of the target bone;
step c: inputting the target bone picture into a bone stage classification model for bone age identification.
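For illustration, steps a to c can be sketched as a three-stage pipeline. The sketch below is a hypothetical stand-in, not the claimed implementation: `detect_target_bone`, `crop_region` and `classify_stage` are stubs taking the place of the trained detection and classification models, and the coordinates and stage label are made up.

```python
def detect_target_bone(image):
    """Step a: return (x1, y1, x2, y2) of the target bone (stub)."""
    return (40, 60, 120, 140)

def crop_region(image, box):
    """Step b: cut the target bone picture out of the full image."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def classify_stage(patch):
    """Step c: map the cropped picture to a bone age stage label (stub)."""
    return "U5"

def identify_bone_age(image):
    box = detect_target_bone(image)
    patch = crop_region(image, box)
    return classify_stage(patch)

# A dummy 200x200 "X-ray" image as nested lists.
image = [[0] * 200 for _ in range(200)]
print(identify_bone_age(image))  # prints "U5"
```

In the actual method the two stubs would be the trained target bone position detection model and bone stage classification model; only the data flow between the three steps is shown here.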
The technical solution adopted in the embodiments of the present application further comprises: before step a, collecting a bone image, wherein the bone image is an X-ray picture.
The technical solution adopted in the embodiments of the present application further comprises: before step a, marking the target bone regions in the collected bone images to obtain a target bone position detection data set; and constructing a target bone region detection model and training the target bone position detection model with the bone images in the target bone position detection data set.
The technical solution adopted in the embodiments of the present application further comprises: the target bone region detection model comprises an RPN network and a Fast R-CNN network that share the bottom convolutional layers. The bottom convolutional layers comprise 5 convolutional layers; a sixth convolutional layer is arranged after them and is connected to 2 convolutional branches, which respectively output the initial region classification scores and the bounding boxes, forming the RPN network; the initial regions of interest of the target bone are extracted through the RPN network. The bottom convolutional layers are also connected, through an ROI (region of interest) pooling layer, to a first fully-connected layer and a second fully-connected layer, which respectively output the classification scores and the bounding box position coordinates according to the initial regions of interest.
The technical solution adopted in the embodiments of the present application further comprises: in step c, before inputting the target bone picture into the bone stage classification model for bone age identification, the method further comprises: marking the bone age labels corresponding to the target bone pictures according to a bone maturity assessment criterion to obtain a bone age stage classification data set, constructing a bone stage classification model, and training the constructed bone stage classification model with the bone age stage classification data set.
The technical solution adopted in the embodiments of the present application further comprises: training the bone stage classification model with the bone age stage classification data set specifically comprises the following steps:
step c1: initializing the bone age stage classification model parameters;
step c2: performing convolution and pooling on the target bone picture, and extracting the feature information of the target bone picture;
step c3: processing the extracted feature information of the ulna picture and/or radius picture to obtain the probability of each class, and outputting the bone age prediction corresponding to the target bone picture;
step c4: forming a loss function from the error between the output bone age prediction and the bone age label, and judging whether the loss function has reached its minimum; if not, adjusting the network parameters with the back-propagation algorithm; if so, saving the network parameters.
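Steps c1 to c4 describe a standard supervised training loop: initialize the parameters, run a forward pass to obtain class probabilities, compute the loss against the labels, and adjust the parameters by back-propagation until the loss stops decreasing. A minimal pure-Python sketch of such a loop, with a toy softmax classifier standing in for the convolutional network (the data, learning rate and epoch count are made up for illustration):

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train(samples, labels, classes=2, epochs=200, lr=0.5):
    # Step c1: initialize the model parameters (zero weights and biases).
    w = [[0.0] * len(samples[0]) for _ in range(classes)]
    b = [0.0] * classes
    avg_loss = 0.0
    for _ in range(epochs):
        loss = 0.0
        gw = [[0.0] * len(samples[0]) for _ in range(classes)]
        gb = [0.0] * classes
        for x, y in zip(samples, labels):
            # Steps c2-c3: forward pass, yielding per-class probabilities.
            p = softmax([sum(wk * xj for wk, xj in zip(w[k], x)) + b[k]
                         for k in range(classes)])
            # Step c4: cross-entropy loss and its gradient.
            loss -= math.log(p[y])
            for k in range(classes):
                err = p[k] - (1.0 if k == y else 0.0)
                gb[k] += err
                for j, xj in enumerate(x):
                    gw[k][j] += err * xj
        n = len(samples)
        for k in range(classes):
            b[k] -= lr * gb[k] / n
            for j in range(len(w[k])):
                w[k][j] -= lr * gw[k][j] / n
        avg_loss = loss / n
    return w, b, avg_loss

# Toy, linearly separable data standing in for extracted bone features.
xs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
ys = [0, 0, 1, 1]
w, b, final_loss = train(xs, ys)
print(final_loss < 0.3)  # prints True: the loss has become small
```

In the actual model the forward pass would run through the convolutional and pooling layers described later; the loop structure of initialize, forward, loss, and gradient step is the same.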
Another technical solution adopted in the embodiments of the present application is a bone age identification system, comprising:
a position detection module, used for inputting a bone image into a target bone position detection model and detecting the position coordinates of the target bone in the bone image through the target bone position detection model;
a picture cropping module, used for cropping out a target bone picture according to the position coordinates of the target bone;
a bone age identification module, used for inputting the target bone picture into a bone stage classification model for bone age identification.
The technical solution adopted in the embodiments of the present application further comprises an image acquisition module, used for acquiring the bone image; the bone image is an X-ray picture.
The technical solution adopted in the embodiments of the present application further comprises:
a region marking module, used for marking the target bone regions in the acquired bone images to obtain a target bone position detection data set;
a first model building module, used for constructing a target bone region detection model and training the target bone position detection model with the bone images in the target bone position detection data set.
The technical solution adopted in the embodiments of the present application further comprises: the target bone region detection model comprises an RPN network and a Fast R-CNN network that share the bottom convolutional layers. The bottom convolutional layers comprise 5 convolutional layers; a sixth convolutional layer is arranged after them and is connected to 2 convolutional branches, which respectively output the initial region classification scores and the bounding boxes, forming the RPN network; the initial regions of interest of the target bone are extracted through the RPN network. The bottom convolutional layers are also connected, through an ROI (region of interest) pooling layer, to a first fully-connected layer and a second fully-connected layer, which respectively output the classification scores and the bounding box position coordinates according to the initial regions of interest.
The technical solution adopted in the embodiments of the present application further comprises:
a bone age marking module, used for marking the bone age labels corresponding to the target bone pictures according to the bone maturity assessment criterion to obtain a bone age stage classification data set;
a second model building module, used for constructing a bone stage classification model and training the constructed bone stage classification model with the bone age stage classification data set.
The technical solution adopted in the embodiments of the present application further comprises: the second model building module comprises:
an initialization unit, used for initializing the bone age stage classification model parameters;
a feature extraction unit, used for performing convolution and pooling on the target bone picture and extracting its feature information;
a result output unit, used for processing the extracted feature information of the ulna picture and/or radius picture to obtain the probability of each class and outputting the bone age prediction corresponding to the target bone picture;
a loss function calculation unit, used for forming a loss function from the error between the output bone age prediction and the bone age label and judging whether the loss function has reached its minimum; if not, the network parameters are optimized through the parameter optimization unit; if so, the network parameters are saved through the parameter storage unit;
a parameter optimization unit, used for adjusting the network parameters with the back-propagation algorithm;
a parameter storage unit, used for storing the network parameters after training of the bone age stage classification model is finished.
Another technical solution adopted in the embodiments of the present application is an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the following operations of the bone age identification method described above:
step a: inputting a bone image into the target bone position detection model, and detecting the position coordinates of the target bone in the bone image through the target bone position detection model;
step b: cropping out a target bone picture according to the position coordinates of the target bone;
step c: inputting the target bone picture into the bone stage classification model for bone age identification.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: in the bone age identification method, system and electronic device, the position coordinates of the target bone regions are detected from the bone image by the target bone region detection model, and the target bone pictures are cropped automatically according to the detection results; the cropped target bone pictures are then input into the bone age stage classification model for bone age identification. The method uses deep learning to classify and identify the target bone pictures automatically, without manual intervention, which improves both the accuracy and the efficiency of bone age identification over the prior art; and because no manual acquisition is needed, the method saves time and labor and is efficient and fast.
Drawings
Fig. 1 is a flowchart of a bone age identification method according to a first embodiment of the present application;
FIG. 2 is a flow chart of a bone age identification method according to a second embodiment of the present application;
FIG. 3 is a network architecture diagram of a target bone region detection model;
FIG. 4 is a network architecture diagram of a bone age stage classification model;
FIG. 5 is a flowchart of a bone age stage classification model training method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a bone age identification system according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of hardware equipment of a bone age identification method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The bone age identification method of the present application comprises a target bone position detection stage and a bone age identification stage. In the target bone position detection stage, the acquired bone image is input into the trained target bone region detection model, which detects the position coordinates of the target bone region from the bone image, and the target bone picture is cropped automatically according to the detection result. In the bone age identification stage, the cropped target bone picture is input into the trained bone age stage classification model, which automatically identifies the bone age stage corresponding to the target bone picture and outputs the identification result.
Specifically, please refer to fig. 1, which is a flowchart illustrating a bone age identification method according to a first embodiment of the present application. The bone age identification method of the first embodiment of the present application includes the steps of:
step a: inputting the bone image into a target bone position detection model, and detecting the position coordinates of the target bone in the bone image through the target bone position detection model;
step b: cropping out a target bone picture according to the position coordinates of the target bone;
step c: inputting the target bone picture into a bone stage classification model for bone age identification.
Please refer to fig. 2, which is a flowchart of a bone age identification method according to a second embodiment of the present application. The bone age identification method of the second embodiment comprises the following steps:
step 100: collecting a skeleton image;
in step 100, the bone image is an X-ray picture of the whole hand; image data of other body parts or of other types may also be used.
Step 200: marking an ulna terminal area and a radius terminal area in the collected skeleton image respectively to obtain a target skeleton position detection data set;
in step 200, this embodiment uses the ulna and the radius as the target bones; other types of bones may also be used as the target bones.
Step 300: constructing a target skeleton region detection model, and training the target skeleton position detection model through a skeleton image in a target skeleton position detection data set;
in step 300, the method for training the target bone position detection model includes the following steps:
step 301: inputting a bone image from the target bone position detection data set into the target bone position detection model, which detects the position coordinates of the distal ulna region and the distal radius region through a target detection algorithm;
in step 301, since the original bone image is an X-ray picture of the whole hand, the whole picture is large and its size is not uniform, which is unfavorable for input into a convolutional neural network. The distal ulna and distal radius regions marked in step 200 occupy only a small part of the original bone image, and their positions and sizes are relatively fixed and do not vary greatly. To reduce the influence of irrelevant regions and shrink the region that needs to be identified, the position coordinates of the distal ulna and distal radius regions are detected automatically by the target bone position detection model; no manual acquisition is needed, which saves time and labor and is efficient and fast. During model training, the detection accuracy of the target bone position detection model can be judged from the error between the detection results output by the model and the marked distal ulna and distal radius regions.
In the embodiment of the present application, the network structure of the target bone region detection model is shown in fig. 3. The target bone region detection model uses the Faster R-CNN algorithm and is composed of an RPN (region proposal network) and a Fast R-CNN network that share the bottom convolutional layers. The RPN extracts the initial regions of interest of the ulna and radius in the bone image, and the Fast R-CNN network further analyzes and adjusts the initial regions of interest extracted by the RPN to obtain the final position detection results for the ulna and radius.
Specifically, ZFNet (the network proposed by Matthew Zeiler in a 2013 paper that describes visualizing a convolutional network with a deconvolution network for analysis and tuning) is used as the base network of the target bone region detection model. The bottom convolutional layers thus comprise 5 convolutional layers: the first convolutional layer has 96 convolution kernels of 7x7 and is followed by a first 3x3 max pooling layer; the second convolutional layer has 256 convolution kernels of 5x5 and is followed by a second 3x3 max pooling layer; the stride of both the first and the second convolutional layer is 2. The convolution kernels of the third, fourth and fifth convolutional layers are all 3x3, with 384, 384 and 256 feature maps respectively, and a stride of 1. A sixth 3x3 convolutional layer is added on top of the bottom convolutional layers and is connected to 2 convolutional branches of 1x1, which respectively output the initial region classification scores and the bounding boxes, forming the RPN; the initial regions of interest of the ulna and radius are extracted through the RPN. The bottom convolutional layers are also connected, through an ROI pooling layer, to two 4096-dimensional fully-connected layers (the first and the second fully-connected layer), which respectively output the classification scores and the bounding box position coordinates according to the initial regions of interest extracted by the RPN, forming the bounding box coordinates and classification finally output by the Fast R-CNN network. The number of output categories is set to the number of categories to be detected plus the background category, i.e. 3.
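As a sanity check on a layer configuration of this kind, the spatial size of each feature map can be computed with the usual convolution/pooling size formula, out = floor((in - kernel + 2*pad) / stride) + 1. The sketch below assumes a 224x224 input and per-layer padding values; both are assumptions for illustration, since the patent states neither the input resolution nor the padding.

```python
def out_size(in_size, kernel, stride, pad=0):
    """Standard conv/pool output size: floor((in - k + 2p) / s) + 1."""
    return (in_size - kernel + 2 * pad) // stride + 1

# Hypothetical 224x224 input; the padding values below are assumptions.
s = 224
s = out_size(s, 7, 2, 1)   # conv1: 96 kernels, 7x7, stride 2
s = out_size(s, 3, 2)      # max pool 3x3, stride 2
s = out_size(s, 5, 2)      # conv2: 256 kernels, 5x5, stride 2
s = out_size(s, 3, 2)      # max pool 3x3, stride 2
s = out_size(s, 3, 1, 1)   # conv3: 384 feature maps, 3x3, stride 1
s = out_size(s, 3, 1, 1)   # conv4: 384 feature maps
s = out_size(s, 3, 1, 1)   # conv5: 256 feature maps
print(s)  # prints 12
```

Under these assumptions the conv5 feature map that feeds the RPN and the ROI pooling layer would be 12x12; with other padding or input sizes the numbers change, but the formula is the same.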
Because the bottom convolutional layers are shared, the target bone region detection model is trained in four steps, and the shared features are learned through alternating optimization. Specifically, the training method of the target bone region detection model comprises the following four steps:
First, the RPN is trained, and the initial regions of interest are extracted through the RPN;
Second, the Fast R-CNN network is trained on these initial regions of interest; at this point the RPN and the Fast R-CNN network are independent of each other and do not share convolutional layer parameters;
Third, the parameters of the first five convolutional layers of the trained Fast R-CNN network are used to initialize the shared bottom convolutional layers, and the RPN is retrained to extract the initial regions of interest again; during this retraining the learning rate of the shared bottom convolutional layers is set to 0, i.e. their parameters are fixed and only the parameters of the other layers are adjusted;
Fourth, keeping the bottom convolutional layers unchanged, the layers specific to Fast R-CNN are trained and fine-tuned with the initial regions of interest extracted in the third step as input. Finally, the RPN and the Fast R-CNN network sharing the bottom convolutional layers are combined into the Faster R-CNN, which automatically extracts initial regions of interest and detects the final targets.
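The four-step alternating scheme can be illustrated with a toy parameter-sharing setup: the shared "backbone" parameters are updated only in the steps where they are trainable, while a frozen step (learning rate 0) leaves them untouched. Everything below is a schematic stand-in with scalar "layers", not the actual Faster R-CNN training code.

```python
class Layer:
    """A stand-in for a block of network parameters."""
    def __init__(self, value=0.0):
        self.value = value
        self.trainable = True

    def step(self, grad, lr=0.1):
        # A frozen layer (learning rate 0 in the patent) is not updated.
        if self.trainable:
            self.value -= lr * grad

shared = Layer()       # the shared bottom convolutional layers
rpn_head = Layer()     # RPN-specific layers
frcnn_head = Layer()   # Fast R-CNN-specific layers

def train_step(layers, grad=1.0):
    for layer in layers:
        layer.step(grad)

# Step 1: train the RPN (shared block and RPN head both update).
train_step([shared, rpn_head])
# Step 2: train Fast R-CNN from the proposals (in the patent this copy is
# still independent; here we simply update its head and the shared block).
train_step([shared, frcnn_head])
# Steps 3-4: freeze the shared layers and fine-tune only the task heads.
shared.trainable = False
before = shared.value
train_step([shared, rpn_head])
train_step([shared, frcnn_head])
assert shared.value == before  # shared features unchanged while frozen
```

The point of the sketch is only the freezing mechanism: once `trainable` is False, further training steps adjust the task-specific heads without disturbing the shared features, which is what setting the shared layers' learning rate to 0 achieves in the third and fourth steps.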
Step 302: and respectively shearing the ulna end region and the radius end region according to the detected position coordinates to obtain an ulna picture and a radius picture, and adjusting the sheared ulna picture and the radius picture into a uniform size.
Step 400: marking bone age labels corresponding to the ulna picture and the radius picture according to ulna and radius tail end bone maturity evaluation standards respectively to obtain a bone age stage classification data set;
in step 400, the bone age labels of the ulna and radius are marked as follows: specialists set the bone age stages of the ulna (U1-U9) and radius (R1-R11) in each bone image according to the bone maturity assessment criteria proposed by Luk et al. in 2013 for assessing bone maturity from X-ray films of the distal radius and ulna. These criteria define the different maturity stages of the radial and ulnar epiphyses, grading the radius into R1-R11 and the ulna into U1-U9, and establish a close relation between the stages and the onset and cessation of the adolescent growth spurt, which is of great value for clinical decision-making; compared with other existing bone age assessment criteria, they also fit the skeletal development of present-day Chinese adolescents better. It should be understood that the present application is equally applicable to other bone age assessment criteria.
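For training, each stage label of the Luk et al. grading (R1-R11 for the radius, U1-U9 for the ulna) has to be mapped to a class index for the classifier. A trivial sketch of such a label encoding follows; the index scheme itself is an implementation choice, not part of the patent.

```python
# Stage labels from the Luk et al. criteria.
RADIUS_STAGES = [f"R{i}" for i in range(1, 12)]  # R1..R11
ULNA_STAGES = [f"U{i}" for i in range(1, 10)]    # U1..U9

# Map each stage label to a 0-based class index for the classifier.
radius_to_idx = {s: i for i, s in enumerate(RADIUS_STAGES)}
ulna_to_idx = {s: i for i, s in enumerate(ULNA_STAGES)}

print(len(radius_to_idx), len(ulna_to_idx))    # prints 11 9
print(radius_to_idx["R7"], ulna_to_idx["U5"])  # prints 6 4
```

The classifier's output layer size then matches the number of stages (11 for radius pictures, 9 for ulna pictures), and the predicted index is mapped back to the stage label when reporting results.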
Step 500: constructing a skeleton stage classification model;
step 600: respectively inputting ulna pictures and/or radius pictures in the bone age stage classification data set and bone age labels corresponding to the ulna and/or radius into a bone stage classification model, and training the bone stage classification model;
in step 600, the network structure of the bone age stage classification model is shown in fig. 4. The bone age stage classification model is implemented as a multi-layer convolutional neural network. The ulna picture and/or the radius picture enters at the input layer; the convolutional layers extract features; the pooling layers reduce the data dimensionality and improve feature invariance; and after several rounds of convolution and pooling have combined the features, the fully connected layer selects the effective features and the output layer emits the recognition result.
Specifically, please refer to fig. 5, which is a flowchart illustrating a bone age classification model training method according to an embodiment of the present application. The bone age stage classification model training method comprises the following steps:
step 601: initializing bone age stage classification model parameters;
step 602: convolving and pooling the input ulna picture and/or radius picture through multiple convolutional and pooling layers to extract the feature information of the ulna picture and/or radius picture;
step 603: passing the extracted feature information of the ulna picture and/or radius picture through the fully connected layer, obtaining via the softmax layer the probability that the picture belongs to each bone age stage, and outputting through the output layer the predicted bone age value corresponding to the ulna picture and/or radius picture;
step 604: forming a loss function L from the error between the output predicted bone age value and the bone age label marked on the ulna picture and/or radius picture;
in step 604, the present application uses multi-class cross entropy as the loss function so that the bone age stage corresponding to the ulna picture and/or radius picture is determined accurately and the prediction error is reduced. Let X be the prediction, Y the observed value, and n the number of observations; the multi-class cross entropy is then defined as follows:
L = -(1/n) · Σ_{i=1..n} Σ_j Y_ij · log(X_ij)

where X_ij is the predicted probability that sample i belongs to bone age stage j, and Y_ij is the corresponding observed label.
step 605: judging whether the loss function L has reached its minimum; if not, executing step 606; if it has, executing step 607;
step 606: adjusting the network parameters with the back propagation algorithm until the loss function L reaches its minimum;
step 607: saving the network parameters; training of the bone age stage classification model is complete.
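Steps 601-607 can be illustrated in miniature with a single softmax layer trained by gradient descent on the multi-class cross entropy of step 604. The real model is the multi-layer CNN of fig. 4, so everything below — the single layer, the sizes, the learning rate, the synthetic data — is a toy stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: flattened "bone pictures" with 16 features, 4 bone age
# stages, and a single softmax layer instead of the full CNN of fig. 4.
n, d, k = 64, 16, 4
X = rng.normal(size=(n, d))
labels = rng.integers(0, k, size=n)
Y = np.eye(k)[labels]                     # one-hot bone age labels

W = np.zeros((d, k))                      # step 601: initialize parameters
b = np.zeros(k)

def forward(X):
    logits = X @ W + b                    # steps 602-603: features -> scores
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # softmax probabilities

losses = []
for _ in range(200):                      # steps 604-606: loss + backprop
    P = forward(X)
    losses.append(-np.sum(Y * np.log(P + 1e-12)) / n)
    grad = (P - Y) / n                    # gradient of cross entropy w.r.t. logits
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)
# step 607 would save W and b once the loss stops decreasing.
```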
Step 700: inputting a bone image to be identified into the trained target bone position detection model, detecting the position coordinates of the ulna end region and the radius end region through the model, and automatically cropping according to the detection result to obtain an ulna picture and a radius picture; inputting the cropped ulna picture and/or radius picture into the trained bone age stage classification model for bone age identification;
in step 700, once the models have been constructed and trained, an automatic bone age identification system is in place, comprising the target bone position detection model for locating the target bone regions and the bone age stage classification model for identifying the bone age. When a bone maturity analysis is needed, a hand X-ray picture is input; the target region detection model detects the position coordinates of the ulna end region and the radius end region; the ulna picture and radius picture are cropped automatically according to the detection result; and the cropped ulna picture and/or radius picture is input into the bone age stage classification model, which outputs the final identification result. The whole bone age identification process runs without manual intervention. By extracting features automatically and performing recognition and classification with deep learning, the method achieves better performance than traditional approaches.
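The detect-crop-classify flow of step 700 can be wired together as below. Both models are stand-in stubs here — the fixed box coordinates and the uniform class scores are placeholders, not trained outputs — and only the pipeline structure follows the text:

```python
import numpy as np

def detect_regions(xray):
    """Stand-in for the trained target bone position detection model.

    A real model would return learned ulna/radius bounding boxes;
    the coordinates below are fixed placeholders.
    """
    return {"ulna": (300, 380, 380, 470), "radius": (180, 390, 270, 480)}

def classify_stage(patch, stages):
    """Stand-in for the trained bone age stage classification model."""
    scores = np.ones(len(stages)) / len(stages)  # uniform placeholder scores
    return stages[int(np.argmax(scores))]

def identify_bone_age(xray):
    boxes = detect_regions(xray)
    result = {}
    for bone, stages in (("ulna", [f"U{i}" for i in range(1, 10)]),
                         ("radius", [f"R{i}" for i in range(1, 12)])):
        x1, y1, x2, y2 = boxes[bone]
        patch = xray[y1:y2, x1:x2]               # automatic cropping
        result[bone] = classify_stage(patch, stages)
    return result

result = identify_bone_age(np.zeros((512, 512)))
```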
Please refer to fig. 6, which is a schematic structural diagram of a bone age identification system according to an embodiment of the present application. The bone age identification system comprises an image acquisition module, a region marking module, a first model building module, a position detection module, a picture cutting module, a bone age marking module, a second model building module and a bone age identification module.
An image acquisition module: used for acquiring bone images. In this embodiment the bone image is an X-ray picture of the whole hand, but it may also be image data of other body parts or of other modalities.
A region marking module: the system is used for marking an ulna terminal area and a radius terminal area in an acquired bone image respectively to obtain a target bone position detection data set;
a first model building module: the system is used for constructing a target skeleton region detection model and training the target skeleton position detection model through a skeleton image in a target skeleton position detection data set;
a position detection module: the system comprises a skeleton image acquisition module, a target skeleton position detection module and a data processing module, wherein the skeleton image acquisition module is used for acquiring a skeleton image of a patient; the original skeleton image is an X-ray picture of the whole hand, so that the whole image is large in size and different in size, and is not favorable for being used as the input of a convolutional neural network. The ulna end area and the radius end area are only a small part of the original bone image with relatively fixed positions, and the ulna end area and the radius end area do not change greatly in size and are relatively fixed in size. In order to reduce the influence of other irrelevant areas and reduce the size of the area needing to be identified, the position coordinates of the ulna ending area and the radius ending area are automatically detected through the target bone position detection model, manual collection is not needed, and time and labor are saved, and the method is efficient and rapid. In the model training stage, the detection precision of the target bone position detection model can be judged according to the detection result output by the target bone position detection model and the error between the marked ulna terminal area and the radius terminal area.
In the embodiment of the application, the target bone region detection model uses the Faster R-CNN algorithm: it is composed of an RPN network and a Fast R-CNN network that share the bottom convolutional layers. The RPN network extracts initial regions of interest for the ulna and radius in the bone image, and the Fast R-CNN network further analyzes and refines these initial regions of interest to obtain the final position detection result for the ulna and radius.
Specifically, ZFNet is selected as the base network of the target bone region detection model; that is, the shared bottom part comprises 5 convolutional layers. The first convolutional layer has 96 kernels of size 7x7 and is followed by a first 3x3 max pooling layer; the second convolutional layer has 256 kernels of size 5x5 and is followed by a second 3x3 max pooling layer; both the first and second convolutional layers have a stride of 2. The third, fourth, and fifth convolutional layers all use 3x3 kernels, produce 384, 384, and 256 feature maps respectively, and have a stride of 1. A sixth 3x3 convolutional layer is added on top of the bottom convolutional layers and is connected to two 1x1 convolutional branches, which output the initial region classification scores and the bounding boxes respectively, forming the RPN network; the initial regions of interest for the ulna and radius are extracted through this RPN network. The bottom convolutional layers are also connected, through an ROI pooling layer, to a first and a second fully connected layer of 4096 units each, which output the classification scores and the bounding-box position coordinates respectively for the initial regions of interest extracted by the RPN network, forming the bounding-box coordinates and classification finally output by the Fast R-CNN network. The number of output categories is set to the number of categories to be detected plus one background category, i.e. 3.
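The layer sizes above can be sanity-checked with the standard output-size formula for convolution and pooling, floor((n − k + 2p)/s) + 1. The padding values below are assumptions (the usual ZFNet-style choices); the patent specifies only the kernels, strides, and feature-map counts:

```python
def out_size(n, kernel, stride, pad=0):
    """Spatial output size of a conv/pool layer: floor((n - k + 2p)/s) + 1."""
    return (n - kernel + 2 * pad) // stride + 1

# Trace the five shared ZFNet-style layers for a 224x224 input.
# Padding values (and the pooling stride of 2) are assumptions.
s = 224
s = out_size(s, 7, 2, pad=1)   # conv1: 96 kernels, 7x7, stride 2
s = out_size(s, 3, 2, pad=1)   # max pool 3x3
s = out_size(s, 5, 2, pad=0)   # conv2: 256 kernels, 5x5, stride 2
s = out_size(s, 3, 2, pad=1)   # max pool 3x3
for _ in range(3):             # conv3-5: 3x3, stride 1
    s = out_size(s, 3, 1, pad=1)
```

With these assumed paddings the spatial size works out to 13x13 after conv5, which matches the feature-map size commonly quoted for ZFNet.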
Because the bottom convolutional layers are shared, the target bone region detection model is trained in four steps, learning the shared features through alternating optimization. Specifically, the training method of the target bone region detection model comprises the following four steps:
firstly, training an RPN network, and extracting an initial interest area through the RPN network;
secondly, training the Fast R-CNN network on the initial regions of interest; at this stage the RPN network and the Fast R-CNN network are independent of each other and do not share convolutional layer parameters;
thirdly, initializing the RPN network with the parameters of the first five convolutional layers of the trained Fast R-CNN network, taking these layers as the shared bottom convolutional layers, retraining the RPN network, and extracting the initial regions of interest again; during this retraining the learning rate of the shared bottom convolutional layers is set to 0, so their parameters stay fixed and only the parameters of the other layers are adjusted.
And fourthly, keeping the bottom convolutional layers unchanged, the task-specific layers of the Fast R-CNN network are trained and fine-tuned with the initial regions of interest extracted in the third step as input; finally, the RPN network and the Fast R-CNN network, sharing the same bottom convolutional layers, are combined into the Faster R-CNN network, which automatically extracts initial regions of interest and detects the final targets.
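The four-step alternating schedule can be sketched as parameter-group bookkeeping: which groups receive gradient updates at each step. The dictionaries and the +0.1 "update" are toy stand-ins, and the two networks' private conv layers are collapsed into one shared group for brevity; only the freeze/unfreeze schedule follows the text:

```python
def train(params, trainable, lr=0.1):
    """Toy gradient step: nudge only the trainable parameter groups.

    Leaving a group out of `trainable` (equivalently, giving it a
    learning rate of 0) is the freezing trick used in steps 3 and 4.
    """
    for name in trainable:
        params[name] += lr   # stand-in for a real backprop update

# Parameter groups: shared bottom conv layers plus the two task heads.
net = {"shared_conv": 0.0, "rpn_head": 0.0, "fastrcnn_head": 0.0}

train(net, ["shared_conv", "rpn_head"])        # step 1: train RPN end to end
train(net, ["shared_conv", "fastrcnn_head"])   # step 2: train Fast R-CNN
frozen = net["shared_conv"]
train(net, ["rpn_head"])                       # step 3: conv layers fixed, tune RPN
train(net, ["fastrcnn_head"])                  # step 4: conv layers fixed, tune Fast R-CNN
```

After step 4 the two heads sit on identical shared features, which is what lets the combined network run detection in a single forward pass.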
The picture cutting module: used for cropping the ulna end region and the radius end region respectively according to the detected position coordinates to obtain an ulna picture and a radius picture, and resizing the cropped ulna picture and radius picture to a uniform size.
Bone age marking module: used for marking the bone age labels corresponding to the ulna picture and the radius picture respectively according to the distal ulna and radius bone maturity assessment criteria, obtaining a bone age stage classification data set. Marking the bone age labels of the ulna picture and the radius picture means: a specialist physician assigns each ulna picture a stage in U1-U9 and each radius picture a stage in R1-R11 according to the bone maturity assessment criteria proposed by Luk et al. in 2013 for assessing bone maturity from X-ray films of the distal radius and ulna. These criteria define distinct maturity stages of the radial and ulnar epiphyses, dividing radial maturity into stages R1-R11 and ulnar maturity into stages U1-U9. The stages correlate closely with the onset and cessation of the adolescent growth spurt, which makes them valuable for clinical decision-making, and compared with other existing bone age assessment standards they better match the skeletal development of present-day Chinese adolescents. It is to be understood that the present application is equally applicable to other bone age assessment criteria.
A second model building module: used for constructing the bone age stage classification model, inputting the ulna pictures and/or radius pictures in the bone age stage classification data set together with their corresponding bone age labels into the model, and training it. The bone age stage classification model is implemented as a multi-layer convolutional neural network: the ulna picture and/or radius picture enters at the input layer; the convolutional layers extract features; the pooling layers reduce the data dimensionality and improve feature invariance; and after several rounds of convolution and pooling have combined the features, the fully connected layer selects the effective features and the output layer emits the recognition result.
Specifically, the second model building module includes:
an initialization unit: the method is used for initializing bone age stage classification model parameters;
a feature extraction unit: used for convolving and pooling the input ulna picture and/or radius picture through multiple convolutional and pooling layers to extract the feature information of the ulna picture and/or radius picture;
a result output unit: used for passing the extracted feature information of the ulna picture and/or radius picture through the fully connected layer, obtaining via the softmax layer the probability that the picture belongs to each bone age stage, and outputting through the output layer the predicted bone age value corresponding to the ulna picture and/or radius picture;
a loss function calculation unit: used for forming a loss function L from the error between the output predicted bone age value and the bone age label marked on the ulna picture and/or radius picture, and judging whether the loss function has reached its minimum; if not, the network parameters are adjusted with the back propagation algorithm; if it has, the network parameters are saved. Multi-class cross entropy is used as the loss function so that the bone age stage corresponding to the ulna picture and/or radius picture is determined accurately and the prediction error is reduced. Let X be the prediction, Y the observed value, and n the number of observations; the multi-class cross entropy is then defined as follows:
L = -(1/n) · Σ_{i=1..n} Σ_j Y_ij · log(X_ij)

where X_ij is the predicted probability that sample i belongs to bone age stage j, and Y_ij is the corresponding observed label.
a parameter optimization unit: the method is used for adjusting network parameters by applying a back propagation algorithm until a loss function L meets a minimum value;
a parameter storage unit: and the method is used for storing the network parameters after the training of the bone age stage classification model is finished.
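The multi-class cross entropy used by the loss function calculation unit can be sketched in NumPy as follows, treating X as an n x k matrix of predicted probabilities and Y as the matching one-hot labels (the small eps guard is an implementation detail added here, not part of the patent):

```python
import numpy as np

def multiclass_cross_entropy(X, Y, eps=1e-12):
    """L = -(1/n) * sum_i sum_j Y[i, j] * log(X[i, j]).

    X: (n, num_classes) predicted probabilities (rows sum to 1).
    Y: (n, num_classes) one-hot observed labels.
    """
    n = X.shape[0]
    return -np.sum(Y * np.log(X + eps)) / n

# Two samples over 3 bone age stages, both predicted mostly correctly.
X = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
Y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
loss = multiclass_cross_entropy(X, Y)
```

Because Y is one-hot, only the log-probability assigned to the true stage of each sample contributes, so the loss shrinks exactly when those probabilities grow.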
Bone age identification module: used for inputting a bone image to be identified into the trained target bone position detection model, detecting the position coordinates of the ulna end region and the radius end region, automatically cropping according to the detection result to obtain an ulna picture and a radius picture, and inputting the cropped ulna picture and/or radius picture into the trained bone age stage classification model for bone age identification. When a bone age analysis is needed, a hand X-ray picture is input; the target region detection model detects the position coordinates of the ulna end region and the radius end region; the ulna picture and radius picture are cropped automatically according to the detection result; and the cropped ulna picture and/or radius picture is input into the bone age stage classification model, which outputs the final identification result. The whole bone age identification process runs without manual intervention. By extracting features automatically and performing recognition and classification with deep learning, the method achieves better performance than traditional approaches.
Fig. 7 is a schematic structural diagram of a hardware device of a bone age identification method according to an embodiment of the present invention, and as shown in fig. 7, the device includes one or more processors and a memory. Taking a processor as an example, the apparatus may further include: an input system and an output system.
The processor, memory, input system, and output system may be connected by a bus or other means, as exemplified by the bus connection in fig. 7.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules. The processor executes various functional applications and data processing of the electronic device, i.e., implements the processing method of the above-described method embodiment, by executing the non-transitory software program, instructions and modules stored in the memory.
The memory may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data and the like. Further, the memory may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor; such remote memories may be connected to the processing system over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input system may receive input numeric or character information and generate a signal input. The output system may include a display device such as a display screen.
The one or more modules are stored in the memory and, when executed by the one or more processors, perform the following for any of the above method embodiments:
step a: inputting a skeleton image into a target skeleton position detection model, and detecting position coordinates of a target skeleton in the skeleton image through the target skeleton position detection model;
step b: cutting a target skeleton picture according to the position coordinates of the target skeleton;
step c: and inputting the target bone picture into a bone stage classification model for bone age identification.
The above product can execute the method provided by the embodiments of the present invention and possesses the corresponding functional modules and beneficial effects. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present invention.
An embodiment of the present invention provides a non-transitory (non-volatile) computer storage medium storing computer-executable instructions that may perform the following operations:
step a: inputting a skeleton image into a target skeleton position detection model, and detecting position coordinates of a target skeleton in the skeleton image through the target skeleton position detection model;
step b: cutting a target skeleton picture according to the position coordinates of the target skeleton;
step c: and inputting the target bone picture into a bone stage classification model for bone age identification.
An embodiment of the present invention provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the following:
step a: inputting a skeleton image into a target skeleton position detection model, and detecting position coordinates of a target skeleton in the skeleton image through the target skeleton position detection model;
step b: cutting a target skeleton picture according to the position coordinates of the target skeleton;
step c: and inputting the target bone picture into a bone stage classification model for bone age identification.
According to the bone age identification method, the bone age identification system, and the electronic equipment, the position coordinates of the target bone are detected from the bone image through the target bone region detection model, and the target bone picture is automatically cropped according to the detection result; the cropped target bone picture is then input into the bone age stage classification model for bone age identification. Compared with the prior art, the method has at least the following advantages:
1. the method automatically detects the corresponding target bone region from the captured hand X-ray picture using a deep-learning-based object detection method and automatically crops the corresponding target bone picture according to the detection result; no manual collection is needed, which saves time and labor and is efficient and fast;
2. the method classifies and identifies the target bone pictures automatically through the bone age stage classification model using deep learning, without manual intervention, improving both the effect and the efficiency of bone age identification and outperforming the prior art;
3. the method takes the ulna and radius as the target bones and adopts the assessment criteria proposed by Luk et al. in 2013 for evaluating bone maturity from X-ray films of the distal radius and ulna, which match the skeletal development of Chinese adolescents better than other bone age assessment standards.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A bone age identification method, comprising:
step a: inputting a skeleton image into a target skeleton position detection model, and detecting position coordinates of a target skeleton in the skeleton image through the target skeleton position detection model;
step b: cutting a target skeleton picture according to the position coordinates of the target skeleton;
step c: inputting the target bone picture into a bone stage classification model for bone age identification;
the step a also comprises the following steps: marking a target skeleton region in the collected skeleton image to obtain a target skeleton position detection data set; constructing a target skeleton region detection model, and training the target skeleton position detection model through a skeleton image in the target skeleton position detection data set;
the target bone region detection model comprises an RPN network and a Fast R-CNN network, wherein the RPN network and the Fast R-CNN network share a bottom layer convolution layer, the bottom layer convolution layer comprises 5 layers of convolution layers, a sixth convolution layer is arranged behind the bottom layer convolution layer, the sixth convolution layer is connected with 2 convolution branches, initial region classification scores and a boundary frame are respectively output through the 2 convolution branches to form the RPN network, and an initial interest region of a target bone is extracted through the RPN network; the bottom layer convolution layer is connected with a first full-connection layer and a second full-connection layer through an ROI (region of interest) pooling layer, and the first full-connection layer and the second full-connection layer respectively output classification scores and position coordinates of a boundary frame according to the initial interest region;
in the step c, the inputting the target bone picture into a bone stage classification model for bone age identification further comprises: marking out bone age labels corresponding to the target bone picture according to a bone maturity evaluation standard to obtain a bone age stage classification data set, constructing a bone stage classification model, and training the constructed bone stage classification model through the bone age stage classification data set.
2. The bone age identification method according to claim 1, wherein the step a further comprises: collecting a skeleton image; the bone image is an X-ray picture.
3. The bone age identification method according to claim 2, wherein the training of the construction of the bone stage classification model by the bone age stage classification dataset specifically comprises:
step c 1: initializing bone age stage classification model parameters;
step c 2: performing convolution pooling on the target bone picture, and extracting characteristic information of the target bone picture;
step c 3: calculating the extracted characteristic information of the ulna picture and/or the radius picture to obtain various probabilities, and outputting a bone age predicted value corresponding to the target bone picture;
step c 4: forming a loss function according to the output bone age predicted value and the error between the bone age labels, judging whether the loss function meets the minimum value, and if not, adjusting network parameters by using a back propagation algorithm; if the minimum value is met, the network parameters are saved.
4. A bone age identification system, comprising:
a position detection module: the bone image is input into a target bone position detection model, and the position coordinates of a target bone in the bone image are detected through the target bone position detection model;
the picture cutting module: the system is used for cutting out a target skeleton picture according to the position coordinates of the target skeleton;
bone age identification module: the system is used for inputting the target bone picture into a bone stage classification model for bone age identification;
the system further comprises:
a region marking module: the system comprises a skeleton position detection data set, a skeleton position detection data set and a skeleton position detection data set, wherein the skeleton position detection data set is used for marking a target skeleton region in an acquired skeleton image;
a first model building module: the system is used for constructing a target skeleton region detection model, and training the target skeleton position detection model through a skeleton image in the target skeleton position detection data set;
the target bone region detection model comprises an RPN network and a Fast R-CNN network, wherein the RPN network and the Fast R-CNN network share a bottom layer convolution layer, the bottom layer convolution layer comprises 5 layers of convolution layers, a sixth convolution layer is arranged behind the bottom layer convolution layer, the sixth convolution layer is connected with 2 convolution branches, initial region classification scores and a boundary frame are respectively output through the 2 convolution branches to form the RPN network, and an initial interest region of a target bone is extracted through the RPN network; the bottom layer convolution layer is connected with a first full-connection layer and a second full-connection layer through an ROI (region of interest) pooling layer, and the first full-connection layer and the second full-connection layer respectively output classification scores and position coordinates of a boundary frame according to the initial interest region;
the system further comprises:
bone age marking module: the bone age label is used for marking a bone age label corresponding to the target bone picture according to the bone maturity evaluation standard to obtain a bone age stage classification data set;
a second model building module: the bone age stage classification data set is used for constructing a bone stage classification model, and the constructed bone stage classification model is trained through the bone age stage classification data set.
5. The bone age identification system of claim 4, further comprising an image acquisition module to acquire an image of a bone; the bone image is an X-ray picture.
6. The bone age identification system of claim 5, wherein the second model building module comprises:
an initialization unit: the method is used for initializing bone age stage classification model parameters;
a feature extraction unit: the system is used for performing convolution pooling on the target bone picture and extracting the characteristic information of the target bone picture;
a result output unit: the system is used for calculating the extracted characteristic information of the ulna picture and/or the radius picture to obtain various probabilities and outputting a bone age predicted value corresponding to a target bone picture;
a loss function calculation unit: the system comprises a parameter optimizing unit, a parameter calculating unit and a parameter calculating unit, wherein the parameter optimizing unit is used for forming a loss function according to an error between an output bone age predicted value and a bone age label, judging whether the loss function meets a minimum value or not, and optimizing network parameters through the parameter optimizing unit if the loss function does not meet the minimum value; if the minimum value is met, saving the network parameters through the parameter storage unit;
a parameter optimization unit: the system is used for adjusting network parameters by applying a back propagation algorithm;
a parameter storage unit: and the method is used for storing the network parameters after the training of the bone age stage classification model is finished.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the bone age identification method of any one of claims 1 to 3.
CN201711125692.2A 2017-11-14 2017-11-14 Bone age identification method and system and electronic equipment Active CN107895367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711125692.2A CN107895367B (en) 2017-11-14 2017-11-14 Bone age identification method and system and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711125692.2A CN107895367B (en) 2017-11-14 2017-11-14 Bone age identification method and system and electronic equipment

Publications (2)

Publication Number Publication Date
CN107895367A CN107895367A (en) 2018-04-10
CN107895367B true CN107895367B (en) 2021-11-30

Family

ID=61805212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711125692.2A Active CN107895367B (en) 2017-11-14 2017-11-14 Bone age identification method and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN107895367B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108606811A (en) * 2018-04-12 2018-10-02 上海交通大学医学院附属上海儿童医学中心 Ultrasound bone age detection system and method
CN108852268A (en) * 2018-04-23 2018-11-23 浙江大学 Real-time marking system and method for abnormal features in digestive endoscopy images
CN108564134B (en) * 2018-04-27 2021-07-06 网易(杭州)网络有限公司 Data processing method, device, computing equipment and medium
CN108596904B (en) * 2018-05-07 2020-09-29 北京长木谷医疗科技有限公司 Method for generating positioning model and method for processing spine sagittal position image
CN108968991B (en) * 2018-05-08 2022-10-11 平安科技(深圳)有限公司 Hand bone X-ray film bone age assessment method, device, computer equipment and storage medium
CN110265119A (en) * 2018-05-29 2019-09-20 中国医药大学附设医院 Bone age assessment and height prediction model, system and prediction method
CN109215013B (en) * 2018-06-04 2023-07-21 平安科技(深圳)有限公司 Automatic bone age prediction method, system, computer device and storage medium
CN109002846B (en) * 2018-07-04 2022-09-27 腾讯医疗健康(深圳)有限公司 Image recognition method, device and storage medium
JP6999812B2 (en) * 2018-08-01 2022-01-19 中國醫藥大學附設醫院 Bone age evaluation and height prediction model establishment method, its system and its prediction method
CN110838121A (en) * 2018-08-15 2020-02-25 辽宁开普医疗系统有限公司 Child hand bone joint identification method for assisting bone age identification
CN108992082A (en) * 2018-08-21 2018-12-14 上海臻道软件技术有限公司 Bone age detection system and detection method
KR102355327B1 (en) * 2018-09-10 2022-02-21 주식회사 소노엠 A system for measuring bone age
CN109377484B (en) * 2018-09-30 2022-04-22 杭州依图医疗技术有限公司 Method and device for detecting bone age
CN109285154A (en) * 2018-09-30 2019-01-29 杭州依图医疗技术有限公司 Method and device for detecting bone age
CN109272002B (en) * 2018-09-30 2020-11-24 杭州依图医疗技术有限公司 Bone age film classification method and device
CN109146879B (en) * 2018-09-30 2021-05-18 杭州依图医疗技术有限公司 Method and device for detecting bone age
CN109741309B (en) * 2018-12-27 2021-04-02 北京深睿博联科技有限责任公司 Bone age prediction method and device based on deep regression network
CN109816721B (en) * 2018-12-29 2021-07-16 上海联影智能医疗科技有限公司 Image positioning method, device, equipment and storage medium
US11367181B2 (en) 2018-12-29 2022-06-21 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for ossification center detection and bone age assessment
CN110051376A (en) * 2019-03-05 2019-07-26 上海市儿童医院 Intelligent bone age detection method
CN109998576A (en) * 2019-03-05 2019-07-12 上海市儿童医院 Artificial intelligence bone age detection method
CN109998577A (en) * 2019-03-05 2019-07-12 上海市儿童医院 Artificial intelligence bone age detection terminal device
CN109948522B (en) * 2019-03-18 2020-12-01 浙江工业大学 X-ray hand bone maturity interpretation method based on deep neural network
CN110009605A (en) * 2019-03-21 2019-07-12 浙江工业大学 Bone age prediction method and system based on deep learning
CN109961044B (en) * 2019-03-22 2021-02-02 浙江工业大学 Region of interest extraction method for the CHN method based on shape information and convolutional neural networks
CN110503624A (en) * 2019-07-02 2019-11-26 平安科技(深圳)有限公司 Bone age detection method, system, device and readable storage medium
JP7226199B2 (en) * 2019-09-04 2023-02-21 株式会社島津製作所 Image analysis method, image processing device and bone densitometry device
CN110874834A (en) * 2019-10-22 2020-03-10 清华大学 Bone age prediction method and device, electronic equipment and readable storage medium
CN111046901A (en) * 2019-10-30 2020-04-21 杭州津禾生物科技有限公司 Automatic identification method for bone age image after digital processing
CN110782450B (en) * 2019-10-31 2020-09-29 北京推想科技有限公司 Hand carpal development grade determining method and related equipment
CN111415334A (en) * 2020-03-05 2020-07-14 北京深睿博联科技有限责任公司 Bone age prediction device
CN111507953A (en) * 2020-04-13 2020-08-07 武汉华晨酷神智能科技有限公司 AI bone age rapid identification method
CN111920430A (en) * 2020-07-04 2020-11-13 浙江大学山东工业技术研究院 Automatic bone age assessment method based on weakly supervised deep learning
CN112509688A (en) * 2020-09-25 2021-03-16 卫宁健康科技集团股份有限公司 Automatic analysis system, method, equipment and medium for pressure sore picture
CN112907537A (en) * 2021-02-20 2021-06-04 司法鉴定科学研究院 Skeleton sex identification method based on deep learning and on-site virtual simulation technology
CN115861154A (en) * 2021-09-24 2023-03-28 杭州朝厚信息科技有限公司 Method for determining developmental stage based on X-ray cephalometric image
CN114601483B (en) * 2022-05-11 2022-08-16 山东第一医科大学第一附属医院(山东省千佛山医院) Bone age analysis method and system based on image processing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1804868A (en) * 2006-01-19 2006-07-19 昆明利普机器视觉工程有限公司 Automatic machine image recognition method and apparatus
CN106250812A (en) * 2016-07-15 2016-12-21 汤平 Model recognition method based on a fast R-CNN deep neural network
CN106504233A (en) * 2016-10-18 2017-03-15 国网山东省电力公司电力科学研究院 Power component recognition method and system for UAV inspection images based on Faster R-CNN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep learning for automated skeletal bone age assessment in X-ray images; C. Spampinato et al.; Elsevier; 2016-10-29; pp. 41-51 *
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks; Shaoqing Ren et al.; arXiv; 2016-01-06; pp. 1-14 *

Also Published As

Publication number Publication date
CN107895367A (en) 2018-04-10

Similar Documents

Publication Publication Date Title
CN107895367B (en) Bone age identification method and system and electronic equipment
JP6843086B2 (en) Image processing systems, methods for performing multi-label semantic edge detection in images, and non-transitory computer-readable storage media
US10706333B2 (en) Medical image analysis method, medical image analysis system and storage medium
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
US20240062369A1 (en) Detection model training method and apparatus, computer device and storage medium
US20230260108A1 (en) Systems and methods for automatic detection and quantification of pathology using dynamic feature classification
WO2018108129A1 (en) Method and apparatus for use in identifying object type, and electronic device
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN110245657B (en) Pathological image similarity detection method and detection device
CN113728335A (en) Method and system for classification and visualization of 3D images
CN111462049B (en) Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
US11967181B2 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
US11501431B2 (en) Image processing method and apparatus and neural network model training method
US11684333B2 (en) Medical image analyzing system and method thereof
CN110246579B (en) Pathological diagnosis method and device
CN110827236A (en) Neural network-based brain tissue layering method and device, and computer equipment
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
CN111667474A (en) Fracture identification method, apparatus, device and computer readable storage medium
CN112614573A (en) Deep learning model training method and device based on pathological image labeling tool
CN113793385A (en) Method and device for positioning fish head and fish tail
CN112801940A (en) Model evaluation method, device, equipment and medium
CN110633630B (en) Behavior identification method and device and terminal equipment
CN109543716B (en) K-line form image identification method based on deep learning
CN114360695B (en) Auxiliary system, medium and equipment for breast ultrasonic scanning and analyzing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant