CN114429468A - Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium - Google Patents


Info

Publication number
CN114429468A
CN114429468A
Authority
CN
China
Prior art keywords
bone age
ultrasonic image
target
image
quality control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210097866.3A
Other languages
Chinese (zh)
Inventor
Zhang Chao (张超)
Lyu Wenzhi (吕文志)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Julei Technology Co ltd
Tongji Medical College of Huazhong University of Science and Technology
Original Assignee
Wuhan Julei Technology Co ltd
Tongji Medical College of Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Julei Technology Co ltd and Tongji Medical College of Huazhong University of Science and Technology
Priority to CN202210097866.3A
Publication of CN114429468A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0875 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of bone
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5223 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Abstract

The invention discloses a bone age measurement method, a bone age measurement system, an electronic device and a computer-readable storage medium, wherein the method comprises the following steps: acquiring an ultrasound image and preprocessing it; screening the ultrasound image for quality with a trained deep-learning quality control model; extracting the ossification center region and the cartilage region from ultrasound images that pass quality screening with a trained deep-learning multi-target segmentation model; and performing multi-target feature fusion on the ossification center region, the cartilage region and clinical information, then performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value. The method and system are non-invasive and radiation-free, safe, convenient to perform and accurate in their results.

Description

Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium
Technical Field
The invention relates to the technical field of bone age measurement, and in particular to a bone age measurement method, a bone age measurement system, an electronic device and a computer-readable storage medium.
Background
Bone age is an index of skeletal maturity in children; assessing bone age and determining its difference from chronological age plays an important role in pediatric clinical work. The most common means of assessing bone age is currently the hand-wrist X-ray film, but X-ray examination involves ionizing radiation and is unsuited to repeated short-interval examinations of pediatric patients. Ultrasound, by contrast, is a non-invasive, non-ionizing, convenient, economical and practical imaging modality that is widely used in clinical work.
Recent studies have shown that bone age can be evaluated by measuring the heights of the ossification center and the epiphyseal cartilage with conventional ultrasound to obtain an ossification ratio. However, this approach suffers from manual measurement error and the lack of automated measurement, which makes it inconvenient for sonographers to use and popularize.
Disclosure of Invention
The invention aims to overcome the deficiencies of the prior art described above and to provide a bone age measurement method, a bone age measurement system, an electronic device and a computer-readable storage medium.
In order to achieve the above object, the present invention provides a bone age measuring method, comprising the steps of:
step S1: collecting an ultrasonic image and preprocessing the ultrasonic image;
step S2: performing quality screening on the ultrasonic image by using a trained quality control model based on deep learning;
step S3: extracting a ossification center region and a cartilage region from an ultrasonic image qualified by quality screening by using a trained multi-target segmentation model based on deep learning;
step S4: performing multi-target feature fusion on the ossification center region, the cartilage region and clinical information, and performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value.
Further, in step S1, the preprocessing the ultrasound image includes:
1) denoising the acquired ultrasound image with a spatial mean filter;
2) flipping, scaling, rotating and/or adjusting the brightness of the denoised ultrasound image, and retaining the image produced by each type of operation so as to expand the number of ultrasound images.
Further, in step S2, the training process of the quality control model based on deep learning includes:
labeling each frame of ultrasound image with a corresponding quality-grade reference value based on expert experience;
building a quality control model on a convolutional neural network structure and setting the loss function to the categorical cross-entropy loss; training the quality control model with the ultrasound images and their quality-grade reference values as input to obtain a quality control model training weight file;
through back-propagation during training, continuously updating the weights of each layer of the quality control model by minimizing the loss function, so that the model output iteratively approaches the true value, until the cross-entropy loss between the forward-propagated predictions and the labeled quality-grade reference values converges; the optimized quality control model weight file thus obtained is loaded into the quality control model.
Further, in step S3, the training process of the multi-target segmentation model based on deep learning includes:
labeling an ossification center reference region and a cartilage reference region on each frame of ultrasound image that passes quality screening, based on expert experience;
building a multi-target segmentation model on a convolutional neural network structure and setting the loss function to the Dice coefficient; training the multi-target segmentation model with the quality-screened ultrasound images and their ossification center and cartilage reference regions as input to obtain a multi-target segmentation model training weight file;
through back-propagation during training, continuously updating the weights of each layer of the multi-target segmentation model by minimizing the loss function, so that the model output iteratively approaches the true value, until the Dice coefficient between the forward-propagated segmentation predictions and the labeled ossification center and cartilage reference regions converges, yielding an optimized multi-target segmentation model weight file; the optimized weight file is loaded into the multi-target segmentation model.
Further, in step S4, performing multi-target feature fusion on the ossification center region, the cartilage region and the clinical information, including:
extracting high-throughput image features from the ossification center region and the cartilage region respectively;
removing constant-valued features from the image features by variance analysis so as to eliminate irrelevant features, then deleting redundant features by one-factor significance analysis to obtain a preliminarily screened set of relevant image features;
fusing the clinical information of the examined subject with the preliminarily screened image features to obtain the fused multi-target features;
wherein the clinical information includes at least gender, actual age, and family genetic history.
Further, the image features include one or more of intensity features, shape features, texture features and wavelet features.
Further, in step S4, performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value includes:
labeling each frame of ultrasound image with a bone age reference value based on expert experience; inputting the bone age reference values and the fused multi-target features into a trained XGBoost prediction model; fitting the fused multi-target features to the bone age reference values by regression analysis; and obtaining the bone age measurement value from the fitting result.
The invention also provides a system for realizing the bone age measuring method, which comprises an image acquisition and preprocessing module, a quality control module, a multi-target segmentation module and a prediction module, wherein:
the image acquisition and preprocessing module is used for acquiring and preprocessing the ultrasonic image;
the quality control module is used for performing quality screening on the ultrasonic image by utilizing a trained quality control model based on deep learning;
the multi-target segmentation module is used for extracting a ossification center region and a cartilage region from the ultrasonic image qualified by quality screening by utilizing a trained multi-target segmentation model based on deep learning;
and the prediction module is used for performing multi-target feature fusion on the ossification center region, the cartilage region and the clinical information, and for performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value.
The present invention also provides an electronic device, comprising:
a memory for storing a computer software program;
and the processor is used for reading and executing the computer software program to realize the bone age measuring method.
The present invention also provides a computer-readable storage medium in which a computer software program for implementing the bone age measuring method described above is stored.
Compared with the prior art, the invention has the following advantages:
First, the bone age measurement method and system provided by the invention use ultrasound images of the radius at the examinee's wrist as the detection basis, avoiding the radiation exposure associated with traditional X-ray examination.
Second, filtering and image-count expansion during preprocessing improve the robustness of the subsequent deep-learning models; quality screening of the acquired ultrasound image samples by the quality control module improves the accuracy of subsequent detection and enables rapid image evaluation; the multi-target segmentation module extracts the target regions to be measured, from which image features are extracted and fused into multi-target features; and the prediction model performs regression analysis on the fused multi-target features to obtain the bone age measurement value.
Third, the invention realizes ultrasound-based bone age prediction, which is radiation-free and automated. The design with multiple AI models ensures the stability and usability of the prediction results and facilitates bone age measurement by sonographers of all experience levels.
Fourth, compared with traditional detection methods, the bone age measurement method offers high safety, simple operation and accurate measurement results.
Drawings
FIG. 1 is a flow chart of a bone age measurement method of the present invention;
FIG. 2 is a flow chart of ultrasound image preprocessing of the present invention;
FIG. 3 is a flowchart of a quality control model training process of the present invention;
FIG. 4 is a schematic diagram of the quality class classification of ultrasound images according to the present invention;
FIG. 5 is a flow chart of a multi-objective segmentation model training process of the present invention;
FIG. 6 is a schematic diagram of multi-target segmentation areas of an ultrasound image according to the present invention;
FIG. 7 is a flow chart of a prediction process of the prediction model of the present invention;
FIG. 8 is a block diagram of a bone age measurement system based on ultrasound images according to the present invention;
FIG. 9 is a block diagram of an electronic device according to the present invention;
fig. 10 is a block diagram of a computer-readable storage medium according to the present invention.
Detailed Description
The following describes embodiments of the present invention in detail; they are illustrative only and are not intended to limit the invention. The advantages of the invention will be apparent from this description.
Fig. 1 is a flowchart of a bone age measuring method provided in this embodiment, and as shown in fig. 1, the method includes the following steps:
collecting an ultrasonic image and preprocessing the ultrasonic image;
performing quality screening on the ultrasonic image by using a trained quality control model based on deep learning;
extracting a ossification center region and a cartilage region from an ultrasonic image qualified by quality screening by using a trained multi-target segmentation model based on deep learning;
and performing multi-target feature fusion on the ossification center region, the cartilage region and clinical information, and performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value.
It can be appreciated that, in view of the deficiencies described in the background art, embodiments of the present invention provide a bone age measurement method. The invention analyzes an ultrasound image of the radius at the human wrist with artificial-intelligence algorithms: combining a convolutional neural network with computer-vision image processing, it extracts ossification features of the radius (such as the ossification center region and the cartilage region) from the ultrasound image and analyzes those features with the XGBoost algorithm, thereby realizing automatic bone age measurement. By using ultrasound images of the radius at the examinee's wrist as the detection basis, the invention avoids the radiation exposure of traditional X-ray examination; by cascading multiple AI models, it ensures the stability and usability of the prediction results and facilitates bone age measurement by sonographers of all experience levels. Compared with traditional detection methods, the bone age measurement method offers high safety, simple operation and accurate measurement results.
In one possible embodiment, the preprocessing of the ultrasound image, as shown in fig. 2, includes:
denoising the acquired ultrasound image with a spatial mean filter;
and flipping, scaling, rotating and/or adjusting the brightness of the denoised ultrasound image, retaining the image produced by each type of operation so as to expand the number of ultrasound images.
It is understood that ultrasound images carry more shot noise than other images, so a spatial mean filter is applied. For a (2n+1) × (2n+1) spatial window centered at coordinates (x, y), the filtered value is:
g(x, y) = (1 / (2n+1)^2) · Σ_{i=-n..n} Σ_{j=-n..n} f(x+i, y+j)
where f is the input image and g is the filtered image.
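As a hedged illustration (not the patent's own implementation), the spatial mean filter above can be sketched in Python with NumPy; the replicate edge padding is an assumption, since the text does not specify how image borders are handled:

```python
import numpy as np

def mean_filter(image: np.ndarray, n: int = 1) -> np.ndarray:
    """Average each pixel over a (2n+1) x (2n+1) spatial window.

    Border handling (replicate padding here) is an assumption; the
    patent text does not specify how edges are treated.
    """
    k = 2 * n + 1
    padded = np.pad(image.astype(float), n, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

In practice a vectorized filter (e.g. `scipy.ndimage.uniform_filter`) would be used; the explicit loop is kept here to mirror the window formula directly.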
After the ultrasound image is filtered, image-count expansion is performed in order to improve the robustness of subsequent models and to simulate actual examination conditions, for example as follows:
(1) image flipping: because the physician may stand on either side of the patient's radius during the examination, the images are flipped left-right, expanding the image count by a factor of 2;
(2) image scaling: different physicians adjust the magnification differently during radius examination, the difference being about 0.1× after evaluation; the images are therefore scaled by factors of 0.90, 0.95, 1.05 and 1.10, expanding the image count by a factor of 4;
(3) image rotation: the angle at which the physician holds the probe during radius examination varies within about 20° after evaluation; the images are therefore rotated by -20°, -15°, -10°, -5°, 5°, 10°, 15° and 20°, expanding the image count by a factor of 8;
(4) image brightness adjustment: different physicians adjust the machine gain differently during radius examination, the brightness varying by about 0.1× after evaluation; image brightness is therefore adjusted by factors of 0.90, 0.95, 1.05 and 1.10, expanding the image count by a factor of 4.
After these expansion operations the number of ultrasound image samples is greatly increased; the expanded sample set can serve both as a training set for the AI models in the subsequent steps and as a test set for evaluating the trained models.
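The expansion scheme above can be sketched as follows. This is a minimal, dependency-free assumption of how the variants might be generated: nearest-neighbour resizing stands in for proper interpolation, and arbitrary-angle rotation (which would normally use `scipy.ndimage.rotate` or OpenCV) is omitted to keep the sketch self-contained:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return flip, scale and brightness variants of one ultrasound frame.

    A sketch of the expansion scheme described above; interpolation is
    nearest-neighbour and the +/-5..20 degree rotations are omitted here
    (they would normally use scipy.ndimage.rotate or OpenCV).
    """
    variants = [np.fliplr(image)]                   # left-right flip (x2)
    h, w = image.shape[:2]
    for s in (0.90, 0.95, 1.05, 1.10):              # scaling (x4)
        rows = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
        cols = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
        variants.append(image[np.ix_(rows, cols)])  # nearest-neighbour resize
    for b in (0.90, 0.95, 1.05, 1.10):              # brightness/gain (x4)
        variants.append(np.clip(image * b, 0, 255))
    return variants
```

Each returned variant would be kept alongside the original frame, which is how the per-operation expansion factors quoted above multiply the sample count.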
In a possible embodiment, the training process of the quality control model based on deep learning shown in fig. 3 includes:
(1) Each frame of ultrasound image in the training set is labeled with a corresponding quality-grade reference value based on expert experience; ultrasound images of different quality grades are shown in fig. 4.
It can be understood that before the quality control model is trained, quality-grade reference values must be set as the reference standard, so that predictions made during training can be compared against it to judge their accuracy. In this embodiment, radius ultrasound images, X-ray films and clinical information of a normal population are first collected with an ultrasound acquisition device; the clinical information includes at least sex, actual age and family genetic history, and is retained for subsequent steps. Two senior, experienced physicians then measure bone age from the information shown on the X-ray films, using the TW3 and GP atlas methods respectively; when their results differ, a further authoritative specialist performs the measurement, yielding the bone age reference value of the examinee corresponding to that X-ray film. The bone age reference value is recorded in the information of that examinee's ultrasound images and retained for the bone age prediction AI model in the subsequent steps. Next, two senior sonographers label the ossification center reference region and the cartilage reference region on each frame of ultrasound image; the labeled regions are retained for the subsequent multi-target segmentation model.
The two senior sonographers also grade the quality of each frame of ultrasound image as high, medium or low; this manual quality assessment is used as the quality-grade reference value with which each frame is labeled, and serves as the reference basis during quality control model training.
(2) A quality control model is built on a convolutional neural network structure with the loss function set to the categorical cross-entropy loss; the model is trained with the ultrasound images and their quality-grade reference values as input to obtain a quality control model training weight file.
(3) Through back-propagation during training, the weights of each layer of the quality control model are continuously updated by minimizing the loss function, so that the model output iteratively approaches the true value, until the cross-entropy loss between the forward-propagated predictions and the labeled quality-grade reference values converges; the optimized quality control model weight file thus obtained is loaded into the quality control model.
It can be understood that, to prevent overfitting, the quality control AI model is designed as a lightweight convolutional neural network: two convolution modules (each a two-dimensional convolution layer followed by a ReLU activation) are repeated, then a two-dimensional pooling layer is added; the features are vectorized through a fully connected layer, after which a one-dimensional convolution layer with L2 regularization and a one-dimensional pooling layer are added; finally a Softmax classifier produces three output classes, namely high, medium and low image quality, so that the network maps each image to its quality grade.
After the quality control model is built, it must be trained. Specifically, in this embodiment, weights are initialized from a truncated normal distribution, the optimizer is Adam, the loss function is the categorical cross-entropy, accuracy is the evaluation metric, the number of epochs is 1000, the batch size is 16, the initial learning rate is 0.001 and the learning-rate decay is 0.334. The training input is the radius ultrasound images labeled by the physicians with quality-grade reference values; using back-propagation with stochastic gradient descent, the per-layer weights are iteratively updated from the cross-entropy loss between the forward-propagated predictions and the labeled quality-grade reference values, and training stops when that loss converges, giving the trained model.
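To make the loss and convergence criterion above concrete, here is a hedged, dependency-free sketch: a softmax output trained by gradient steps on the categorical cross-entropy over the three quality grades. The single linear layer standing in for the convolutional backbone, and all shapes and the learning rate, are illustrative assumptions, not the patent's network:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y_onehot):
    # Categorical cross-entropy between predictions and reference labels.
    return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))

# Toy stand-in: a batch of 16 feature vectors -> 3 quality grades.
X = rng.normal(size=(16, 8))
y = rng.integers(0, 3, size=16)
Y = np.eye(3)[y]                       # one-hot quality-grade reference values
W = np.zeros((8, 3))

losses = []
for _ in range(200):                   # iterate until the loss settles
    P = softmax(X @ W)                 # forward propagation
    losses.append(cross_entropy(P, Y))
    W -= 0.1 * X.T @ (P - Y) / len(X)  # back-propagated gradient step
```

The stopping rule in the text ("until the loss converges") corresponds to watching `losses` flatten out rather than running a fixed iteration count.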
After the quality control model is trained, it is saved and the server can perform end-to-end inference on large numbers of images to evaluate image quality quickly; the quality control AI model's performance is evaluated via ROC curve analysis, accuracy, sensitivity, specificity, precision and the like. In the quality-screening stage of bone age detection, the quality control model removes low-quality ultrasound images and passes the medium- and high-quality images on to the next detection node, the multi-target segmentation model.
In a possible embodiment, the training process of the multi-target segmentation model based on deep learning as shown in fig. 5 includes:
(1) An ossification center reference region and a cartilage reference region are labeled, based on expert experience, on each frame of ultrasound image that passes quality screening; the labeled reference regions are shown in fig. 6.
It can be understood that the ossification center and cartilage reference regions labeled by senior, experienced physicians serve as standard values for the multi-target segmentation model, for example as the reference basis during its training.
(2) A multi-target segmentation model is built on a convolutional neural network structure with the loss function set to the Dice coefficient; the model is trained with the quality-screened ultrasound images and the ossification center and cartilage reference regions as input to obtain a multi-target segmentation model training weight file.
(3) Through back-propagation during training, the weights of each layer of the multi-target segmentation model are continuously updated by minimizing the loss function, so that the model output iteratively approaches the true value, until the Dice coefficient between the forward-propagated segmentation predictions and the labeled ossification center and cartilage reference regions converges, yielding an optimized multi-target segmentation model weight file; the optimized weight file is loaded into the multi-target segmentation model.
In this embodiment, to segment the multiple radius targets in the ultrasound image (i.e. the ossification center region and the cartilage region) quickly and well, a simplified U-Net convolutional neural network is built: the second set of convolution modules is removed at the U-Net encoder stage to obtain the down-sampled semantic features faster, and correspondingly the third set of convolution modules is removed at the decoder stage to transform the semantic features back to a segmentation result of the original image size faster; two-dimensional depthwise separable convolution layers replace the ordinary two-dimensional convolution layers to reduce the convolution operations during training and inference, improving the model's speed.
After the multi-target segmentation model is built, it must also be trained. Specifically, weights are initialized from a truncated normal distribution, the optimizer is Adam, the loss function is the Dice coefficient, the mean intersection-over-union is the evaluation metric, the number of epochs is 100, the batch size is 4 and the initial learning rate is 0.0001. The model input is the images whose quality the quality control model rated medium or high, and the physicians' two segmentation labels, the ossification center reference region and the cartilage reference region of the radius, serve as the training targets. Using back-propagation with stochastic gradient descent, the per-layer weights are iteratively updated from the Dice coefficient between the forward-propagated segmentation predictions and the labels; training stops when that Dice coefficient converges, and the trained model is saved.
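The Dice coefficient that drives the training above can be sketched as follows; the smoothing constant (to avoid division by zero on empty masks) is an assumption, and a Dice *loss* would be 1 minus this value:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, smooth: float = 1e-6) -> float:
    """Dice overlap between a predicted mask and a reference mask.

    Returns 1.0 for perfect overlap and ~0.0 for disjoint masks; the
    smoothing term is an assumed guard for empty masks.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return float((2 * inter + smooth) / (pred.sum() + ref.sum() + smooth))
```

During training this would be evaluated separately for the predicted ossification center and cartilage masks against the physicians' reference regions.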
After training of the multi-target segmentation model is finished, it can be applied to the extraction of the ossification center and cartilage regions for bone age detection. The server can perform end-to-end inference on a large number of ultrasound images, rapidly segmenting the ossification center and cartilage regions of the radius, and the efficiency of the applied segmentation model is evaluated by the intersection-over-union, the Dice coefficient, the accuracy and the like. The segmented ossification center and cartilage regions then flow to the next detection node, namely the bone age prediction model.
In one possible embodiment, the multi-objective feature fusion of the ossified central region and the cartilage region as shown in fig. 7 comprises:
(1) Extracting high-throughput image features from the ossification center region and the cartilage region segmented in the previous step according to the ISBI standard, wherein the image features comprise at least one or more of intensity features, shape features, texture features and wavelet features.
(2) Removing constant-valued features from the image features using analysis of variance, so as to eliminate irrelevant features; then deleting the features whose P value is greater than 0.05 through single-factor significance analysis, thereby removing redundant features and obtaining the preliminarily screened relevant image features.
(3) Fusing the clinical information of the examined subject, such as sex, actual age and family genetic history, with the preliminarily screened relevant image features to obtain the fused multi-target features.
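The screening in step (2) can be sketched as a two-stage filter, a variance filter followed by a one-way (single-factor) significance test (a sketch on synthetic data; the feature matrix, the grouping of samples and the group means are illustrative assumptions, not the embodiment's actual radiomics pipeline):

```python
import numpy as np
from scipy import stats

def screen_features(X, y, var_eps=1e-8, p_thresh=0.05):
    """X: (n_samples, n_features) feature matrix; y: group labels.
    Stage 1: drop (near-)constant features (irrelevant features).
    Stage 2: drop features whose one-way ANOVA p-value exceeds p_thresh."""
    keep = np.var(X, axis=0) > var_eps           # variance filter
    selected = []
    for j in np.flatnonzero(keep):
        groups = [X[y == g, j] for g in np.unique(y)]
        _, p = stats.f_oneway(*groups)           # single-factor significance
        if p <= p_thresh:
            selected.append(j)
    return selected

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
X = np.column_stack([
    np.ones(40),                                       # constant -> dropped in stage 1
    rng.normal(0, 1, 40),                              # pure noise -> usually dropped in stage 2
    np.where(y == 0, 0.0, 3.0) + rng.normal(0, 0.3, 40),  # informative feature
])
print(screen_features(X, y))
```

The surviving column indices correspond to the "preliminarily screened relevant image features", which are then concatenated with the clinical information in step (3).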
It will be appreciated that the fused multi-target features are used as the input of the prediction model for evaluating the final bone age measurement value.
In one possible embodiment, performing the XGBoost regression analysis shown in fig. 7 on the fused multi-target features and clinical information to obtain the bone age measurement value includes:
as described above, a physician with high annual resources and experience labels the bone age reference value of each frame of ultrasonic image according to experience, inputs the bone age reference value and the fused multi-target features into a trained XGBoost prediction model, fits the fused multi-target features with the bone age reference value by using a regression analysis method, and obtains a bone age measurement value according to the fitting result. The XGboost prediction model is constructed in the prior art, and is not described in detail in the patent.
As shown in fig. 8, the present embodiment further provides a bone age measuring system based on ultrasound images, which includes an image acquisition and preprocessing module, a quality control module, a multi-target segmentation module, and a prediction module, wherein:
the image acquisition and preprocessing module is used for acquiring and preprocessing the ultrasonic image;
the quality control module is used for performing quality screening on the ultrasonic image by utilizing a trained quality control model based on deep learning;
the multi-target segmentation module is used for extracting a ossification center region and a cartilage region from the ultrasonic image qualified by quality screening by utilizing a trained multi-target segmentation model based on deep learning;
and the prediction module is used for performing multi-target feature fusion on the ossification center region, the cartilage region and the clinical information, and performing XGBoost regression analysis on the fused multi-target features and the clinical information to obtain a bone age measurement value.
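The data flow through the four modules can be sketched as a linear pipeline (all stage functions here are hypothetical stubs standing in for the trained models, purely to illustrate how the modules chain together):

```python
class BoneAgePipeline:
    """Chains acquisition/preprocessing -> quality control -> segmentation -> prediction."""

    def __init__(self, preprocess, quality_ok, segment, predict):
        self.preprocess = preprocess
        self.quality_ok = quality_ok
        self.segment = segment
        self.predict = predict

    def measure(self, raw_image, clinical_info):
        image = self.preprocess(raw_image)
        if not self.quality_ok(image):       # frames failing quality screening are rejected
            return None
        ossification, cartilage = self.segment(image)
        return self.predict(ossification, cartilage, clinical_info)

# Toy stubs standing in for the trained deep-learning and XGBoost models.
pipe = BoneAgePipeline(
    preprocess=lambda img: img,
    quality_ok=lambda img: sum(img) > 0,     # toy quality criterion
    segment=lambda img: (img[:1], img[1:]),  # toy "ossification/cartilage" split
    predict=lambda oss, car, info: 7.5,      # fixed toy bone age in years
)
print(pipe.measure([1, 2, 3], {"sex": "F", "age": 7}))  # 7.5
```

A frame that fails the quality criterion returns `None` here, mirroring how quality-rejected images do not proceed to the segmentation and prediction modules.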
It can be understood that the bone age measuring system based on ultrasound images provided by the present invention corresponds to the bone age measuring method provided in the foregoing embodiments, and for the relevant technical features of the system, reference may be made to those of the bone age measuring method, which are not repeated here.
Referring to fig. 9, fig. 9 is a schematic view of an embodiment of an electronic device according to an embodiment of the invention. As shown in fig. 9, the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the following steps:
collecting an ultrasonic image and preprocessing the ultrasonic image;
performing quality screening on the ultrasonic image by using a trained quality control model based on deep learning;
extracting ossification center information and cartilage region information from the ultrasonic image qualified by quality screening by using a trained multi-target segmentation model based on deep learning;
performing multi-target feature fusion on the ossification center information, the cartilage region information and the clinical information, and performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value.
Referring to fig. 10, fig. 10 is a schematic diagram of an embodiment of a computer-readable storage medium according to the present invention. As shown in fig. 10, the present embodiment provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the steps of:
collecting an ultrasonic image and preprocessing the ultrasonic image;
performing quality screening on the ultrasonic image by using a trained quality control model based on deep learning;
extracting ossification center information and cartilage region information from the ultrasonic image qualified by quality screening by using a trained multi-target segmentation model based on deep learning;
performing multi-target feature fusion on the ossification center information, the cartilage region information and the clinical information, and performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value.
According to the bone age measuring method and system provided by the invention, an ultrasound image of the radius at the wrist of the examinee is collected as the detection basis, avoiding the radiation exposure of traditional X-ray detection. Image preprocessing performs filtering and data augmentation to improve the robustness of each subsequent deep learning model; the quality control module screens the collected ultrasound image samples to improve the accuracy of subsequent detection and to evaluate the images rapidly; the multi-target segmentation module extracts the target regions to be detected, from which image features are extracted and fused into multi-target features; and the prediction model performs regression analysis on the fused multi-target features to obtain the bone age measurement value. Compared with traditional detection methods, this bone age measurement method offers high safety, simple operation and accurate measurement results.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the invention.
The above description is only an embodiment of the present invention, and it should be noted that any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the protection scope of the present invention, and the rest that is not described in detail is the prior art.

Claims (10)

1. A bone age measurement method, characterized by comprising the following steps:
step S1: collecting an ultrasonic image and preprocessing the ultrasonic image;
step S2: performing quality screening on the ultrasonic image by using a trained quality control model based on deep learning;
step S3: extracting a ossification center region and a cartilage region from the ultrasonic image qualified by quality screening by using a trained multi-target segmentation model based on deep learning;
step S4: performing multi-target feature fusion on the ossification center region, the cartilage region and the clinical information, and performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value.
2. The bone age measurement method of claim 1, wherein: in step S1, the preprocessing the ultrasound image includes:
1) denoising the acquired ultrasonic image by adopting a spatial mean filtering method;
2) flipping, scaling, rotating and/or adjusting the brightness of the denoised ultrasonic image, and retaining the ultrasonic image after each type of operation, so as to expand the number of ultrasonic images.
3. The bone age measurement method of claim 1, wherein: in step S2, the training process of the quality control model based on deep learning includes:
marking a corresponding quality grade reference value for each frame of ultrasonic image according to experience;
building a quality control model based on a convolutional neural network structure, and setting a loss function as a classification cross loss function; training a quality control model by taking the ultrasonic image and the quality grade reference value thereof as input to obtain a quality control model training weight file;
through the back propagation of the training process, the weight of each layer of the quality control model is continuously updated in a mode of minimizing a loss function, so that the output iteration of the quality control model approaches to a true value until the classification cross loss value between the predicted value of the forward propagation of the quality control model and the labeled quality grade reference value tends to be converged, and an optimized quality control model weight file is obtained and is used as the input of the quality control model.
4. The bone age measurement method of claim 1, 2 or 3, wherein: in step S3, the training process of the multi-target segmentation model based on deep learning includes:
labeling a ossification center reference region and a cartilage reference region on each frame of ultrasonic image qualified by quality screening according to experience;
building a multi-target segmentation model based on a convolutional neural network structure, and setting a loss function as a Dice coefficient; training the multi-target segmentation model by taking the ultrasonic image with qualified quality screening, the ossification center reference area and the cartilage reference area as input to obtain a multi-target segmentation model training weight file;
continuously updating the weight of each layer of the multi-target segmentation model by adopting a mode of minimizing a loss function through back propagation in a training process so that the output iteration of the multi-target segmentation model approaches to a true value until a prediction segmentation result of forward propagation of the multi-target segmentation model and a Dice coefficient between an annotated ossification center reference region and a cartilage reference region tend to converge to obtain an optimized multi-target segmentation model weight file; and the optimized multi-target segmentation model weight file is used as the input of the multi-target segmentation model.
5. The bone age measurement method of claim 1, 2 or 3, wherein: in step S4, performing multi-objective feature fusion on the ossification center region, the cartilage region and the clinical information, including:
respectively extracting high-flux image features of the ossification center area and the cartilage area;
eliminating the characteristic of a constant value by adopting a variance analysis method for the image characteristic so as to eliminate irrelevant characteristic; then, deleting redundant features through single-factor significance analysis to obtain relevant image features after primary screening;
fusing clinical information of the inspected object and the preliminarily screened related image characteristics to obtain fused multi-target characteristics;
wherein the clinical information includes at least gender, actual age, and family genetic history.
6. The bone age measurement method of claim 5, wherein: the image features at least comprise one or more of intensity features, shape features, texture features and wavelet features.
7. The bone age measurement method of claim 5, wherein: in step S4, the XGBoost regression analysis of the fused multi-target features to obtain a bone age measurement value includes:
marking a bone age reference value of each frame of ultrasonic image according to experience, inputting the bone age reference value and the fused multi-target characteristics into a trained XGboost prediction model, fitting the fused multi-target characteristics and the bone age reference value by using a regression analysis method, and obtaining a bone age measurement value according to a fitting result.
8. A system for implementing the bone age measuring method according to any one of claims 1 to 7, wherein: including image acquisition and preprocessing module, quality control module, multi-target segmentation module and prediction module, wherein:
the image acquisition and preprocessing module is used for acquiring and preprocessing the ultrasonic image;
the quality control module is used for performing quality screening on the ultrasonic image by utilizing a trained quality control model based on deep learning;
the multi-target segmentation module is used for extracting a ossification center region and a cartilage region from the ultrasonic image qualified by quality screening by utilizing a trained multi-target segmentation model based on deep learning;
and the prediction module is used for performing multi-target feature fusion on the ossification center region, the cartilage region and the clinical information, and performing XGBoost regression analysis on the fused multi-target features to obtain a bone age measurement value.
9. An electronic device, characterized in that: the method comprises the following steps:
a memory for storing a computer software program;
a processor for reading and executing the computer software program to implement the bone age measurement method according to any one of claims 1 to 7.
10. A computer-readable storage medium characterized by: the computer readable storage medium stores therein a computer software program for implementing the bone age measuring method according to any one of claims 1 to 7.
CN202210097866.3A 2022-01-27 2022-01-27 Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium Pending CN114429468A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210097866.3A CN114429468A (en) 2022-01-27 2022-01-27 Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210097866.3A CN114429468A (en) 2022-01-27 2022-01-27 Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN114429468A true CN114429468A (en) 2022-05-03

Family

ID=81313087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210097866.3A Pending CN114429468A (en) 2022-01-27 2022-01-27 Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN114429468A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116050955A (en) * 2023-03-31 2023-05-02 杭州百子尖科技股份有限公司 Digital twinning-based carbon dioxide emission statistics method, device and equipment
CN116050955B (en) * 2023-03-31 2023-07-28 杭州百子尖科技股份有限公司 Digital twinning-based carbon dioxide emission statistics method, device and equipment

Similar Documents

Publication Publication Date Title
Yun et al. Improvement of fully automated airway segmentation on volumetric computed tomographic images using a 2.5 dimensional convolutional neural net
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
CN107730497B (en) Intravascular plaque attribute analysis method based on deep migration learning
CN107644420B (en) Blood vessel image segmentation method based on centerline extraction and nuclear magnetic resonance imaging system
CN105640577A (en) Method and system automatically detecting local lesion in radiographic image
CN110503635B (en) Hand bone X-ray film bone age assessment method based on heterogeneous data fusion network
CN111862020B (en) Method and device for predicting physiological age of anterior ocular segment, server and storage medium
CN112819821B (en) Cell nucleus image detection method
CN112950643A (en) New coronary pneumonia focus segmentation method based on feature fusion deep supervision U-Net
Hennessey et al. Artificial intelligence in veterinary diagnostic imaging: A literature review
CN113888532A (en) Medical image analysis method and device based on flat scanning CT data
CN114882014B (en) Dual-model-based fundus image quality evaluation method and device and related medium
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN115512110A (en) Medical image tumor segmentation method related to cross-modal attention mechanism
Shamrat et al. Analysing most efficient deep learning model to detect COVID-19 from computer tomography images
CN114429468A (en) Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium
Manikandan et al. Segmentation and Detection of Pneumothorax using Deep Learning
CN110555846A (en) full-automatic bone age assessment method based on convolutional neural network
Singh et al. Good view frames from ultrasonography (USG) video containing ONS diameter using state-of-the-art deep learning architectures
CN117338234A (en) Diopter and vision joint detection method
CN112750110A (en) Evaluation system for evaluating lung lesion based on neural network and related products
CN110503636B (en) Parameter adjustment method, focus prediction method, parameter adjustment device and electronic equipment
Salsabili et al. Fully automated estimation of the mean linear intercept in histopathology images of mouse lung tissue
CN114419401B (en) Method and device for detecting and identifying leucocytes, computer storage medium and electronic equipment
Galveia et al. Computer aided diagnosis in ophthalmology: Deep learning applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination