CN117876372B - Bone quality identification model training method based on label-free nonlinear multi-modal imaging - Google Patents

Bone quality identification model training method based on label-free nonlinear multi-modal imaging

Info

Publication number
CN117876372B
CN117876372B (application CN202410275402.6A)
Authority
CN
China
Prior art keywords
channel
bone
image data
sub
image
Prior art date
Legal status
Active
Application number
CN202410275402.6A
Other languages
Chinese (zh)
Other versions
CN117876372A (en)
Inventor
李婷 (Li Ting)
蒲江波 (Pu Jiangbo)
张博文 (Zhang Bowen)
吉祥 (Ji Xiang)
Current Assignee
Institute of Biomedical Engineering of CAMS and PUMC
Original Assignee
Institute of Biomedical Engineering of CAMS and PUMC
Priority date
Filing date
Publication date
Application filed by Institute of Biomedical Engineering of CAMS and PUMC filed Critical Institute of Biomedical Engineering of CAMS and PUMC
Priority to CN202410275402.6A
Publication of CN117876372A
Application granted
Publication of CN117876372B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a bone quality identification model training method based on label-free nonlinear multi-modal imaging, comprising the following steps: S1, collecting image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel of a sample bone; S2, dividing the image data obtained in step S1 into a plurality of sub-region images; S3, obtaining texture features of the sub-region images; S4, training with the texture features to obtain a model identifying T2DM bone quality change. The beneficial effects of the invention are: more structural and material-distribution characteristics within bone tissue can be observed, information of more dimensions is provided for identifying bone quality reduction concurrent with T2DM, and the one-sidedness of bone mineral density evaluation indices is compensated.

Description

Bone quality identification model training method based on label-free nonlinear multi-modal imaging
Technical Field
The invention belongs to the technical field of identifying T2DM bone quality, and particularly relates to a bone quality identification model training method based on label-free nonlinear multi-modal imaging.
Background
Type 2 diabetes mellitus (T2DM) is a serious chronic disease endangering human health; its associated complications in particular are important causes of reduced quality of life and increased mortality in patients.
Fracture as a complication of T2DM has gradually attracted clinical attention and has become an important clinical problem. Current techniques for assessing fracture risk are based on bone mineral density; among them, the Fracture Risk Assessment Tool (FRAX) model proposed by the World Health Organization, which combines clinical characteristics with bone mineral density, is a commonly used assessment model. However, existing research has found no direct link between the bone quality reduction concurrent with T2DM and fractures caused by reduced bone mineral density; the bone mineral density of some T2DM patients is even higher than that of normal people, and existing assessment tools generally greatly underestimate the fracture risk of T2DM patients.
Disclosure of Invention
In view of the above, the present invention aims to propose a bone quality identification model training method based on label-free nonlinear multi-modal imaging, in order to solve at least part of the above problems.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
The invention provides a bone quality identification model training method based on label-free nonlinear multi-modal imaging, comprising the following steps:
S1, acquiring image data of a hydroxyapatite channel, a lipid channel, a protein channel, an SHG channel and a TPEF channel of a sample bone;
S2, dividing the image data obtained in the step S1 into a plurality of sub-region images;
s3, obtaining texture features of the sub-region images;
S4, training with the texture features to obtain a model identifying T2DM bone quality change.
Further, the step S1 includes the following steps:
S11, acquiring hydroxyapatite channel image data at a Raman shift of 959 cm⁻¹ using stimulated Raman imaging;
collecting lipid channel image data at a Raman shift of 2850 cm⁻¹;
collecting protein channel image data at a Raman shift of 2930 cm⁻¹;
s12, acquiring SHG channel image data by using a second harmonic imaging method;
s13, acquiring TPEF channel image data by using two-photon excitation fluorescence microscopy imaging;
In S11, the separation formula of the protein channel image data and the lipid channel image data is:

X_2930cm⁻¹ = a₁·X_protein + b₁·X_lipid
X_2850cm⁻¹ = a₂·X_protein + b₂·X_lipid

wherein X_2930cm⁻¹ and X_2850cm⁻¹ represent the signal intensity distribution matrices of the sample measured at Raman shifts of 2930 cm⁻¹ and 2850 cm⁻¹ respectively, X_lipid and X_protein represent the signal intensity distribution matrices of the lipids and proteins in the measured sample respectively, and the Raman spectra of protein and lipid standards are measured with a Raman imaging device to obtain the values of a₁, a₂, b₁, b₂.
Further, the step S1 further includes the following steps:
S14, identifying the structural boundary of the sample bone based on the TPEF signal and storing the boundary as a mask, denoted osteon_mask;
S15, segmenting the same bone unit region image data from each channel image with osteon_mask and applying image filtering;
S16, converting the original-coordinate images filtered in S15 into polar-coordinate images with the geometric center of the Haversian canal in the bone unit as the origin, generating the final image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel.
Further, in the step S3, texture features of the sub-region images are obtained from first-order statistics or a gray-level co-occurrence matrix;
The steps for obtaining texture features of a sub-region image based on first-order statistics are as follows:
a31, converting the sub-region image into an 8-bit gray scale image;
A32, converting the 256 gray levels into 20 equidistant gray levels and counting the number of pixels at each gray level;
A33, sequentially calculating the mean, standard deviation, skewness, kurtosis, consistency and entropy of the statistics obtained in A32, extracting a one-dimensional feature vector from each sub-region image;
The method for acquiring the texture features of the sub-region image based on the gray level co-occurrence matrix comprises the following steps:
b31, converting the sub-region image into an 8-bit gray scale image;
B32, converting the 256 gray levels into 32 equidistant gray levels to obtain a 32-level gray image;
B33, calculating 4 gray-level co-occurrence matrices of the 32-level gray image with a pixel distance of 4 at angles of 0°, 45°, 90° and 135° respectively;
B34, calculating the angular second moment, contrast, entropy, consistency and autocorrelation from the gray-level co-occurrence matrices, and averaging the feature values obtained from the 32-level gray image over the 4 angles to obtain a one-dimensional feature vector.
Further, the step S4 includes the following steps:
A41, transversely splicing texture features of different channels of the same sample bone to obtain a multidimensional feature matrix;
a42, carrying out normalization processing on the multi-dimensional feature matrix;
A43, reducing the dimension of the normalized multi-dimensional feature matrix by the PCA method: sorting the obtained principal components in descending order of their contribution to the total variance, selecting the fewest principal components such that the cumulative variance contribution rate exceeds a given threshold, and projecting the original feature matrix onto the selected principal components to obtain the dimension-reduced feature matrix;
A44, using a random forest model as the classifier, inputting the dimension-reduced feature matrix of the test set into the K trained classifiers, and selecting the model that performs best on the test set as the model for identifying T2DM bone quality change.
Further, the step S4 includes the following steps:
B41, inputting the texture feature matrices of the several channels of the sub-region images separately into the first-layer classifiers of a Stacking model;
B42, obtaining the first-order prediction probabilities of the training set through K-fold cross-validation; these form a 5×M1-dimensional first-order prediction probability matrix, where M1 is the number of sub-region images in the input training set;
B43, obtaining the first-order prediction probabilities of the test set through K-fold cross-validation; these form a 5×M2-dimensional first-order prediction probability matrix, where M2 is the number of sub-region images in the input test set and each entry is the mean of the prediction probabilities of the K cross-validation models;
B44, feeding the 5×M1-dimensional first-order prediction probability matrix as training-set features into the second-layer classifier and completing its training;
inputting the 5×M2-dimensional first-order prediction probability matrix of the test set into the second-layer classifier to obtain the final prediction result;
B45, selecting the combination of models for which the second-layer classifier performs best on the test set as the model for identifying T2DM bone quality change.
In a second aspect, the present invention provides a bone quality recognition device comprising:
An acquisition module configured to acquire image data of a hydroxyapatite channel, a lipid channel, a protein channel, an SHG channel, a TPEF channel of a sample bone;
the data processing module is configured to divide the image data acquired by the acquisition module into a plurality of sub-region images and acquire texture features of the sub-region images;
A prediction module configured to obtain, by training with texture features, a model identifying T2DM bone quality change;
and processing the image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel acquired in real time to obtain real-time texture features, inputting the real-time texture features into the prediction module to obtain a plurality of classification probabilities, and taking the mean of the classification probabilities as the final output classification probability.
A third aspect of the invention provides an electronic device comprising a processor and a memory communicatively coupled to the processor for storing instructions executable by the processor for performing the method of the first aspect.
A fourth aspect of the invention provides a server comprising at least one processor, and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor to cause the at least one processor to perform the method of the first aspect.
A fifth aspect of the invention provides a computer readable storage medium storing a computer program which when executed by a processor implements the method of the first aspect.
Compared with the prior art, the bone quality identification model training method based on the label-free nonlinear multi-modal imaging has the following beneficial effects:
(1) The bone quality identification model training method based on label-free nonlinear multi-modal imaging can observe more structural and material-distribution characteristics within bone tissue, provides information of more dimensions for identifying bone quality reduction concurrent with T2DM, and compensates for the one-sidedness of bone mineral density evaluation indices.
(2) According to the label-free nonlinear multi-modal imaging-based bone quality identification model training method, each channel image is divided into a plurality of sub-region images and the sub-region images are augmented to expand the training set, improving the prediction accuracy of the model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
fig. 1 is a schematic flow chart of a method for training a bone quality identification model based on label-free nonlinear multi-modal imaging according to an embodiment of the invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention will be described in detail below with reference to the drawings in connection with embodiments.
Embodiment one:
as shown in fig. 1, the method for training the bone quality recognition model based on label-free nonlinear multi-modal imaging comprises the following steps:
S1, acquiring image data of a hydroxyapatite channel, a lipid channel, a protein channel, an SHG channel and a TPEF channel of a sample bone;
S2, dividing the image data obtained in the step S1 into a plurality of sub-region images;
s3, obtaining texture features of the sub-region images;
S4, training with the texture features to obtain a model identifying T2DM bone quality change.
The step S1 comprises the following steps:
A small amount of the bone tissue to be examined is removed from a human body or an experimental animal by a professional. In the example study, to ensure research value, bone tissue was taken from the fourth and fifth lumbar vertebrae; in practical application bone tissue from other sites may be selected, but the machine learning model used later must then be trained on data of the corresponding structure. After removal, the bone tissue is ground with an angle grinder until its thickness, measured with a vernier caliper, is about 100 μm. The ground bone tissue is placed on a glass slide and sealed to prepare a bone slice sample, which can be stored long-term in a refrigerator at 4 °C.
The bone slice samples were imaged multi-modally with a multi-modal label-free nonlinear imaging system comprising stimulated Raman scattering (SRS) imaging, two-photon excited fluorescence (TPEF) microscopy and second harmonic generation (SHG) imaging.
S11, acquiring hydroxyapatite channel image data at a Raman shift of 959 cm⁻¹ using stimulated Raman scattering (SRS) imaging;
collecting lipid channel image data at a Raman shift of 2850 cm⁻¹;
collecting protein channel image data at a Raman shift of 2930 cm⁻¹;
S12, acquiring SHG channel image data using second harmonic generation imaging; SHG imaging captures type I collagen fibers.
S13, acquiring TPEF channel image data using two-photon excited fluorescence microscopy; TPEF images autofluorescent substances in bone tissue.
The image data are obtained by stitching, as follows:
A1, finding the coordinates of the position of a mature bone unit in the bright field;
A2, imaging the bone unit at those coordinates completely. Because the microscopic imaging range is limited and cannot cover a whole bone unit, this is done by sequential scanning and post-processing stitching. In sequential scanning, each block of the bone unit is imaged in coordinate order, and two adjacent fields of view require a redundant overlap of X μm to ensure stitching accuracy.
A3, performing the three-channel SRS imaging first, then the TPEF and SHG imaging, keeping the position coordinates and redundant width consistent.
A4, after the SRS, TPEF and SHG scans are finished, performing, within each imaging mode, linear interpolation based on the linear pixel relation of the redundant parts of two adjacent fields of view, completing the stitching of the bone unit structure.
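The interpolation across the redundant overlap in A4 can be sketched as a linear weight ramp blending the overlapping columns of two adjacent fields of view. This is an illustrative sketch, not the patent's implementation: the function name and the assumption of a purely horizontal overlap are mine.

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Stitch two horizontally adjacent fields of view whose last/first
    `overlap` columns image the same region, using a linear weight ramp
    (full weight on `left` at the start of the overlap, full weight on
    `right` at its end)."""
    w = np.linspace(1.0, 0.0, overlap)              # weight for the left tile
    blended = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.concatenate(
        [left[:, :-overlap], blended, right[:, overlap:]], axis=1)
```

With a 2×5 tile of ones and a 2×5 tile of zeros and an overlap of 3, the stitched row ramps from 1 down to 0 across the overlap.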
Since the Raman spectra of protein and lipid standards overlap substantially at both 2850 cm⁻¹ and 2930 cm⁻¹, linear decomposition is used to separate the protein and lipid channels.
In S11, the separation formula of the protein channel image data and the lipid channel image data is:

X_2930cm⁻¹ = a₁·X_protein + b₁·X_lipid
X_2850cm⁻¹ = a₂·X_protein + b₂·X_lipid

wherein X_2930cm⁻¹ and X_2850cm⁻¹ represent the signal intensity distribution matrices of the sample measured at Raman shifts of 2930 cm⁻¹ and 2850 cm⁻¹ respectively, X_lipid and X_protein represent the signal intensity distribution matrices of the lipids and proteins in the measured sample respectively, and the Raman spectra of protein and lipid standards are measured with a Raman imaging device to obtain the values of a₁, a₂, b₁, b₂.
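The linear decomposition amounts to solving a 2×2 linear system per pixel once the standard coefficients are known. A minimal sketch follows; the function name and the pairing of the coefficients with the two Raman shifts are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def unmix_protein_lipid(x_2930, x_2850, a1, b1, a2, b2):
    """Invert the assumed per-pixel system
         x_2930 = a1 * X_protein + b1 * X_lipid
         x_2850 = a2 * X_protein + b2 * X_lipid
    where a1, b1, a2, b2 come from Raman spectra of protein/lipid standards."""
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    inv = np.linalg.inv(A)                       # 2x2 inverse, applied per pixel
    x_protein = inv[0, 0] * x_2930 + inv[0, 1] * x_2850
    x_lipid = inv[1, 0] * x_2930 + inv[1, 1] * x_2850
    return x_protein, x_lipid
```

Forward-mixing two known maps and unmixing them recovers the originals, which is a quick sanity check on the inversion.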
The step S1 further comprises the following steps:
S14, identifying the structural boundary of the sample bone based on the TPEF signal and storing the boundary as a mask, denoted osteon_mask; the mask is a binary image in which white pixels represent the region of interest and black pixels the background. Converting the boundary into a mask facilitates subsequent segmentation and processing of the bone unit region.
S15, segmenting the same bone unit region image data from each channel image (the five channel images of the same sample bone) with osteon_mask and applying image filtering; applying the mask to the original image extracts the region of the image corresponding to the bone unit, achieving localization and segmentation of the bone unit. Identifying the sample bone with the TPEF-based osteon_mask makes the structural boundary clearer, so that the segmentation and processing of the five channel images of the same sample bone are more accurate, ensuring the accuracy of subsequent analysis.
S16, converting the original-coordinate images filtered in S15 into polar-coordinate images with the geometric center of the Haversian canal in the bone unit as the origin, generating the final image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel.
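The Cartesian-to-polar conversion in S16 can be sketched as resampling the image onto an (r, θ) grid centred on the Haversian canal. The grid sizes, nearest-neighbour lookup and function name below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def to_polar(img, center, n_r=64, n_theta=360):
    """Resample `img` onto an (r, theta) grid centred at `center` (row, col),
    using nearest-neighbour lookup; the radius spans from the centre to the
    nearest image edge."""
    cy, cx = center
    r_max = min(cy, cx, img.shape[0] - 1 - cy, img.shape[1] - 1 - cx)
    r = np.linspace(0, r_max, n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]                              # shape (n_r, n_theta)
```

In the polar image, concentric lamellae around the canal become horizontal bands, so the axial division into 8 sub-regions reduces to slicing equal column ranges.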
The five-channel image in polar coordinates is divided into 8 sub-region images along the axial direction, i.e. one bone unit yields 40 sub-region images across the five channels. There were 224 images per channel (88 from T2DM, 136 from controls); the images were labeled as T2DM or not, the images serving as the independent variables and the labels as the dependent variable during training.
Each sub-region image is augmented three-fold to expand the training set. The augmentation techniques include flipping (up-down, left-right), rotating (90°, 180°, 270°), shifting, adding Gaussian or salt-and-pepper noise, and resizing (halving the height). Dividing the channel image into a plurality of sub-region images and augmenting them to expand the training set improves the prediction accuracy of the model.
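The three-fold augmentation can be sketched by drawing random transformations from the listed families. The function name, the noise level and the restriction to shape-preserving operations (resizing is omitted here) are assumptions for illustration.

```python
import numpy as np

def augment(img, rng):
    """Return one randomly transformed copy of a sub-region image:
    flip, 90-degree rotation, circular shift, or additive Gaussian noise."""
    ops = [
        lambda x: np.flipud(x),                           # up-down flip
        lambda x: np.fliplr(x),                           # left-right flip
        lambda x: np.rot90(x, k=rng.integers(1, 4)),      # 90/180/270 rotation
        lambda x: np.roll(x, shift=rng.integers(1, 8), axis=1),  # shift
        lambda x: x + rng.normal(0.0, 5.0, size=x.shape), # Gaussian noise
    ]
    return ops[rng.integers(len(ops))](img)

rng = np.random.default_rng(0)
augmented = [augment(np.zeros((8, 8)), rng) for _ in range(3)]  # 3x expansion
```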
In the step S3, texture features of the sub-region images are obtained from first-order statistics or a gray-level co-occurrence matrix;
In some embodiments, the steps for obtaining texture features of a sub-region image based on first-order statistics are as follows:
A31, converting the sub-region image into an 8-bit gray scale image; the image is converted to a gray format in which each pixel is represented by an 8-bit integer (0-255) representing its intensity level.
A32, converting 256 gray scales into 20 equidistant gray scales, and counting the number of pixels under each gray scale; reducing the number of gray levels from 256 to 20 makes the image easier to process while preserving the overall intensity distribution.
A33, sequentially calculating the mean, standard deviation, skewness, kurtosis, consistency and entropy of the statistics obtained in A32, extracting a one-dimensional feature vector from each sub-region image; this feature vector summarizes the intensity distribution and texture characteristics of the sub-region image.
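Steps A31-A33 can be sketched directly from the 20-level histogram. "Consistency" is interpreted here as histogram uniformity (the sum of squared bin probabilities); that interpretation and the function name are assumptions.

```python
import numpy as np

def first_order_features(gray_u8, n_levels=20):
    """Six first-order statistics from an n_levels-bin histogram of an
    8-bit gray image: mean, std, skewness, kurtosis, uniformity, entropy."""
    levels = (gray_u8.astype(float) * n_levels / 256).astype(int)  # 0..19
    hist = np.bincount(levels.ravel(), minlength=n_levels)
    p = hist / hist.sum()                        # probability per gray level
    i = np.arange(n_levels)
    mean = (i * p).sum()
    std = np.sqrt(((i - mean) ** 2 * p).sum())
    skew = ((i - mean) ** 3 * p).sum() / (std ** 3 + 1e-12)
    kurt = ((i - mean) ** 4 * p).sum() / (std ** 4 + 1e-12)
    uniformity = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([mean, std, skew, kurt, uniformity, entropy])
```

A perfectly flat image has zero standard deviation and entropy and maximal uniformity, which makes a convenient sanity check.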
In other embodiments, the step of acquiring texture features of the sub-region image based on the gray level co-occurrence matrix is as follows:
b31, converting the sub-region image into an 8-bit gray scale image;
B32, converting the 256 gray levels into 32 equidistant gray levels to obtain a 32-level gray image; the 256 gray levels of the 8-bit gray image are divided equally into 32 levels to reduce data complexity.
B33, calculating 4 gray-level co-occurrence matrices of the 32-level gray image with a pixel distance of 4 at angles of 0°, 45°, 90° and 135° respectively; the co-occurrence relation between pixel values is computed at each angle, generating the corresponding gray-level co-occurrence matrix.
B34, calculating the angular second moment, contrast, entropy, consistency and autocorrelation from the gray-level co-occurrence matrices, and averaging the feature values obtained from the 32-level gray image over the 4 angles to obtain a one-dimensional feature vector. The gray-level co-occurrence matrix describes the relation between pixels in the image; computing and averaging the feature values at different angles yields a more comprehensive and robust texture feature representation.
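Steps B31-B34 can be sketched with a plain numpy GLCM. Two interpretive assumptions: "consistency" is taken as homogeneity (inverse difference moment), and autocorrelation as Σ i·j·p(i,j); the single-direction (non-symmetric) counting and the function name are also mine.

```python
import numpy as np

def glcm_features(gray_u8, n_levels=32, dist=4):
    """GLCM texture features averaged over angles 0/45/90/135 degrees:
    angular second moment, contrast, entropy, homogeneity, autocorrelation."""
    q = (gray_u8.astype(float) * n_levels / 256).astype(int)   # 32 levels
    offsets = [(0, dist), (-dist, dist), (-dist, 0), (-dist, -dist)]
    h, w = q.shape
    feats = []
    for dy, dx in offsets:
        glcm = np.zeros((n_levels, n_levels))
        for y in range(h):
            for x in range(w):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w:
                    glcm[q[y, x], q[y2, x2]] += 1
        p = glcm / glcm.sum()
        i, j = np.indices(p.shape)
        feats.append([
            (p ** 2).sum(),                                # angular 2nd moment
            (((i - j) ** 2) * p).sum(),                    # contrast
            -(p[p > 0] * np.log2(p[p > 0])).sum(),         # entropy
            (p / (1.0 + np.abs(i - j))).sum(),             # homogeneity
            (i * j * p).sum(),                             # autocorrelation
        ])
    return np.mean(feats, axis=0)                # average over the 4 angles
```

On a constant image the co-occurrence mass collapses onto one diagonal cell, so ASM and homogeneity are 1 while contrast and entropy vanish.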
In some embodiments, the S4 includes the steps of:
A41, transversely splicing texture features of different channels of the same sample bone to obtain a multidimensional feature matrix;
a42, carrying out normalization processing on the multi-dimensional feature matrix;
A43, reducing the dimension of the normalized multi-dimensional feature matrix by the PCA method: sorting the obtained principal components in descending order of their contribution to the total variance, selecting the fewest principal components such that the cumulative variance contribution rate exceeds a given threshold, and projecting the original feature matrix onto the selected principal components to obtain the dimension-reduced feature matrix;
A44, using a random forest model as the classifier, inputting the dimension-reduced feature matrix of the test set into the K trained classifiers, and selecting the model that performs best on the test set as the model for identifying T2DM bone quality change.
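The normalization and PCA of A42-A43 can be sketched with an SVD, keeping the fewest components whose cumulative variance ratio exceeds the threshold. The function name and the 0.95 default are assumptions; the random-forest step of A44 is omitted here and would sit on top of the returned matrix.

```python
import numpy as np

def pca_reduce(X, var_threshold=0.95):
    """Z-score normalise the stitched feature matrix (rows = sub-region
    images, columns = features), then keep the fewest principal components
    whose cumulative variance contribution exceeds `var_threshold`."""
    Xn = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    U, s, Vt = np.linalg.svd(Xn, full_matrices=False)
    var_ratio = s ** 2 / (s ** 2).sum()          # sorted descending by SVD
    k = int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)
    return Xn @ Vt[:k].T                         # projected feature matrix
```

A rank-one feature matrix collapses to a single principal component, so the output has one column.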
In other embodiments, the step S4 comprises the steps of:
B41, inputting the texture feature matrices of the several channels of the sub-region images separately into the first-layer classifiers of a Stacking model;
B42, obtaining the first-order prediction probabilities of the training set through K-fold cross-validation; these form a 5×M1-dimensional first-order prediction probability matrix, where M1 is the number of sub-region images in the input training set;
B43, obtaining the first-order prediction probabilities of the test set through K-fold cross-validation; these form a 5×M2-dimensional first-order prediction probability matrix, where M2 is the number of sub-region images in the input test set and each entry is the mean of the prediction probabilities of the K cross-validation models;
B44, feeding the 5×M1-dimensional first-order prediction probability matrix as training-set features into the second-layer classifier and completing its training;
inputting the 5×M2-dimensional first-order prediction probability matrix of the test set into the second-layer classifier to obtain the final prediction result;
B45, selecting the combination of models for which the second-layer classifier performs best on the test set as the model for identifying T2DM bone quality change.
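The core of B42 is producing out-of-fold probabilities for each channel's first-layer classifier; stacking the five channels' vectors then gives the 5×M1 matrix fed to the second layer. The sketch below is model-agnostic (the `train_fn` interface, fold construction and function name are assumptions, not the patent's API).

```python
import numpy as np

def oof_probabilities(train_fn, X, y, k=5, seed=0):
    """Out-of-fold positive-class probabilities for one channel's first-layer
    classifier via K-fold splitting. `train_fn(Xtr, ytr)` must return a
    callable mapping samples to probabilities (any model API works)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    probs = np.empty(len(X))
    for f in folds:
        tr = np.setdiff1d(idx, f)                 # train on the other folds
        model = train_fn(X[tr], y[tr])
        probs[f] = model(X[f])                    # predict the held-out fold
    return probs

# Stacking: np.vstack of one such vector per channel -> 5 x M1 matrix.
```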
Image feature extraction and machine learning algorithms are used to assist in identifying bone quality reduction concurrent with T2DM, and the type of bone tissue lesion caused by T2DM can be judged from the resulting feature importance ranking. Compared with a single bone mineral density evaluation, this technique can observe more structural and material-distribution characteristics within bone tissue, provides information of more dimensions for identifying bone quality reduction concurrent with T2DM, compensates for the one-sidedness of bone mineral density evaluation indices, and is of great significance for researching therapeutic targets for T2DM fracture.
A bone quality identification device, comprising:
An acquisition module configured to acquire image data of a hydroxyapatite channel, a lipid channel, a protein channel, an SHG channel, a TPEF channel of a sample bone;
the data processing module is configured to divide the image data acquired by the acquisition module into a plurality of sub-region images and acquire texture features of the sub-region images;
A prediction module configured to obtain, by training with texture features, a model identifying T2DM bone quality change;
Processing the image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel acquired in real time to obtain real-time texture features, inputting the real-time texture features into the prediction module to obtain a plurality of classification probabilities (each sub-region image produces one classification probability), and taking the mean of the classification probabilities as the final output classification probability;
When a sub-region image is judged to have low bone quality its classification probability is 1, and when judged to have normal bone quality it is 0; the classification probabilities of the eight sub-region images are summed and divided by 8 to obtain the output classification probability, a higher value indicating lower bone quality. Dividing into sub-region images improves detection accuracy; in other embodiments, the sub-region images at different positions carry different weight coefficients, positively correlated with the amount of sample surface contained in each sub-region image.
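The output score described above is a plain (optionally weighted) mean over the eight sub-region probabilities. A minimal sketch, with the function name and the weighted variant as illustrative assumptions:

```python
def bone_quality_score(subregion_probs, weights=None):
    """Average the per-sub-region classification probabilities
    (1 = low bone quality, 0 = normal). Optional weights let sub-regions
    containing more sample surface count more, as the text suggests."""
    if weights is None:
        return sum(subregion_probs) / len(subregion_probs)
    return sum(p * w for p, w in zip(subregion_probs, weights)) / sum(weights)

score = bone_quality_score([1, 1, 0, 0, 1, 0, 0, 1])  # 4 of 8 flagged -> 0.5
```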
The feature fusion approach with a Stacking-like model at its core gives the conditional classification accuracy of the features of the different channels, so the material source at the core of the bone quality change can be known and different types of bone quality change can be judged. The conditional classification accuracy is the proportion, among the samples correctly predicted by the second-order model, of samples also correctly predicted by the first-order model from the single-channel features; the T2DM-related bone quality change is inferred to derive from the channel with the highest proportion.
For example, a higher conditional classification accuracy of the first-order classifier corresponding to the TPEF channel indicates that the T2DM-related bone quality change may derive from advanced glycation end products, while a higher conditional classification accuracy of the first-order classifier corresponding to the lipid channel indicates that it may derive from lipid metabolism, and so on.
Embodiment two:
an electronic device comprising a processor and a memory communicatively coupled to the processor for storing instructions executable by the processor for performing the method of the first embodiment.
Embodiment III:
A server comprising at least one processor and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor to cause the at least one processor to perform the method of embodiment one.
Embodiment four:
A computer readable storage medium storing a computer program which when executed by a processor performs the method of embodiment one.
Those of ordinary skill in the art will appreciate that the elements and method steps of each example described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the elements and steps of each example have been described generally in terms of functionality in the foregoing description to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and systems may be implemented in other ways. For example, the above-described division of units is merely a logical function division, and there may be another division manner when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. The units may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present application.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention and are intended to fall within the scope of the appended claims and description.
The foregoing description covers only preferred embodiments of the invention and is not intended to be limiting; any modifications, equivalents, alternatives, and improvements made within the spirit and scope of the invention are intended to be covered.

Claims (9)

1. A bone quality identification model training method based on label-free nonlinear multi-modal imaging, characterized by comprising the following steps:
S1, acquiring image data of a hydroxyapatite channel, a lipid channel, a protein channel, an SHG channel and a TPEF channel of a sample bone;
S2, dividing the image data obtained in step S1 into a plurality of sub-region images;
S3, obtaining texture features of the sub-region images;
S4, training with the texture features to obtain a model for identifying T2DM bone quality changes;
wherein step S1 further comprises the following steps:
S14, identifying the structural boundary of the sample bone based on the TPEF signal and saving the boundary as a mask, denoted osteon_mask;
S15, segmenting the image data of the same bone unit region from each channel image using osteon_mask, and performing image filtering;
S16, converting each original-coordinate image filtered in S15 into a polar-coordinate image with the geometric center of the Haversian canal within the bone unit as the origin, generating the final image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel.
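The osteon-centred polar conversion of step S16 can be illustrated with a minimal numpy sketch. This is not the patented implementation: the grid sizes, the nearest-neighbour resampling, and the `to_polar` helper are hypothetical choices assumed for illustration.

```python
import numpy as np

def to_polar(image, center, n_r=64, n_theta=128):
    """Resample a 2-D image onto an (r, theta) grid around `center`.

    Minimal nearest-neighbour polar unwrap; `center` is the (row, col)
    of the Haversian canal's geometric centre (hypothetical helper).
    """
    rows, cols = image.shape
    cy, cx = center
    # Largest radius that stays inside the image on all sides.
    r_max = min(cy, cx, rows - 1 - cy, cols - 1 - cx)
    rs = np.linspace(0, r_max, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rs, thetas, indexing="ij")
    src_y = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, rows - 1)
    src_x = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, cols - 1)
    return image[src_y, src_x]

# Toy image: a single bright pixel at the assumed canal centre.
img = np.zeros((65, 65))
img[32, 32] = 1.0
polar = to_polar(img, center=(32, 32))
```

Row 0 of the result (radius 0) maps every angle back to the centre pixel, which is a quick sanity check on the unwrap.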
2. The bone quality identification model training method based on label-free nonlinear multi-modal imaging according to claim 1, characterized in that S1 comprises the following steps:
S11, acquiring hydroxyapatite channel image data at a Raman shift of 959 cm⁻¹ using stimulated Raman imaging;
acquiring lipid channel image data at a Raman shift of 2850 cm⁻¹;
acquiring protein channel image data at a Raman shift of 2930 cm⁻¹;
S12, acquiring SHG channel image data using second harmonic imaging;
S13, acquiring TPEF channel image data using two-photon excitation fluorescence microscopy imaging;
In S11, the separation formulas for the protein channel image data and the lipid channel image data are:
X_2930cm⁻¹ = a1·X_protein + b1·X_lipid
X_2850cm⁻¹ = a2·X_protein + b2·X_lipid
wherein X_2930cm⁻¹ and X_2850cm⁻¹ denote the signal intensity distribution matrices of the measured sample at Raman shifts of 2930 cm⁻¹ and 2850 cm⁻¹, respectively; X_lipid and X_protein denote the signal intensity distribution matrices of the lipids and proteins in the measured sample, respectively; and the Raman spectra of protein and lipid standards are measured with a Raman imaging device to obtain the values of a1, a2, b1, b2.
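The two-channel separation described above amounts to solving a 2×2 linear system per pixel. The sketch below assumes a forward mixing model of this form; the coefficient values `a1, b1, a2, b2` are made-up stand-ins for values that would be measured from protein and lipid standards.

```python
import numpy as np

# Hypothetical mixing coefficients, as would be obtained by measuring
# pure protein and lipid standards on the Raman imaging device.
a1, b1 = 0.9, 0.3   # protein / lipid weights at 2930 cm^-1
a2, b2 = 0.2, 0.8   # protein / lipid weights at 2850 cm^-1
M = np.array([[a1, b1],
              [a2, b2]])

def unmix(x_2930, x_2850):
    """Solve the 2x2 system per pixel to recover protein and lipid maps."""
    mixed = np.stack([x_2930.ravel(), x_2850.ravel()])  # shape (2, n_pixels)
    pure = np.linalg.solve(M, mixed)
    return pure[0].reshape(x_2930.shape), pure[1].reshape(x_2930.shape)

# Round-trip check on synthetic pure maps.
protein = np.full((4, 4), 2.0)
lipid = np.full((4, 4), 1.0)
x_2930 = a1 * protein + b1 * lipid
x_2850 = a2 * protein + b2 * lipid
p, l = unmix(x_2930, x_2850)
```

Mixing the known pure maps and unmixing them again recovers the originals, which verifies the inversion.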
3. The bone quality identification model training method based on label-free nonlinear multi-modal imaging according to claim 1, characterized in that in step S3, the texture features of the sub-region images are obtained from first-order statistics or from the gray-level co-occurrence matrix;
the steps of obtaining texture features of a sub-region image based on first-order statistics are as follows:
A31, converting the sub-region image into an 8-bit grayscale image;
A32, re-binning the 256 gray levels into 20 equidistant gray levels and counting the number of pixels at each level;
A33, sequentially calculating the mean, standard deviation, skewness, kurtosis, uniformity and entropy of the resulting counts, extracting a one-dimensional feature vector from each sub-region image;
the steps of obtaining texture features of a sub-region image based on the gray-level co-occurrence matrix are as follows:
B31, converting the sub-region image into an 8-bit grayscale image;
B32, re-binning the 256 gray levels into 32 equidistant gray levels to obtain a 32-level grayscale image;
B33, calculating four gray-level co-occurrence matrices of the 32-level grayscale image with a pixel distance of 4 and angles of 0°, 45°, 90° and 135°, respectively;
B34, calculating the angular second moment, contrast, entropy, uniformity and autocorrelation from the gray-level co-occurrence matrices, and averaging the feature values obtained at the four angles to obtain a one-dimensional feature vector.
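Steps A31-A33 can be sketched in a few lines of numpy. The 20-level re-binning and the histogram statistics follow the claim; the helper name and the exact binning rule (uniform division of the 8-bit range) are assumptions.

```python
import numpy as np

def first_order_features(gray8, n_levels=20):
    """First-order texture features of an 8-bit grayscale patch.

    Re-bins the 256 levels into n_levels equidistant bins, then computes
    mean, standard deviation, skewness, kurtosis, uniformity and entropy
    of the normalised histogram (sketch of steps A31-A33).
    """
    binned = (gray8.astype(np.float64) * n_levels / 256).astype(int)
    binned = np.clip(binned, 0, n_levels - 1)
    hist = np.bincount(binned.ravel(), minlength=n_levels)
    p = hist / hist.sum()
    levels = np.arange(n_levels)
    mean = (levels * p).sum()
    var = ((levels - mean) ** 2 * p).sum()
    std = np.sqrt(var)
    skew = ((levels - mean) ** 3 * p).sum() / (std ** 3 + 1e-12)
    kurt = ((levels - mean) ** 4 * p).sum() / (var ** 2 + 1e-12)
    uniformity = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([mean, std, skew, kurt, uniformity, entropy])

patch = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
feats = first_order_features(patch)          # 6-D feature vector
flat = first_order_features(np.full((8, 8), 100, dtype=np.uint8))
```

A constant patch lands in a single bin, so its uniformity is 1 and its entropy is 0, which is a handy check on the histogram normalisation.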
4. The bone quality identification model training method based on label-free nonlinear multi-modal imaging according to claim 1, characterized in that S4 comprises the following steps:
A41, transversely concatenating the texture features of the different channels of the same sample bone to obtain a multi-dimensional feature matrix;
A42, normalizing the multi-dimensional feature matrix;
A43, reducing the dimensionality of the normalized multi-dimensional feature matrix by PCA: sorting the obtained principal components in descending order of their contribution to the total variance, selecting the smallest number of principal components such that the cumulative variance contribution exceeds a given threshold, and projecting the original feature matrix onto the selected principal components to obtain a dimension-reduced feature matrix;
A44, using a random forest model as the classifier, inputting the dimension-reduced feature matrix of the test set into the K trained classifiers, and selecting the model that performs best on the test set as the model for identifying T2DM bone quality changes.
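The component-selection rule of step A43 (keep the fewest principal components whose cumulative variance contribution exceeds a threshold) can be sketched with an SVD. The 95% threshold and the synthetic feature matrix are illustrative only.

```python
import numpy as np

def pca_reduce(X, var_threshold=0.95):
    """Project the feature matrix onto the fewest principal components
    whose cumulative explained-variance ratio exceeds var_threshold."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / (S**2).sum()               # already in descending order
    # Smallest k with cumulative contribution >= threshold.
    k = int(np.searchsorted(np.cumsum(explained), var_threshold)) + 1
    return Xc @ Vt[:k].T, explained[:k]

# Synthetic 40 x 10 feature matrix whose variance lives in 2 directions.
rng = np.random.default_rng(1)
latent = rng.normal(size=(40, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(40, 10))
Z, ratios = pca_reduce(X, var_threshold=0.95)
```

Because nearly all variance lies in two directions, the rule keeps at most two or three components, and the retained ratios sum to at least the threshold by construction.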
5. The bone quality identification model training method based on label-free nonlinear multi-modal imaging according to claim 1, characterized in that S4 comprises the following steps:
B41, feeding the texture feature matrices of the several channels of the sub-region images separately, as independent inputs, to the first-layer classifiers of a Stacking model;
B42, obtaining the first-order prediction probabilities of the training set by K-fold cross-validation, the first-order prediction probabilities of the training set forming a first-order prediction probability matrix of dimension 5 × M1, where M1 is the number of sub-region images in the input training set;
B43, obtaining the first-order prediction probabilities of the test set by K-fold cross-validation, the first-order prediction probabilities of the test set forming a first-order prediction probability matrix of dimension 5 × M2, where M2 is the number of sub-region images in the input test set and each probability is the average of the prediction probabilities of the K cross-validation models;
B44, passing the 5 × M1-dimensional first-order prediction probability matrix to the second-layer classifier as the training-set feature input to complete training;
inputting the 5 × M2-dimensional first-order prediction probability matrix of the test set into the second-layer classifier to obtain the final prediction result;
B45, selecting the model combination whose second-layer classifier performs best on the test set as the model for identifying T2DM bone quality changes.
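The first-layer construction of steps B41-B42 can be sketched as follows. The nearest-centroid scorer is a hypothetical stand-in for whatever first-layer classifiers are actually used, and the fold splitting, channel count and sample sizes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def fold_indices(n, k):
    """Split n sample indices into k roughly equal contiguous folds."""
    return np.array_split(np.arange(n), k)

def centroid_proba(X_train, y_train, X_eval):
    """Stand-in first-layer classifier (hypothetical): score each sample
    by its relative distance to the two class centroids, giving a value
    in [0, 1] that behaves like a class-1 probability."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    d0 = np.linalg.norm(X_eval - c0, axis=1)
    d1 = np.linalg.norm(X_eval - c1, axis=1)
    return d0 / (d0 + d1 + 1e-12)

def first_order_oof(channels, y, k=5):
    """Build the (n_channels x M) first-order prediction probability
    matrix from K-fold out-of-fold predictions, one row per channel."""
    m = len(y)
    out = np.zeros((len(channels), m))
    for c, X in enumerate(channels):
        for fold in fold_indices(m, k):
            train = np.ones(m, dtype=bool)
            train[fold] = False      # hold out this fold, fit on the rest
            out[c, fold] = centroid_proba(X[train], y[train], X[fold])
    return out

# Synthetic example: 5 channels, 30 sub-region images, 6 features each.
y = np.tile([0, 1], 15)
channels = [rng.normal(size=(30, 6)) + y[:, None] * (c + 1) for c in range(5)]
P = first_order_oof(channels, y)     # 5 x M1 first-order probability matrix
```

Each row of `P` is one channel's out-of-fold probabilities, so the second-layer classifier trains on features that were never predicted by a model that saw them, which is the point of the stacking construction.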
6. A bone quality identification device, characterized by comprising:
an acquisition module configured to acquire image data of a hydroxyapatite channel, a lipid channel, a protein channel, an SHG channel and a TPEF channel of a sample bone, comprising the following steps:
S14, identifying the structural boundary of the sample bone based on the TPEF signal and saving the boundary as a mask, denoted osteon_mask;
S15, segmenting the image data of the same bone unit region from each channel image using osteon_mask, and performing image filtering;
S16, converting each original-coordinate image filtered in S15 into a polar-coordinate image with the geometric center of the Haversian canal within the bone unit as the origin, generating the final image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel;
a data processing module configured to divide the image data acquired by the acquisition module into a plurality of sub-region images and to obtain texture features of the sub-region images;
a prediction module configured to train with the texture features to obtain a model for identifying T2DM bone quality changes;
wherein the image data of the hydroxyapatite channel, lipid channel, protein channel, SHG channel and TPEF channel acquired in real time are processed to obtain real-time texture features, the real-time texture features are input into the prediction module to obtain a plurality of classification probabilities, and the average of the classification probabilities is taken as the final output classification probability.
7. An electronic device comprising a processor and a memory communicatively coupled to the processor for storing processor-executable instructions, characterized in that: the processor is configured to perform the method of any one of claims 1-5.
8. A server, characterized by: comprising at least one processor and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor to cause the at least one processor to perform the method of any of claims 1-5.
9. A computer-readable storage medium storing a computer program, characterized in that: the computer program implementing the method of any of claims 1-5 when executed by a processor.
CN202410275402.6A 2024-03-12 2024-03-12 Bone quality identification model training method based on label-free nonlinear multi-modal imaging Active CN117876372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410275402.6A CN117876372B (en) 2024-03-12 2024-03-12 Bone quality identification model training method based on label-free nonlinear multi-modal imaging


Publications (2)

Publication Number Publication Date
CN117876372A CN117876372A (en) 2024-04-12
CN117876372B (en) 2024-05-28

Family

ID=90590417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410275402.6A Active CN117876372B (en) 2024-03-12 2024-03-12 Bone quality identification model training method based on label-free nonlinear multi-modal imaging

Country Status (1)

Country Link
CN (1) CN117876372B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724357A (en) * 2020-06-09 2020-09-29 四川大学 Arm bone density measuring method based on digital radiation image and support vector regression
CN112001429A (en) * 2020-08-06 2020-11-27 中山大学 Depth forgery video detection method based on texture features
CN114373543A (en) * 2021-12-15 2022-04-19 河北工程大学附属医院 Human muscle and bone steady state assessment and health management service system
CN114792567A (en) * 2022-05-19 2022-07-26 上海交通大学医学院附属瑞金医院 Device for predicting fracture occurrence risk of type 2diabetes patient
WO2023052571A1 (en) * 2021-09-29 2023-04-06 Medimaps Group Sa Process and device for analyzing a texture of a tissue

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9710908B2 (en) * 2013-01-08 2017-07-18 Agency For Science, Technology And Research Method and system for assessing fibrosis in a tissue
WO2021201908A1 (en) * 2020-04-03 2021-10-07 New York Society For The Relief Of The Ruptured And Crippled, Maintaining The Hospital For Special Surgery Mri-based textural analysis of trabecular bone
KR102510221B1 (en) * 2020-12-24 2023-03-15 연세대학교 산학협력단 A method of bone fracture prediction and an apparatus thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Study on monitoring angiogenesis in gene-modified scaffolds and repair of critical bone defects based on multi-modal imaging technology; Li Jian et al.; Journal of Integration Technology; 2018-01-31; pp. 11-24 *
Influencing factors of bone quality and methods for its measurement; Wang Guihua; Zhao Jianning; Journal of Medical Postgraduates; 2011-10-15 (No. 10); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant