CN114287915B - Noninvasive scoliosis screening method and system based on back color images - Google Patents
- Publication number: CN114287915B (application CN202111629135.0A)
- Authority: CN (China)
- Legal status: Active
Abstract
The invention discloses a noninvasive scoliosis screening method and system based on back color images, wherein the method comprises the following steps: S1, acquiring an RGB-D image of the human back and storing it separately as a depth image and a color image; S2, training a Mask-RCNN network to obtain a human body segmentation model; S3, segmenting the human body to obtain a background-free human body color image; S4, training a YOLOv5 network to obtain a back recognition model; S5, cropping the back region of the human body to obtain a back color image; S6, obtaining a depth map of the human back and calculating the maximum ATR angle value; and S7, formulating a classification standard, labeling class labels, and inputting the back color images and class labels into an EfficientNet network for training to obtain a spine classification model. By combining artificial intelligence, image processing, and biomechanics techniques, the invention can automatically, efficiently, and stably evaluate, through four-stage scoliosis screening, whether an input image shows a risk of scoliosis.
Description
Technical Field
The invention belongs to the technical field of image processing and artificial intelligence, and particularly relates to a noninvasive scoliosis screening method and system based on back color images.
Background
Adolescent idiopathic scoliosis (AIS) is the most common spinal disorder in adolescents, with a global prevalence of 0.5-5.2%. Without intervention, AIS may progress before skeletal maturity, affecting physical appearance, impairing cardiopulmonary function, and even causing paralysis. As a chronic condition, however, it can generally be detected early in its course and can be effectively controlled and corrected through timely periodic review and health education. Prevention of this disease is therefore considered far more important than treatment after the fact.
In the clinic, AIS is commonly diagnosed by measuring the axial trunk rotation (ATR) angle or by calculating the Cobb angle. Although the Cobb angle is the more widely used measure in AIS diagnosis in China, it cannot be accurately calculated from body surface data and requires an additional invasive radiographic examination to expose the full spinal features. In the field of AIS screening, therefore, the ATR angle, which can be obtained directly from the body surface, is the more internationally popular and universal standard.
Currently, conventional AIS screening comprises primary screening, outpatient screening, and instrument screening. Common means for primary and outpatient screening include general examination, the forward bend test, and scoliometer examination. The general examination and the forward bend test require the subject to expose the back and stand naturally or adopt a standard forward-bend posture, and the examiner makes the diagnosis by visual observation. Their accuracy is highly dependent on the clinician's diagnostic experience, which makes the screening subjective. The scoliometer examination measures the ATR angle with a scoliometer while the subject holds the forward-bend posture. Although this is inexpensive, readily available, and harmless, the complex diagnostic procedure and crude measuring instrument make it difficult to ensure a standardized diagnosis. At the same time, these methods also rely on palpation during the diagnostic procedure, which raises ethical problems. Instrument screening usually refers to X-ray examination. It is specific and authoritative in terms of diagnostic accuracy and can accurately calculate the Cobb angle of the spine, but its limitations cannot be ignored: X-ray equipment causes radiation injury to the patient; moreover, X-ray examination is expensive, imposes strict requirements on the imaging environment, and must be operated by a professional technician. These drawbacks limit the usability of X-rays in routine screening.
To address these problems, many non-invasive AIS assessment methods have been proposed, for example using moire topography or parallel rays to reveal the back surface morphology, but strict equipment requirements have prevented their popularization and widespread use. Another assessment method without radiation injury, based on ultrasound, was introduced to image the shape of the spine, but it requires applying a medium to the back, handling by a specialized doctor, and direct access to the subject's back, which made screening very inconvenient and inefficient during the COVID-19 epidemic. In addition, optical non-invasive surface measurement systems [9] built on high-precision surface measurement devices enable three-dimensional reconstruction of the back or the entire torso, but they are usually very expensive. There is therefore an urgent need for a non-invasive, non-contact, low-cost method for primary AIS screening and subsequent rehabilitation follow-up of the spine.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a noninvasive scoliosis screening method based on back color images, which, by combining artificial intelligence, image processing, and biomechanics techniques, can automatically, efficiently, and stably evaluate through four-stage scoliosis screening whether an input image shows a risk of scoliosis; the invention further provides a noninvasive scoliosis screening system.
The aim of the invention is realized by the following technical scheme: a noninvasive scoliosis screening method based on back color images comprises the following steps:
S1, acquiring RGB-D images of the backs of subjects from different regions and of different ages, sexes, and degrees of scoliosis, and storing each image separately as a depth image and a color image to obtain corresponding depth image and color image data sets;
S2, annotating the human body region in the color images manually, and inputting the original color images and the annotation files into a Mask-RCNN network for training to obtain a human body segmentation model;
S3, inputting the color image to be segmented into the human body segmentation model trained in step S2 to segment the human body, and then filling the background with black to obtain a background-free human body color image;
S4, manually annotating a bounding box around the back region from the seventh cervical vertebra (C7) to the sacrum in the human body color images, and training a YOLOv5 network with the human body color images and annotation files to obtain, after training, a back recognition model capable of recognizing the back region from C7 to the sacrum in an image;
S5, inputting the human body color image to be recognized into the back recognition model trained in step S4, recognizing the back region, cropping the back region from the image, and filling the other positions with black to obtain a back color image;
S6, using the back color image obtained in step S5 as a template, extracting the back region of the depth image data according to the correspondence between the depth image and the color image to obtain a depth map of the human back; computing a back point cloud from the intrinsic and extrinsic parameters of the RGB-D camera and the back depth map, reconstructing the three-dimensional shape of the back from the point cloud, then extracting the medical anatomical feature points of the back, locating the spinous process points, extracting the cross sections at each spinous process position, and calculating the maximum ATR angle value;
and S7, according to the maximum ATR angle value calculated in step S6, formulating a classification standard and labeling class labels, then inputting the back color images and class labels into an EfficientNet network for training to obtain, after training, a spine classification model capable of judging whether the spine is normal.
Further, step S2 is implemented as follows: annotate the human body part in the original color image in point-annotation format using Labelme software, name the annotated region Person, and store the coordinates and name of each point in an annotation file; upload all original color images and annotation files to the computing device to train the Mask-RCNN network. The Mask-RCNN data processing flow is: input the image into the constructed Mask-RCNN network and extract image features with a convolutional neural network (CNN); generate N proposal windows for each image using the FPN; map the proposal windows onto the last convolutional feature map of the CNN; then pass each RoI through the RoI Align layer to produce a fixed-size feature map; and finally classify human body versus background with fully connected layers and regress the bounding-box positions.
Further, step S4 is implemented as follows: using LabelImg software, tightly frame the back region between the seventh cervical vertebra and the sacrum in the human body color image and name the rectangular box Back; the vertex coordinates and name of the annotation box are stored in an annotation file in a specific format; upload all human body color images and annotation files to the computing device to train the YOLOv5 network. The YOLOv5 data processing flow is: input the image into the constructed YOLOv5 network and extract image features through the CSPDarknet53 and Focus structures; then use the SPP and FPN+PAN modules in the Neck network to further improve feature diversity and robustness; finally, output the bounding box by regression and the class of the framed target by classification.
Further, in step S6, the maximum ATR angle value is calculated as follows:
S61, based on the three-dimensional point cloud of the human back, calculate the mean curvature k1 and Gaussian curvature k2 of the back surface, and classify the back features into parabolic, convex, concave, and saddle surfaces according to the curvature information: parabolic when k1 = 0 or k2 = 0; convex when k2 < 0; concave when k1 > 0; saddle when k2 > 0 > k1. Then, combining the positions and characteristics of the medical anatomical points of the back, locate and label the vertebra prominens (C7), the sacral point, the left and right posterior superior iliac spines, and the spinous process line. The labeling rules are as follows: the vertebra prominens lies on the convex surface of the cervical region, i.e. where k2 > 0; the sacral point lies at the lowest concave surface of the buttocks, i.e. where k1 > 0; the left and right posterior superior iliac spines lie at the concavities above the buttocks, i.e. where k1 > 0; each spinous process point is the position of minimum left-right curvature difference on the cross section of the corresponding spinal level, and the line connecting the spinous process points forms the spinous process line;
S62, extract the three-dimensional back cross-section curve corresponding to the spinous process point of each of the 18 spinal levels between the vertebra prominens and the L5 lumbar vertebra; on each cross-section curve, take 20 units to the left and to the right of the spinous process position to obtain the left and right target points and record their three-dimensional positions; calculate the rotation angle of the back at that vertebral level from the relative positions of the two target points; compute the rotation angles of all 18 vertebrae on their cross sections, and define the largest of them as the maximum ATR angle value.
Further, in the step S7, the classification criteria include the following two schemes:
Scheme 1: based on the maximum ATR angle value, divide the back color images into two classes. When the maximum ATR angle value is smaller than 5 degrees, the spine in the current back image is considered free of abnormality or only slightly posturally abnormal, requiring only regular review and health education; when the maximum ATR angle value is greater than or equal to 5 degrees, the spine in the back image is considered suspected of scoliosis, and further outpatient screening and instrument screening are needed so that intervention and treatment can be carried out in time. Under this scheme, back color images with a maximum ATR angle value smaller than 5 degrees are labeled normal with label value 0, and back color images with a maximum ATR angle value greater than or equal to 5 degrees are labeled abnormal with label value 1;
Scheme 2: based on the maximum ATR angle value, divide the back color images into three classes. When the maximum ATR angle value is smaller than or equal to 4 degrees, the spine in the current back image is considered essentially normal, and good habits should be maintained; when the maximum ATR angle value is between 4 and 7 degrees, the spine is considered to carry some risk of lateral bending, but at a low level, and the disease course needs further observation and monitoring; when the maximum ATR angle value is greater than 7 degrees, the spine is considered at high risk of lateral bending, and professional medical measures must be taken in time. Under this scheme, back color images with a maximum ATR angle value smaller than or equal to 4 degrees are labeled normal with label value 0; images with a maximum ATR angle value between 4 and 7 degrees are labeled low scoliosis risk with label value 1; and images with a maximum ATR angle value greater than 7 degrees are labeled high scoliosis risk with label value 2;
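The two labeling schemes can be sketched as simple threshold functions (hypothetical helper names; thresholds as specified above):

```python
def label_scheme_one(max_atr_deg: float) -> int:
    """Scheme 1, two classes: < 5 deg -> normal (0); >= 5 deg -> abnormal (1)."""
    return 0 if max_atr_deg < 5.0 else 1

def label_scheme_two(max_atr_deg: float) -> int:
    """Scheme 2, three classes: <= 4 deg -> normal (0);
    4-7 deg -> low scoliosis risk (1); > 7 deg -> high risk (2)."""
    if max_atr_deg <= 4.0:
        return 0
    if max_atr_deg <= 7.0:
        return 1
    return 2
```

These label values (0/1 or 0/1/2) are what would be paired with each back color image before EfficientNet training.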
The EfficientNet data processing flow is: input the image into the constructed EfficientNet network, extract image features with a CNN, and output the image class through the classification layer.
Another object of the invention is to provide a noninvasive scoliosis screening system based on back color images, comprising the following modules:
Image acquisition module: acquires RGB-D images of the backs of subjects from different regions and of different ages, sexes, and degrees of scoliosis, and stores each image separately as a depth image and a color image to obtain depth and color image data sets in one-to-one correspondence;
Human body segmentation module: trains the Mask-RCNN network with the annotated original color images to obtain a human body segmentation model;
Human back recognition module: trains the YOLOv5 network with the annotated human body color images to obtain a back recognition model;
Spine assessment module: trains the EfficientNet network with the back color images labeled according to the classification standard to obtain a spine classification model.
Specifically, the noninvasive scoliosis screening system further comprises the following modules:
Raw data annotation module: manually annotates the acquired original color images; pixel-level annotation is used, and the human body and the background are annotated along the contour of the human body;
First training data storage module: stores the manually annotated original color images as the first training data;
Human color data annotation module: manually annotates the segmented human body color images; during annotation, a bounding box and a name are placed at the back region between the seventh cervical vertebra and the sacrum;
Second training data storage module: stores the human body color images with manually annotated backs as the second training data;
Classification label storage module: saves the image names, the calculated maximum ATR angle values, and the class label values under the different schemes in an Excel file.
Specifically, the noninvasive scoliosis screening system based on the back color image further comprises the following modules:
Mask-RCNN network training module: trains the Mask-RCNN network with the first training data;
YOLOv5 network training module: trains the YOLOv5 network with the second training data;
Maximum ATR angle value calculation module: calculates the maximum ATR angle value from the depth map information of the human back;
EfficientNet network training module: trains the EfficientNet network with the data in the classification label storage module.
The beneficial effects of the invention are as follows: by combining artificial intelligence, image processing, and biomechanics techniques, the invention provides a noninvasive scoliosis screening method and system based on back color images in which the screening task is completed in four stages: the first stage removes the background of the original color image, retaining a color image containing only the human body region; the second stage recognizes the back region in the human body color image; the third stage automatically calculates the maximum ATR angle value of the back from the three-dimensional point cloud as the ground-truth label; and the fourth stage evaluates the scoliosis condition. Through this four-stage screening, the invention can automatically, efficiently, and stably evaluate whether an input image shows a risk of scoliosis. It thus achieves preliminary screening of scoliosis in a non-invasive, non-contact, low-cost way, completely avoids radiation injury, effectively resolves the ethical problems, and greatly reduces the examination cost for patients.
Drawings
FIG. 1 is a flow chart of a non-invasive scoliosis screening method based on a back color image of the present invention;
FIG. 2 is an example of the image mask and the segmented human body color image obtained in step S3 by inputting the original color image into the human body segmentation model;
FIG. 3 shows the back recognition result and an example of the back color image retaining only the back;
FIG. 4 is an exemplary view of images of different categories under two classification criteria.
Detailed Description
The ATR angle is highly correlated with back information, and screening by ATR is currently the dominant preliminary screening approach. A convenient, fast, noninvasive preliminary AIS screening method built on RGB images of the back surface therefore allows adolescents' scoliosis risk to be assessed at home with common devices such as mobile phones, reducing social medical expenditure and the incidence of scoliosis. The technical scheme of the invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the non-invasive scoliosis screening method based on back color images of the invention comprises the following steps:
S1, acquiring RGB-D images of the backs of subjects from different regions and of different ages, sexes, and degrees of scoliosis, and storing each image separately as a depth image and a color image to obtain corresponding depth image and color image data sets;
the essence of an RGB-D image is the combination of two images: one is a color image (i.e., RGB part in RGB-D) with three color channels of red (R), green (G), blue (B), and one is a single channel Depth image (i.e., D part in RGB-D) that records only the actual distance (Depth) of the sensor from the object. Wherein the color image can describe the appearance, color, texture information of the object, and the depth image can describe the shape, scale, geometric space information of the object. The RGB-D image may be separated into a color image and a depth image, which are stored separately.
The RGB-D images of the back are taken with a non-invasive, non-contact image capture device, such as a Kinect sensor. The image must be taken from directly behind the subject, with no clothing covering the back, so that the medical anatomical landmarks of the back are fully exposed. The captured images are saved on a local computer; the depth map and the color image must be stored separately, but named so that they remain associated and can be matched one to one.
S2, annotating the human body region in the color images manually, and inputting the original color images and the annotation files into a Mask-RCNN network for training to obtain a human body segmentation model capable of segmenting the human body in an image;
The specific implementation is as follows: annotate the human body part in the original color image in point-annotation format using Labelme software, name the annotated region Person, and store the coordinates and name of each point in an annotation file; upload all original color images and annotation files to the computing device to train the Mask-RCNN network. The Mask-RCNN data processing flow is: input the image into the constructed Mask-RCNN network and extract image features with a convolutional neural network (CNN); generate N proposal windows for each image using the FPN; map the proposal windows onto the last convolutional feature map of the CNN; then pass each RoI through the RoI Align layer to produce a fixed-size feature map; and finally classify human body versus background with fully connected layers and regress the bounding-box positions.
S3, inputting the color image to be segmented into the human body segmentation model trained in step S2 to segment the human body, and then filling the background with black to obtain a background-free human body color image;
Input all original color images into the trained human body segmentation model, which outputs an image mask in which the human body region is white (R, G, B values of 255) and the background region is black (R, G, B values of 0). Compare the value of each pixel in the mask with the pixel at the corresponding position of the original color image and take the smaller of the two, yielding a human body color image whose background is filled with black. The image mask and the segmentation result obtained in this embodiment are shown in fig. 2.
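The per-pixel minimum described above is a one-line operation in NumPy (a sketch of the described masking step, not the patent's actual code):

```python
import numpy as np

def apply_mask(color: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Per-pixel minimum of the color image and the segmentation mask
    (255 inside the body, 0 in the background): body pixels keep their
    original values, background pixels become black."""
    return np.minimum(color, mask)
```

Because min(v, 255) = v and min(v, 0) = 0 for uint8 values, this reproduces exactly the "take the smaller value" rule in the text.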
S4, manually annotating a bounding box around the back region from the seventh cervical vertebra (C7) to the sacrum in the human body color images, and training a YOLOv5 network with the human body color images and annotation files to obtain, after training, a back recognition model capable of recognizing the back region from C7 to the sacrum in an image;
The specific implementation is as follows: using LabelImg software, tightly frame the back region between the seventh cervical vertebra and the sacrum in the human body color image and name the rectangular box Back; the vertex coordinates and name of the annotation box are stored in an annotation file in a specific format; upload all human body color images and annotation files to the computing device to train the YOLOv5 network. The YOLOv5 data processing flow is: input the image into the constructed YOLOv5 network and extract image features through the CSPDarknet53 and Focus structures; then use the SPP and FPN+PAN modules in the Neck network to further improve feature diversity and robustness; finally, output the bounding box by regression and the class of the framed target (i.e. the human back region) by classification.
S5, inputting the human body color image to be recognized into the back recognition model trained in step S4, recognizing the back region, cropping the back region from the image, and filling the other positions with black to obtain a back color image;
Input all human body color images into the trained back recognition model, which outputs the image with the back annotated, the confidence that the boxed target is the human back, and a text document named after the image that records the center coordinates of the bounding box and its width and height. From this document, calculate the upper and lower y-coordinates of the bounding box and fill the pixels below the lower coordinate and above the upper coordinate with black, obtaining a back color image that retains only the back. The back recognition result and the back-only color image obtained in this embodiment are shown in fig. 3.
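The row-cropping step above can be sketched as follows (this assumes the box center and height have already been converted to pixel units; YOLOv5's label files store normalized coordinates):

```python
import numpy as np

def keep_back_rows(image: np.ndarray, cy: float, h: float) -> np.ndarray:
    """Keep only the rows covered by the detected back box, given its
    center y-coordinate cy and height h in pixels; fill all other rows
    with black, as described for step S5."""
    top = max(int(round(cy - h / 2)), 0)
    bottom = min(int(round(cy + h / 2)), image.shape[0])
    out = np.zeros_like(image)
    out[top:bottom] = image[top:bottom]
    return out
```

The text only mentions clipping by the upper and lower y-coordinates, so horizontal extent is left untouched here.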
S6, using the back color image obtained in step S5 as a template, extract the back region of the depth image data according to the correspondence between the depth image and the color image to obtain a depth map of the human back; compute a back point cloud from the intrinsic and extrinsic parameters of the RGB-D camera and the back depth map, reconstruct the three-dimensional shape of the back from the point cloud, then extract the medical anatomical feature points of the back, locate the spinous process points, extract the cross sections at each spinous process position, and calculate the maximum ATR angle value. Medically, the spinous process is part of the spinal anatomy. A vertebra is an irregular bone: the thick rounded structure at the front is the vertebral body, behind it lies the vertebral canal, behind the vertebral canal is a bony structure protruding backward called the spinous process, and on either side are pointed bony structures called the transverse processes. The spinous process is therefore the backward-facing bony prominence of the spine, located posteriorly.
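The depth-map-to-point-cloud step can be sketched with the standard pinhole back-projection (fx, fy, cx, cy are the camera intrinsics; the extrinsic transform is omitted for brevity, so points are in the camera frame):

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map into an N x 3 point cloud using the
    pinhole model. Pixels with zero depth (no measurement, or masked-out
    background) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```

Because the back region was cropped with a black (zero) fill in step S5, the zero-depth filter conveniently discards everything outside the back.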
The specific implementation method for calculating the maximum ATR angle value comprises the following steps:
S61, calculating the mean curvature k1 and the Gaussian curvature k2 of the back of the human body from the three-dimensional back point cloud, and classifying the back surface into parabolic, convex, concave and saddle regions according to the curvature information: parabolic where k1=0 or k2=0; convex where k2<0; concave where k1>0; saddle where k2>0>k1. The vertebra prominens (the 7th cervical vertebra, C7), the sacrum, the left and right posterior superior iliac spines and the spinous process line (the line formed by the spinous process points of the vertebrae) are then labeled and located by combining these features with the positions of the medical anatomical points of the back. The labeling rules are as follows: the vertebra prominens lies on the convex surface of the cervical region, i.e. where k2<0; the sacral point lies at the lowest concave surface of the buttocks, i.e. where k1>0; the left and right posterior superior iliac spines lie at the concave positions above the buttocks, i.e. where k1>0; each spinous process point is the position on the cross section through a spinal point where the left-right curvature difference is smallest, and the line connecting the spinous process points forms the spinous process line;
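The curvature rules of S61 can be expressed as a small classifier. This is a sketch: the tie-breaking order among the overlapping conditions (parabolic first, then saddle, then convex/concave) is an assumption, since the text does not fix it:

```python
def classify_surface(k1, k2, eps=1e-6):
    """Classify one back surface point from mean curvature k1 and
    Gaussian curvature k2, following the rules in step S61."""
    if abs(k1) < eps or abs(k2) < eps:
        return "parabolic"     # k1 = 0 or k2 = 0
    if k2 > 0 > k1:
        return "saddle"        # k2 > 0 > k1
    if k2 < 0:
        return "convex"        # k2 < 0
    if k1 > 0:
        return "concave"       # k1 > 0
    return "undetermined"
```
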
S62, extracting the three-dimensional back cross-section curves corresponding to the spinous process points of the 18 spinal points between the vertebra prominens and the L5 lumbar vertebra. On each cross-section curve, taking the spinous process point as the center, a left and a right target point are extracted 20 units to the left and right, their three-dimensional positions are recorded, and the rotation angle of the back at that vertebra is calculated from the relative positions of the left and right target points. The rotation angles of all 18 vertebrae are calculated on their cross sections in this way, and the maximum rotation angle is defined as the maximum ATR angle value.
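The per-vertebra rotation angle of S62 can be sketched as the inclination of the line through the left and right target points relative to the horizontal, which is the usual scoliometer-style ATR reading; the patent does not state its exact formula, so this geometric definition is an assumption:

```python
import math

def atr_angle(left_pt, right_pt):
    """Angle of trunk rotation (ATR) at one cross section: the inclination of
    the line through symmetric left/right back points relative to horizontal.

    left_pt, right_pt : (x, y, z) with x horizontal across the back and
    z the depth toward the camera (assumed axis convention).
    """
    dx = left_pt[0] - right_pt[0]
    dz = left_pt[2] - right_pt[2]
    return math.degrees(math.atan2(abs(dz), abs(dx)))

def max_atr(sections):
    """sections: list of (left_pt, right_pt) pairs, one per vertebra (18 here).
    Returns the maximum ATR angle value over all cross sections."""
    return max(atr_angle(l, r) for l, r in sections)
```
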
S7, according to the maximum ATR angle value calculated in step S6, a classification standard is formulated and class labels are assigned; the back color images and class labels are input into an EfficientNet network for training, and after training a spine classification model capable of judging whether the spine is normal is obtained.
Specifically, the classification standard has the following two schemes; schematic images of the different categories under the two schemes are shown in fig. 4.
Scheme one: according to the maximum ATR angle value, the back color images are divided into two categories. When the maximum ATR angle value is smaller than 5 degrees, the spine in the current back image is considered free of abnormality or only slightly abnormal in posture, and only regular review and health education are needed; when the maximum ATR angle value is greater than or equal to 5 degrees, scoliosis is suspected, and further outpatient screening and instrument screening are needed so that intervention and treatment can be carried out in time. Under this scheme, a back color image with a maximum ATR angle value smaller than 5 degrees is labeled normal, with label value 0; a back color image with a maximum ATR angle value greater than or equal to 5 degrees is labeled abnormal, with label value 1.
Scheme two: according to the maximum ATR angle value, the back color images are divided into three categories. When the maximum ATR angle value is smaller than or equal to 4 degrees, the spine in the current back image is considered basically normal and good habits should be maintained; when the maximum ATR angle value is between 4 and 7 degrees, the spine is considered to carry some risk of lateral bending, but the risk is low and the course of the condition needs further observation and monitoring; when the maximum ATR angle value is greater than 7 degrees, the spine is considered to carry a high risk of lateral bending, and professional medical treatment is needed in time. Under this scheme, a back color image with a maximum ATR angle value smaller than or equal to 4 degrees is labeled normal, with label value 0; a back color image with a maximum ATR angle value between 4 and 7 degrees is labeled low risk of scoliosis, with label value 1; a back color image with a maximum ATR angle value greater than 7 degrees is labeled high risk of scoliosis, with label value 2.
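The two labeling schemes map the maximum ATR angle value to label values as follows. This is a direct transcription of the thresholds above; the treatment of the exact 7-degree boundary under scheme two is an assumption, since the text only says "between 4 and 7 degrees" and "greater than 7 degrees":

```python
def label_scheme_one(max_atr_deg):
    """Two-class standard: 0 = normal (< 5 deg), 1 = suspected scoliosis (>= 5 deg)."""
    return 0 if max_atr_deg < 5 else 1

def label_scheme_two(max_atr_deg):
    """Three-class standard: 0 = normal (<= 4 deg),
    1 = low risk (4-7 deg), 2 = high risk (> 7 deg)."""
    if max_atr_deg <= 4:
        return 0
    if max_atr_deg <= 7:
        return 1
    return 2
```
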
EfficientNet: the image is input into the built EfficientNet network structure, image features are extracted by the CNN, and the image category (i.e. whether the spine is normal or abnormal) is output by the classification layer.
Through the trained EfficientNet network, the degree of scoliosis risk can be judged from a single back color image. In this embodiment, the back color images labeled with the scoliosis risk degree according to the maximum ATR angle value serve as training data for the spine evaluation module, and the image data are uploaded to a device with high-performance computing capability for training. The deep learning algorithm used is the EfficientNet network, and training yields a model for spine evaluation. Given a back color image as input, the model quickly reports the scoliosis risk in the image.
The invention also discloses a noninvasive scoliosis screening system based on back color images, comprising the following modules:

Image acquisition module: used for acquiring original RGB-D data of back images of human bodies of different regions, ages, sexes and degrees of lateral bending, and storing the data as depth images and color images to obtain one-to-one corresponding depth image and color image data sets;

Human body segmentation module: used for training the Mask-RCNN network with the labeled original color images to obtain a human body segmentation model; the human body segmentation model performs human body segmentation on an original color image to be segmented to obtain a background-free human body color image, which is saved;

Human back recognition module: used for training the YOLOv5 network with the labeled human body color images to obtain a back recognition model; the back recognition model performs back recognition on a human body color image to be recognized and retains the recognized back, yielding a back color image, which is saved;

Spine evaluation module: used for training the EfficientNet network with the back color images labeled according to the classification standard to obtain a spine classification model, which classifies an input back color image to evaluate the occurrence of scoliosis;

Original data labeling module: used for manually labeling the acquired original color images; pixel-level labeling is adopted, and the human body and the background are labeled along the contour of the human body;

First training data storage module: used for saving the original color images with the human body manually labeled as the first training data;

Human color data labeling module: used for manually labeling the segmented human body color images; a labeling box and a name are set at the back region between the seventh cervical vertebra and the sacrum;

Second training data storage module: used for saving the human body color images with the back manually labeled as the second training data;

Classification label storage module: used for saving, in an Excel file, the image name, the calculated maximum ATR angle value, and the class label values under the different schemes;

Mask-RCNN network training module: used for training the Mask-RCNN network with the first training data;

YOLOv5 network training module: used for training the YOLOv5 network with the second training data;

Maximum ATR angle value calculation module: used for calculating the maximum ATR angle value from the depth information of the human back;

EfficientNet network training module: used for training the EfficientNet network with the data in the classification label storage module.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other modifications and combinations based on the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the disclosure.
Claims (7)
1. A noninvasive scoliosis screening method based on back color images is characterized by comprising the following steps:
S1, acquiring RGB-D images of the backs of human bodies of different regions, ages, sexes and degrees of scoliosis, and storing them as depth images and color images to obtain corresponding depth image and color image data sets;
s2, marking a human body region in the color image in a manual marking mode, and inputting the original color image and the marking file into a Mask-RCNN network for training to obtain a human body segmentation model;
s3, inputting the color image to be segmented into the human body segmentation model trained in the step S2 to segment the human body, and then filling the background into black to obtain a human body color image without the background;
S4, framing, by manual labeling, the back region from the seventh cervical vertebra to the sacrum in the human body color image, and training the YOLOv5 network with the human body color images and the labeling files to obtain, after training, a back recognition model capable of recognizing the back region from the seventh cervical vertebra to the sacrum in an image;
s5, inputting the human body color image to be identified into the back identification model trained in the step S4, identifying a human body back area, intercepting the human body back area in the image, and filling other positions into black to obtain a back color image;
S6, using the human back color image obtained in step S5 as a template, extracting the back region of the depth image data according to the correspondence between the depth image and the color image to obtain a human back depth image; obtaining a back point cloud from the intrinsic and extrinsic parameters of the RGB-D camera and the human back depth image, reconstructing the three-dimensional shape of the human back from the back point cloud, then extracting the medical anatomical feature points of the human back, locating the spinous process points, extracting the cross sections through the spinous process points, and calculating the maximum ATR angle value; the specific implementation method for calculating the maximum ATR angle value comprises the following steps:
S61, calculating the mean curvature k1 and the Gaussian curvature k2 of the back of the human body from the three-dimensional back point cloud, and classifying the back surface into parabolic, convex, concave and saddle regions according to the curvature information: parabolic where k1=0 or k2=0; convex where k2<0; concave where k1>0; saddle where k2>0>k1; then labeling and locating the vertebra prominens, the sacrum, the left and right posterior superior iliac spines and the spinous process line by combining these features with the positions of the medical anatomical points of the back; the labeling rules are as follows: the vertebra prominens lies on the convex surface of the cervical region, i.e. where k2<0; the sacral point lies at the lowest concave surface of the buttocks, i.e. where k1>0; the left and right posterior superior iliac spines lie at the concave positions above the buttocks, i.e. where k1>0; each spinous process point is the position on the cross section through a spinal point where the left-right curvature difference is smallest, and the line connecting the spinous process points forms the spinous process line;
S62, extracting the three-dimensional back cross-section curves corresponding to the spinous process points of the 18 spinal points between the vertebra prominens and the L5 lumbar vertebra; on each cross-section curve, taking the spinous process point as the center, extracting a left and a right target point 20 units to the left and right, recording their three-dimensional positions, calculating the rotation angle of the back at that vertebra from the relative positions of the left and right target points, calculating the rotation angles of all 18 vertebrae on their cross sections in this way, and defining the maximum rotation angle as the maximum ATR angle value;
and S7, according to the maximum ATR angle value calculated in the step S6, making a classification standard, labeling a class label, inputting the back color image and the class label into an EfficientNet network for training, and obtaining a spine classification model capable of judging whether the spine is normal after training is completed.
2. The noninvasive scoliosis screening method based on back color images according to claim 1, wherein the specific implementation method of step S2 is as follows: labeling the human body part in the original color image in a point labeling format using Labelme software, naming the labeled area Person, and saving the coordinate information and name of each point in a labeling file; uploading all original color images and labeling files to the computing device to train the Mask-RCNN network; the data processing flow of Mask-RCNN is as follows: inputting the image into the built Mask-RCNN network structure and extracting image features with a convolutional neural network (CNN); generating N proposal windows for each image using the RPN; mapping the proposal windows onto the last convolutional feature map of the CNN; generating a fixed-size feature map for each RoI through the RoI Align layer; and finally classifying human body versus background with fully connected layers and regressing the positions of the labeling boxes.
3. The noninvasive scoliosis screening method based on back color images according to claim 1, wherein the specific implementation method of step S4 is as follows: tightly framing the back region from the seventh cervical vertebra to the sacrum in the human body color image using LabelImg software, naming the rectangular box Back, and saving the vertex coordinates and name of the labeling box in a labeling file in a specific format; uploading all human body color images and labeling files to the computing device to train the YOLOv5 network; the data processing flow of YOLOv5 is as follows: inputting the image into the built YOLOv5 network structure and extracting image features through the CSPDarknet53 and Focus structures; further improving the diversity and robustness of the features with the SPP module and the FPN+PAN module in the neck network; and finally outputting the labeling box by regression and the category of the framed target by classification.
4. The non-invasive scoliosis screening method based on back color images according to claim 1, wherein in the step S7, the classification criteria include the following two schemes:
scheme one: according to the maximum ATR angle value, the back color images are divided into two categories: when the maximum ATR angle value is smaller than 5 degrees, the spine in the current back image is considered free of abnormality or only slightly abnormal in posture, and only regular review and health education are needed; when the maximum ATR angle value is greater than or equal to 5 degrees, scoliosis is suspected, and further outpatient screening and instrument screening are needed so that intervention and treatment can be carried out in time; under this scheme, a back color image with a maximum ATR angle value smaller than 5 degrees is labeled normal, with label value 0; a back color image with a maximum ATR angle value greater than or equal to 5 degrees is labeled abnormal, with label value 1;
scheme two: according to the maximum ATR angle value, the back color images are divided into three categories: when the maximum ATR angle value is smaller than or equal to 4 degrees, the spine in the current back image is considered basically normal and good habits should be maintained; when the maximum ATR angle value is between 4 and 7 degrees, the spine is considered to carry some risk of lateral bending, but the risk is low and the course of the condition needs further observation and monitoring; when the maximum ATR angle value is greater than 7 degrees, the spine is considered to carry a high risk of lateral bending, and professional medical treatment is needed in time; under this scheme, a back color image with a maximum ATR angle value smaller than or equal to 4 degrees is labeled normal, with label value 0; a back color image with a maximum ATR angle value between 4 and 7 degrees is labeled low risk of scoliosis, with label value 1; a back color image with a maximum ATR angle value greater than 7 degrees is labeled high risk of scoliosis, with label value 2;
EfficientNet: the image is input into the built EfficientNet network structure, image features are extracted by the CNN, and the image category is output by the classification layer.
5. The non-invasive scoliosis screening system based on back color images according to any of claims 1-4, comprising the following modules:
an image acquisition module: for acquiring original RGB-D data of back images of human bodies of different regions, ages, sexes and degrees of lateral bending, and storing the data as depth images and color images to obtain one-to-one corresponding depth image and color image data sets;

a human body segmentation module: for training the Mask-RCNN network with the labeled original color images to obtain a human body segmentation model;

a human back recognition module: for training the YOLOv5 network with the labeled human body color images to obtain a back recognition model;

a spine evaluation module: for training the EfficientNet network with the back color images labeled according to the classification standard to obtain a spine classification model.
6. The non-invasive scoliosis screening system based on back color images of claim 5, further comprising the following modules:
an original data labeling module: for manually labeling the acquired original color images, wherein pixel-level labeling is adopted and the human body and the background are labeled along the contour of the human body;

a first training data storage module: for saving the original color images with the human body manually labeled as the first training data;

a human color data labeling module: for manually labeling the segmented human body color images, wherein a labeling box and a name are set at the back region between the seventh cervical vertebra and the sacrum;

a second training data storage module: for saving the human body color images with the back manually labeled as the second training data;

a classification label storage module: for saving, in an Excel file, the image name, the calculated maximum ATR angle value, and the class label values under the different schemes.
7. The non-invasive scoliosis screening system based on back color images of claim 6, further comprising the following modules:
a Mask-RCNN network training module: for training the Mask-RCNN network with the first training data;

a YOLOv5 network training module: for training the YOLOv5 network with the second training data;

a maximum ATR angle value calculation module: for calculating the maximum ATR angle value from the depth information of the human back;

an EfficientNet network training module: for training the EfficientNet network with the data in the classification label storage module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111629135.0A CN114287915B (en) | 2021-12-28 | 2021-12-28 | Noninvasive scoliosis screening method and system based on back color images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114287915A CN114287915A (en) | 2022-04-08 |
CN114287915B true CN114287915B (en) | 2024-03-05 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024033897A1 (en) * | 2022-08-11 | 2024-02-15 | Momentum Health Inc. | Method and system for determining spinal curvature |
CN115886787B (en) * | 2023-03-09 | 2023-06-02 | 深圳市第二人民医院(深圳市转化医学研究院) | Ground reaction force transformation method for disease screening, bone disease screening system and equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458831A (en) * | 2019-08-12 | 2019-11-15 | 深圳市智影医疗科技有限公司 | A kind of scoliosis image processing method based on deep learning |
CN111666890A (en) * | 2020-06-08 | 2020-09-15 | 平安科技(深圳)有限公司 | Spine deformation crowd identification method and device, computer equipment and storage medium |
CN112258516A (en) * | 2020-09-04 | 2021-01-22 | 温州医科大学附属第二医院、温州医科大学附属育英儿童医院 | Method for generating scoliosis image detection model |
CN112381757A (en) * | 2020-10-09 | 2021-02-19 | 温州医科大学附属第二医院、温州医科大学附属育英儿童医院 | System and method for measuring and calculating scoliosis Cobb angle through full-length X-ray film of spine based on artificial intelligence-image recognition |
CN112734757A (en) * | 2021-03-29 | 2021-04-30 | 成都成电金盘健康数据技术有限公司 | Spine X-ray image cobb angle measuring method |
CN113397485A (en) * | 2021-05-27 | 2021-09-17 | 上海交通大学医学院附属新华医院 | Scoliosis screening method based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5603859B2 (en) | Method for controlling an analysis system that automatically analyzes a digitized image of a side view of a target spine | |
CN114287915B (en) | Noninvasive scoliosis screening method and system based on back color images | |
US20210174505A1 (en) | Method and system for imaging and analysis of anatomical features | |
Rangayyan et al. | Improvement of sensitivity of breast cancer diagnosis with adaptive neighborhood contrast enhancement of mammograms | |
JP6170284B2 (en) | Image processing program, recording medium, image processing apparatus, and image processing method | |
Korez et al. | A deep learning tool for fully automated measurements of sagittal spinopelvic balance from X-ray images: performance evaluation | |
CN112258516B (en) | Method for generating scoliosis image detection model | |
CN106056537A (en) | Medical image splicing method and device | |
CN108309334B (en) | Data processing method of spine X-ray image | |
JP2008520344A (en) | Method for detecting and correcting the orientation of radiographic images | |
CN110503652B (en) | Method and device for determining relationship between mandible wisdom tooth and adjacent teeth and mandible tube, storage medium and terminal | |
CN110246580B (en) | Cranial image analysis method and system based on neural network and random forest | |
CN110223279B (en) | Image processing method and device and electronic equipment | |
CN111127430A (en) | Method and device for determining medical image display parameters | |
KR102461343B1 (en) | Automatic tooth landmark detection method and system in medical images containing metal artifacts | |
US20210271914A1 (en) | Image processing apparatus, image processing method, and program | |
CN115797730B (en) | Model training method and device and head shadow measurement key point positioning method and device | |
CN112365438B (en) | Pelvis parameter automatic measurement method based on target detection neural network | |
US20240046498A1 (en) | Joint angle determination under limited visibility | |
WO2020215485A1 (en) | Fetal growth parameter measurement method, system, and ultrasound device | |
CN114732425A (en) | Method and system for improving DR chest radiography imaging quality | |
CN114612389A (en) | Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion | |
Zhang et al. | A novel tool to provide predictable alignment data irrespective of source and image quality acquired on mobile phones: what engineers can offer clinicians | |
TWI759946B (en) | Spine Measurement and Status Assessment Methods | |
CN107424151A (en) | Readable storage medium storing program for executing and normotopia rabat automatic testing method, system, device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||