CN115239700A - Spine Cobb angle measurement method, device, equipment and storage medium
- Publication number: CN115239700A (application number CN202211005578.7A)
- Authority: CN (China)
- Prior art keywords: candidate prediction, training, image, vertebral, frame
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012 — Biomedical image inspection
- G06N3/08 — Neural networks; learning methods
- G06T7/11 — Region-based segmentation
- G06V10/22 — Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide detection or recognition
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/764 — Recognition or understanding using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Recognition or understanding using neural networks
- G06T2207/30012 — Biomedical image processing; Spine; Backbone
Abstract
The invention provides a spine Cobb angle measurement method, device, equipment and storage medium. An image to be measured is input into a spine positioning model to obtain the position coordinates of a plurality of target vertebral bodies, where the training data of the spine positioning model comprise a plurality of training images and the rotation parameters of the training vertebral bodies corresponding to each training image. The spinal Cobb angle in the image to be measured is then determined from the position coordinates of the target vertebral bodies. This improves the accuracy and efficiency of vertebral body detection and thereby the efficiency of spinal Cobb angle detection.
Description
Technical Field
The disclosure relates to the field of medical detection, and in particular to a spine Cobb angle measurement method, device, equipment and storage medium.
Background
The Cobb angle is an index for measuring the severity of scoliosis; it is the maximum bending angle of the laterally curved spine. In the prior art, Cobb angle measurement may be based on a two-stage method: the approximate region where the spine is located is first segmented by a segmentation model, and key-point detection is then performed on the segmented spine region. The whole process passes through two models, and the computation is cumbersome. Meanwhile, the forward (axis-aligned) detection boxes used in the prior art have large overlapping parts between adjacent vertebrae, so detection precision is low.
Disclosure of Invention
The present disclosure provides a spine Cobb angle measurement method, apparatus, device and storage medium to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a vertebral Cobb angle measurement method, comprising:
inputting the image to be detected into a spine positioning model to obtain position coordinates of a plurality of target vertebral bodies; the training data of the spine positioning model are a plurality of training images and rotation parameters of training vertebral bodies corresponding to the training images;
and determining the vertebral Cobb angle in the image to be detected according to the position coordinates of the target vertebral bodies.
In one embodiment, the rotation parameters of each training vertebral body corresponding to the plurality of training images are determined as follows: the minimum circumscribed rectangle of each training vertebral body in each training image and its corresponding coordinates are determined through the minAreaRect function in opencv, wherein the coordinates include the rotation angle.
In an embodiment, inputting the image to be measured into the spine positioning model to obtain the position coordinates of the plurality of target vertebral bodies includes:
inputting the image to be detected into a feature extraction layer of the spine positioning model to obtain a feature image;
inputting the feature image into a region candidate network layer, generating a plurality of preliminary prediction boxes with preset sizes in a sliding-window manner, and performing preliminary screening and position regression on the plurality of preliminary prediction boxes to obtain a plurality of candidate prediction boxes, wherein the preset sizes comprise a preset rotation angle and a preset height-to-width ratio;
inputting the plurality of candidate prediction boxes into a region of interest pooling layer to unify sizes of the plurality of candidate prediction boxes;
inputting the candidate prediction frames with uniform sizes into a classification output layer to determine the position coordinates and the classes of the target vertebral bodies.
In one embodiment, the spine positioning model is a Faster RCNN network model, and the loss function of the spine positioning model is:

$$L=\frac{1}{N_{cls}}\sum L_{cls}+\frac{1}{N_{reg}}\sum L_{reg}+L_{PIOU}\tag{1}$$

wherein L is the loss function of the spine positioning model; N_cls is the number of vertebral classes, 17; L_cls is the classification loss function; N_reg is the number of candidate prediction boxes; L_reg is the L_1-based regression loss function; L_PIOU is the angle regression loss function.
In one embodiment, the angle regression loss function L_PIOU is calculated as:

$$L_{PIOU}=\frac{1}{M}\sum_{(b,b')\in M}-\ln\frac{S_{b\cap b'}}{S_{b\cup b'}}\tag{2}$$

wherein b is a candidate prediction box; b' is the labeling box of the training vertebral body in the training image; S_{b∩b'} is the intersection region of the candidate prediction box and the labeling box; S_{b∪b'} is the union region of the candidate prediction box and the labeling box; and M is the number of corresponding (candidate prediction box, labeling box) pairs.
In one embodiment, S_{b∩b'} and S_{b∪b'} are calculated as:

$$S_{b\cap b'}=\sum_{p_{i,j}\in B}R(p_{i,j},b)\,R(p_{i,j},b')\tag{3}$$

$$S_{b\cup b'}=wh+w'h'-S_{b\cap b'}\tag{4}$$

wherein p_{i,j} is any pixel point in the region of the minimum forward circumscribed rectangle B that contains both the candidate prediction box and the labeling box of a corresponding pair in the regression; R(p_{i,j}, b) is a function judging whether p_{i,j} lies within the candidate prediction box b; R(p_{i,j}, b') is a function judging whether p_{i,j} lies within the labeling box b'; wh is the area of the candidate prediction box; and w'h' is the area of the labeling box;
wherein R(p_{i,j}, b) and R(p_{i,j}, b') are calculated as:

$$R(p_{i,j},b)=K\!\left(d^{w}_{i,j},w\right)K\!\left(d^{h}_{i,j},h\right)\tag{5}$$

$$K(d,s)=1-\frac{1}{1+e^{-k(d-s/2)}}\tag{6}$$

wherein w and h are the width and height of the candidate prediction box, respectively; w' and h' are the width and height of the labeling box, respectively; d^h_{i,j} is the perpendicular distance from the point p_{i,j} to the mid-line of the candidate prediction box, and d^w_{i,j} is the distance from the foot of that perpendicular on the mid-line to the center point; the corresponding distances with respect to the labeling box are used in R(p_{i,j}, b'); and k is a controllable parameter.
In one embodiment, determining the vertebral Cobb angle through the position coordinates of the target vertebral bodies comprises:
calculating the slope of each target vertebral body from its position coordinates; determining a plurality of lateral-curvature included angles by plotting the slopes of the target vertebral bodies as a line graph; and determining the maximum of the plurality of lateral-curvature included angles as the spinal Cobb angle.
According to a second aspect of the present disclosure, there is provided a spinal Cobb angle measurement device, comprising:
the position coordinate acquisition module is used for inputting the image to be detected into the spine positioning model so as to obtain the position coordinates of a plurality of target vertebral bodies; the training data of the spine positioning model are a plurality of training images and rotation parameters of training vertebral bodies corresponding to the training images;
and the angle determining module is used for determining the Cobb angle of the vertebra in the image to be detected according to the position coordinates of the target vertebral bodies.
In an implementation manner, the apparatus further includes a model training module, configured to determine the minimum circumscribed rectangle of each training vertebral body in each training image and the corresponding coordinates through the minAreaRect function in opencv, wherein the coordinates include the rotation angle.
In an implementation manner, the position coordinate obtaining module is specifically configured to:
inputting the image to be detected into a feature extraction layer of the spine positioning model to obtain a feature image; inputting the feature image into a region candidate network layer, generating a plurality of preliminary prediction boxes with preset sizes in a sliding-window manner, and performing preliminary screening and position regression on the plurality of preliminary prediction boxes to obtain a plurality of candidate prediction boxes, wherein the preset sizes comprise a preset rotation angle and a preset height-to-width ratio; inputting the plurality of candidate prediction boxes into a region-of-interest pooling layer to unify the sizes of the plurality of candidate prediction boxes; and inputting the candidate prediction boxes with unified sizes into a classification output layer to determine the position coordinates and classes of the plurality of target vertebral bodies.
In one embodiment, the spine positioning model is a Faster RCNN network model, and the loss function of the spine positioning model is:

$$L=\frac{1}{N_{cls}}\sum L_{cls}+\frac{1}{N_{reg}}\sum L_{reg}+L_{PIOU}\tag{1}$$

wherein L is the loss function of the spine positioning model; N_cls is the number of vertebral classes, 17; L_cls is the classification loss function; N_reg is the number of candidate prediction boxes; L_reg is the L_1-based regression loss function; L_PIOU is the angle regression loss function.
In one embodiment, the angle regression loss function L_PIOU is calculated as:

$$L_{PIOU}=\frac{1}{M}\sum_{(b,b')\in M}-\ln\frac{S_{b\cap b'}}{S_{b\cup b'}}\tag{2}$$

wherein b is a candidate prediction box; b' is the labeling box of the training vertebral body in the training image; S_{b∩b'} is the intersection region of the candidate prediction box and the labeling box; S_{b∪b'} is the union region of the candidate prediction box and the labeling box; and M is the number of corresponding (candidate prediction box, labeling box) pairs.
In one embodiment, S_{b∩b'} and S_{b∪b'} are calculated as:

$$S_{b\cap b'}=\sum_{p_{i,j}\in B}R(p_{i,j},b)\,R(p_{i,j},b')\tag{3}$$

$$S_{b\cup b'}=wh+w'h'-S_{b\cap b'}\tag{4}$$

wherein p_{i,j} is any pixel point in the region of the minimum forward circumscribed rectangle B that contains both the candidate prediction box and the labeling box of a corresponding pair in the regression; R(p_{i,j}, b) is a function judging whether p_{i,j} lies within the candidate prediction box b; R(p_{i,j}, b') is a function judging whether p_{i,j} lies within the labeling box b'; wh is the area of the candidate prediction box; and w'h' is the area of the labeling box;
wherein R(p_{i,j}, b) and R(p_{i,j}, b') are calculated as:

$$R(p_{i,j},b)=K\!\left(d^{w}_{i,j},w\right)K\!\left(d^{h}_{i,j},h\right)\tag{5}$$

$$K(d,s)=1-\frac{1}{1+e^{-k(d-s/2)}}\tag{6}$$

wherein w and h are the width and height of the candidate prediction box, respectively; w' and h' are the width and height of the labeling box, respectively; d^h_{i,j} is the perpendicular distance from the point p_{i,j} to the mid-line of the candidate prediction box, and d^w_{i,j} is the distance from the foot of that perpendicular on the mid-line to the center point; the corresponding distances with respect to the labeling box are used in R(p_{i,j}, b'); and k is a controllable parameter.
In an implementation manner, the angle determining module is specifically configured to:
calculating the slope of each target vertebral body from its position coordinates; determining a plurality of lateral-curvature included angles by plotting the slopes of the target vertebral bodies as a line graph; and determining the maximum of the plurality of lateral-curvature included angles as the spinal Cobb angle in the image to be detected.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
According to the method, the device, the equipment and the storage medium for measuring the vertebral Cobb angle, the image to be measured is input into the vertebral positioning model to obtain the position coordinates of a plurality of target vertebral bodies; the training data of the spine positioning model comprise a plurality of training images and rotation parameters of training vertebral bodies corresponding to the training images; and determining the vertebral Cobb angle in the image to be detected according to the position coordinates of the target vertebral bodies, so that the accuracy and efficiency of vertebral body detection are improved, and the vertebral Cobb angle detection efficiency is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1A illustrates a schematic flow chart of an implementation of a vertebral Cobb angle measurement method provided in an embodiment of the present disclosure;
fig. 1B illustrates a conventional spine labeling diagram provided in accordance with an embodiment of the present disclosure;
fig. 1C is a schematic diagram illustrating a conventional spinal Cobb angle measurement provided by an embodiment of the disclosure;
fig. 2A illustrates an implementation flow diagram of a vertebral Cobb angle measurement method provided by the second embodiment of the disclosure;
Fig. 2B shows a diagram of the positional relationship between a point p_{i,j} and a candidate prediction box, provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic structural diagram of a spinal Cobb angle measurement device according to an embodiment of the disclosure;
fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more apparent and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Example one
Fig. 1A is a flowchart of a spine Cobb angle measurement method provided in an embodiment of the present disclosure, which may be performed by a spine Cobb angle measurement apparatus provided in an embodiment of the present disclosure, and the apparatus may be implemented in software and/or hardware. The method specifically comprises the following steps:
and S110, inputting the image to be detected into the vertebral positioning model to obtain the position coordinates of a plurality of target vertebral bodies.
The training data of the spine positioning model are a plurality of training images and the rotation parameters of the training vertebral bodies corresponding to the training images. For example, one training image corresponds to 17 training vertebral bodies (the thoracic and lumbar vertebrae), and each training vertebral body corresponds to relevant position parameters that are used as its rotation parameters during training; the rotation parameters may include position information and a rotation angle.
The spine positioning model is a neural network model which is trained by taking a large number of training sets as training data and is used for identifying a target vertebral body in an image to be detected. The present embodiment does not limit the type of the neural network model, as long as the target vertebral body and the corresponding position coordinates in the image to be detected can be identified through training.
The image to be measured is an image used for identifying the target vertebral bodies, and may be, for example, a Digital Radiography (DR) image. Since the most severe lateral curvature of the spine generally occurs at the thoracic and lumbar vertebrae, the parameters for calculating the spinal Cobb angle are also derived from the thoracic and lumbar vertebrae; the spine positioning model in this embodiment is therefore configured so that the target vertebral bodies it learns to identify are only the thoracic and lumbar vertebrae of the spine.
Specifically, when the training set of the spine positioning model is prepared, the training images in the training set are preprocessed, and each spinal vertebral body in a training image is labeled as a training vertebral body using existing labeling software, as shown in fig. 1B; the minimum circumscribed rectangle of each training vertebral body and its corresponding coordinates are then determined through the minAreaRect function in opencv, wherein the coordinates include the rotation angle.
It should be noted that neural network models in the prior art use a forward detection box, i.e., an axis-aligned (upright) circumscribed rectangle. The minimum circumscribed rectangle in this embodiment is the minimum-area circumscribed rectangle and may be tilted, so the coordinate parameters obtained for a training vertebral body are (x, y, w, h, θ), where (x, y) are the coordinates of the center point of the training vertebral body, w and h are the width and height of the training vertebral body, and θ is the rotation angle of the training vertebral body, specifically its included angle with the horizontal line.
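As an illustration, the following is a minimal Python sketch of this labeling step, assuming each vertebral body annotation is available as a polygon of outline points (the function and variable names are hypothetical):

```python
import cv2
import numpy as np

def rotated_box_params(outline_points):
    """Return (x, y, w, h, theta) for one training vertebral body.

    outline_points: an N x 2 array of labeled points on the vertebral
    body outline. cv2.minAreaRect returns the minimum-area (possibly
    tilted) circumscribed rectangle as ((cx, cy), (w, h), angle).
    """
    (cx, cy), (w, h), theta = cv2.minAreaRect(
        np.asarray(outline_points, dtype=np.float32))
    return cx, cy, w, h, theta
```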
And S120, determining a vertebral Cobb angle in the image to be detected through the position coordinates of the target vertebral bodies.
For easier understanding of the Cobb angle, fig. 1C is a schematic diagram of a conventional spinal Cobb angle measurement provided in one embodiment of the present disclosure. As can be seen in fig. 1C, the intervertebral space on the convex side of the lateral curvature of the spine is wider, while the first vertebral body whose intervertebral space begins to widen on the concave side is not considered part of the curve, so the vertebra adjacent to it is taken as the end vertebra of the curve. The Cobb angle is therefore the included angle between the upper edge of the upper end vertebra of the segment with the maximum lateral curvature and the lower edge of the lower end vertebra. In the measurement, a transverse line is drawn along the upper edge of the upper end vertebra, a transverse line is likewise drawn along the lower edge of the lower end vertebra, and the included angle formed by the perpendiculars of these two transverse lines is measured to obtain the Cobb angle.
In the disclosed embodiment, determining the vertebral Cobb angle from the position coordinates of the plurality of target vertebral bodies comprises the following steps: calculating the slope of each target vertebral body from its position coordinates; determining a plurality of lateral-curvature included angles by plotting the slopes of the target vertebral bodies as a line graph; and determining the maximum of the plurality of lateral-curvature included angles as the spinal Cobb angle.
Specifically, after each vertebra (target vertebral body) is positioned by the spine positioning model, the slopes of the upper and lower vertebrae can be calculated from the position coordinates of the target vertebral bodies. Since the detection box of each target vertebral body is a rectangle, each detection box corresponds to one slope. A line graph is drawn with plotting software, in which each pair of adjacent peak and valley represents one segment of lateral curvature of the vertebrae. For each laterally curved segment, taking the upper-edge slope k_1 of its upper vertebra and the lower-edge slope k_2 of its lower vertebra, the included angle θ between the two can be calculated as:

$$\theta=\arctan\left|\frac{k_1-k_2}{1+k_1k_2}\right|\tag{7}$$

It should be noted that, when the slope of one of the two lines is 0, the included angle is:

$$\theta=\arctan k\tag{8}$$

where k is the slope of the other line, and the value of k is not 0.
Finally, the present embodiment can take the maximum value as the final Cobb angle based on these calculated angles.
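To make this concrete, a short Python sketch of the angle calculation under formulas (7) and (8); the list of per-segment slope pairs is a hypothetical input:

```python
import math

def included_angle_deg(k1, k2):
    """Included angle between two lines with slopes k1 and k2, in degrees."""
    if k1 == 0 or k2 == 0:
        return math.degrees(math.atan(abs(k1 + k2)))   # formula (8): one slope is 0
    denom = 1 + k1 * k2
    if denom == 0:
        return 90.0                                    # the two lines are perpendicular
    return math.degrees(math.atan(abs((k1 - k2) / denom)))  # formula (7)

def cobb_angle_deg(slope_pairs):
    """slope_pairs: one (k1, k2) per laterally curved segment, where k1 is
    the upper-edge slope of the upper vertebra and k2 the lower-edge slope
    of the lower vertebra; the maximum included angle is the Cobb angle."""
    return max(included_angle_deg(k1, k2) for k1, k2 in slope_pairs)
```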
The spine Cobb angle measurement method provided by the disclosure inputs an image to be measured into a spine positioning model to obtain the position coordinates of a plurality of target vertebral bodies, and determines the spinal Cobb angle from those position coordinates. Compared with ordinary forward (axis-aligned) detection of the vertebrae, this rotated positioning is more accurate, improving the accuracy and efficiency of vertebral body detection and thereby the efficiency and accuracy of spinal Cobb angle detection.
Example two
Fig. 2A is a flowchart of a spine Cobb angle measurement method provided in the second embodiment of the present disclosure. Building on the foregoing embodiment, inputting the image to be measured into the spine positioning model includes: inputting the image to be detected into a feature extraction layer of the spine positioning model to obtain a feature image; inputting the feature image into a region candidate network layer, generating a plurality of preliminary prediction boxes at the center of each sliding window according to a preset rotation angle and a preset height-to-width ratio, and performing preliminary screening and position regression on the plurality of preliminary prediction boxes to obtain a plurality of candidate prediction boxes; inputting the candidate prediction boxes into the region-of-interest pooling layer to unify their sizes; and inputting the candidate prediction boxes with unified sizes into a classification output layer to determine the position coordinates and classes of the plurality of target vertebral bodies. The method specifically comprises the following steps:
s210, inputting the image to be detected into a feature extraction layer of the spine positioning model to obtain a feature image.
In an embodiment of the present disclosure, the spine positioning model employs the Faster RCNN network model. The Faster RCNN network model may include: a feature extraction layer, a region candidate network layer, a region-of-interest pooling layer and a classification output layer.
The feature extraction layer (conv layers) may be a neural network formed by combining different groups of convolutional layers, activation layers and pooling layers, and is used for extracting feature images. Specifically, in the embodiment, the image to be detected is input into the feature extraction layer of the spine positioning model, that is, the feature images (feature maps) of the image to be detected are extracted through a group of backbone networks.
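As an illustration, a minimal PyTorch sketch of one convolution-activation-pooling group of the kind such a feature extraction layer combines; the channel sizes are arbitrary assumptions, not values from the patent:

```python
import torch.nn as nn

# one conv-activation-pooling group; a backbone stacks several such groups
feature_block = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # halves the spatial resolution
)
```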
S220, inputting the characteristic image into the regional candidate network layer, generating a plurality of preliminary prediction frames with preset sizes in a sliding window mode, and performing preliminary screening and homing on the plurality of preliminary prediction frames to obtain a plurality of candidate prediction frames.
The preset sizes comprise a preset rotation angle, a preset height-to-width ratio and a preset scale.
The region candidate network layer (Region Proposal Network, RPN) is used to generate candidate prediction boxes together with category scores (a binary classification of whether a box contains the object of interest).
Specifically, after the feature image is input into the region candidate network layer, preliminary prediction boxes with different sizes and different rotation angles are generated on the feature image in a sliding-window manner (that is, at each window center, a number of preliminary candidate anchors with different scales, height-to-width ratios and rotation angles are generated). A preliminary binary object classification then judges whether each preliminary detection box is a target vertebral body or background, a preliminary position regression operation is performed on the preliminary prediction boxes judged to be target vertebral bodies, and these boxes are input into the next stage as candidate prediction boxes.
It should be noted that, compared with the forward rectangular boxes input by the conventional Faster RCNN network, rotated rectangular boxes (the training images and corresponding rotation parameters) are input during model training in this embodiment, so angle information must also be added when generating the preliminary candidate anchors (preliminary prediction boxes). Based on prior knowledge, the preset rotation angle range used when randomly generating the preliminary prediction boxes can be set to [0, 15, 30, 45, 60, 120, 135, 150, 165] degrees, and the preset height-to-width ratio range can be set to [0.25, 0.5, 1]. The values of the preset rotation angle and the preset aspect ratio recited in this embodiment are only an example, and this embodiment does not limit their specific values.
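For illustration, a small Python sketch of rotated-anchor generation under these presets; the function name, base size and constant-area convention are assumptions, not the patent's exact procedure:

```python
import numpy as np

PRESET_ANGLES = [0, 15, 30, 45, 60, 120, 135, 150, 165]  # degrees
PRESET_RATIOS = [0.25, 0.5, 1.0]                         # height-to-width ratios

def rotated_anchors_at(cx, cy, base_size=32.0):
    """Generate (cx, cy, w, h, theta) anchors at one sliding-window center."""
    anchors = []
    for ratio in PRESET_RATIOS:
        # keep the anchor area roughly constant across aspect ratios
        w = base_size / np.sqrt(ratio)
        h = base_size * np.sqrt(ratio)
        for theta in PRESET_ANGLES:
            anchors.append((cx, cy, w, h, theta))
    return anchors  # 27 rotated anchors per center (3 ratios x 9 angles)
```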
And S230, inputting the candidate prediction boxes into the interest domain pooling layer to unify the sizes of the candidate prediction boxes.
The region-of-interest pooling layer (Region of Interest, RoI, Pooling) uses a pooling method to bring inputs of different sizes to a fixed size, unifying their dimensions.
Specifically, since regions of various sizes and aspect ratios are randomly generated when the candidate prediction boxes are initially produced, the boxes are regressed to the same size at the RoI Pooling layer.
S240, inputting the candidate prediction frames with the uniform size into a classification output layer to determine the position coordinates and the classes of the target vertebral bodies.
The classification output layer (Classification and Regression) performs specific classification on the input candidate prediction boxes and carries out a higher-precision regression positioning of the detection boxes. The classification is used to mark and distinguish the target vertebral bodies; for example, it can distinguish not only thoracic vertebrae from lumbar vertebrae but also which specific thoracic or lumbar vertebra a box corresponds to, so that the target vertebral body can be labeled at a later stage.
Specifically, the classification output layer classifies the input candidate prediction boxes into specific classes, for example the first to twelfth thoracic vertebrae and the first to fifth lumbar vertebrae, and further corrects the positions of the candidate prediction boxes with reference to the labeling boxes of the training vertebral bodies in the training images, obtaining the final accurate position coordinates and classes of the target vertebral bodies.
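Putting the four stages together, the following hypothetical Python sketch shows the overall forward pass; the attribute names backbone, rpn, roi_pool and head are placeholders for the four layers described above, not an actual Faster RCNN API:

```python
def locate_vertebrae(image, model):
    """Hypothetical forward pass of the spine positioning model."""
    feats = model.backbone(image)               # feature extraction layer
    candidates = model.rpn(feats)               # rotated candidate boxes (x, y, w, h, theta)
    pooled = model.roi_pool(feats, candidates)  # unify candidate box sizes
    boxes, classes = model.head(pooled)         # refined coordinates and vertebra classes
    return boxes, classes
```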
Generally, the loss function of the Faster RCNN network model during training includes a classification loss function and a regression loss function, but the spine positioning model of this embodiment adds an angle-related loss term on that basis, so the total loss function of the spine positioning model in this embodiment is:

$$L=\frac{1}{N_{cls}}\sum L_{cls}+\frac{1}{N_{reg}}\sum L_{reg}+L_{PIOU}\tag{1}$$

wherein L is the loss function of the spine positioning model; N_cls is the number of vertebral classes, 17 (the thoracic and lumbar vertebrae total 17 bones, so this is a 17-class problem); L_cls is the classification loss function; N_reg is the number of candidate prediction boxes; L_reg is the L_1-based regression loss function; L_PIOU is the angle regression loss function.
The classification loss function L_cls is a cross-entropy loss function used to improve classification accuracy. In addition, five parameters (x, y, w, h, θ) are needed to regress a candidate prediction box onto the real labeling box, and a unique rectangular box, i.e., the target vertebral body, is determined by these five parameters; the more accurately the five parameters are regressed, the more accurate the position of the target vertebral body, hence the use of a regression loss function.
Meanwhile, because the L_1-based regression loss function is not sensitive enough for objects with a large height-width difference (vertebrae with a large height-to-width ratio, such as the penultimate vertebra in fig. 1B), a loss term based on the IOU (intersection over union) needs to be added. And because the area of the plain intersection and union regions is not differentiable, the PIOU is computed instead, i.e., the ratio of the number of pixels in the intersection region to the number of pixels in the union region.
In the disclosed embodiment, the angle regression loss function L_PIOU is calculated as:

$$L_{PIOU}=\frac{1}{M}\sum_{(b,b')\in M}-\ln\frac{S_{b\cap b'}}{S_{b\cup b'}}\tag{2}$$

wherein b is a candidate prediction box; b' is the labeling box of the training vertebral body in the training image; S_{b∩b'} is the intersection region of the candidate prediction box and the labeling box; S_{b∪b'} is the union region of the candidate prediction box and the labeling box; and M is the number of corresponding (candidate prediction box, labeling box) pairs.
In the disclosed embodiments, S_{b∩b'} and S_{b∪b'} are calculated as:

$$S_{b\cap b'}=\sum_{p_{i,j}\in B}R(p_{i,j},b)\,R(p_{i,j},b')\tag{3}$$

$$S_{b\cup b'}=wh+w'h'-S_{b\cap b'}\tag{4}$$

wherein p_{i,j} is any pixel point in the region of the minimum forward circumscribed rectangle B that contains both the candidate prediction box and the labeling box of a corresponding pair in the regression; R(p_{i,j}, b) is a function judging whether p_{i,j} lies within the candidate prediction box b; R(p_{i,j}, b') is a function judging whether p_{i,j} lies within the labeling box b'; wh is the area of the candidate prediction box; and w'h' is the area of the labeling box;
wherein R(p_{i,j}, b) and R(p_{i,j}, b') are calculated as:

$$R(p_{i,j},b)=K\!\left(d^{w}_{i,j},w\right)K\!\left(d^{h}_{i,j},h\right)\tag{5}$$

$$K(d,s)=1-\frac{1}{1+e^{-k(d-s/2)}}\tag{6}$$

wherein w and h are the width and height of the candidate prediction box, respectively; w' and h' are the width and height of the labeling box, respectively; d^h_{i,j} is the perpendicular distance from the point p_{i,j} to the mid-line of the candidate prediction box, and d^w_{i,j} is the distance from the foot of that perpendicular on the mid-line to the center point; the corresponding distances with respect to the labeling box are used in R(p_{i,j}, b'); k is a controllable parameter, which may be set to 8, for example.
As shown in fig. 2B, fig. 2B is a diagram of the positional relationship between a point p_{i,j} and the candidate prediction box provided by the embodiment of the disclosure; the positional relationship between p_{i,j} and the labeling box is likewise the same as in fig. 2B, differing only in the letter marks, so it is not repeated. Here c is the center of the candidate prediction box, the dotted line is the mid-line passing through the center c, d^h_{i,j} is the perpendicular distance from the point p_{i,j} to this mid-line, and d^w_{i,j} is the distance from the foot of the perpendicular on the mid-line to the center point.
In addition, the purpose of computing R(p_{i,j}, b) and R(p_{i,j}, b') is to judge whether the point p_{i,j} lies within the candidate prediction box and within the real labeling box, respectively. The candidate pixel points p_{i,j} are selected within the minimum forward circumscribed rectangle B rather than over the whole training image, which greatly reduces the amount of computation. Taking R(p_{i,j}, b) as an example: it judges whether the point p_{i,j} is within the candidate prediction box b; if so, the function value is close to 1, and if not, the function value is close to 0. Thus, when the point p_{i,j} is in box b, R(p_{i,j}, b) is approximately 1, and otherwise it is approximately 0.
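For illustration, the following NumPy sketch computes this pixel-based IOU under formulas (2) to (6) as reconstructed above; the box representation (cx, cy, w, h, theta in degrees), the fixed pixel grid standing in for rectangle B, and k = 8 are assumptions:

```python
import numpy as np

def soft_inside(points, box, k=8.0):
    """R(p, b): soft indicator that each pixel lies inside rotated box b."""
    cx, cy, w, h, theta = box
    t = np.deg2rad(theta)
    dx, dy = points[:, 0] - cx, points[:, 1] - cy
    # distances from each pixel to the box's two mid-lines, in the box frame
    d_w = np.abs(dx * np.cos(t) + dy * np.sin(t))
    d_h = np.abs(-dx * np.sin(t) + dy * np.cos(t))
    kernel = lambda d, s: 1.0 - 1.0 / (1.0 + np.exp(-k * (d - s / 2.0)))   # eq. (6)
    return kernel(d_w, w) * kernel(d_h, h)                                 # eq. (5)

def piou_loss(pred_box, gt_box, grid_size=256, k=8.0):
    """-ln(PIoU) for one (candidate box, labeling box) pair."""
    # pixel grid standing in for the minimum forward circumscribed rectangle B
    xs, ys = np.arange(grid_size), np.arange(grid_size)
    pts = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2).astype(float)
    r_b = soft_inside(pts, pred_box, k)
    r_g = soft_inside(pts, gt_box, k)
    s_inter = np.sum(r_b * r_g)                                            # eq. (3)
    s_union = pred_box[2] * pred_box[3] + gt_box[2] * gt_box[3] - s_inter  # eq. (4)
    return -np.log(np.maximum(s_inter, 1e-6) / np.maximum(s_union, 1e-6))  # eq. (2)
```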
And S250, determining a vertebral Cobb angle in the image to be detected according to the position coordinates of the target vertebral bodies.
The embodiment of the disclosure uses rotation detection, which reduces the overlap of detection boxes between adjacent vertebral bodies as much as possible, so the vertebrae are positioned more accurately than with ordinary forward detection. Meanwhile, the IOU-based loss term added to the loss function optimizes the detection of vertebral bodies with a large height-width difference, positioning such vertebral bodies more accurately.
Example three
Fig. 3 is a schematic structural diagram of a vertebral Cobb angle measuring device provided in an embodiment of the present disclosure, and the device specifically includes:
a position coordinate obtaining module 310, configured to input the image to be detected into the spine positioning model to obtain position coordinates of a plurality of target vertebral bodies; the training data of the spine positioning model comprise a plurality of training images and rotation parameters of each training vertebral body corresponding to the training images;
the angle determining module 320 is configured to determine a vertebral Cobb angle in the image to be measured according to the position coordinates of the multiple target vertebral bodies.
In an embodiment, the apparatus further includes a model training module for determining the minimum circumscribed rectangle of each training vertebral body in each training image and the corresponding coordinates through the minAreaRect function in opencv, wherein the coordinates include the rotation angle.
In an implementation, the position coordinate obtaining module 310 is specifically configured to:
inputting an image to be detected into a feature extraction layer of a spine positioning model to obtain a feature image;
inputting the feature image into a region candidate network layer, generating a plurality of preliminary prediction boxes with preset sizes in a sliding-window manner, and performing preliminary screening and position regression on the plurality of preliminary prediction boxes to obtain a plurality of candidate prediction boxes, wherein the preset sizes comprise a preset rotation angle and a preset height-to-width ratio; inputting the plurality of candidate prediction boxes into the region-of-interest pooling layer to unify their sizes; and inputting the candidate prediction boxes with unified sizes into a classification output layer to determine the position coordinates and classes of the plurality of target vertebral bodies.
In one embodiment, the spine positioning model is a Faster RCNN network model, and the loss function of the spine positioning model is:

$$L=\frac{1}{N_{cls}}\sum L_{cls}+\frac{1}{N_{reg}}\sum L_{reg}+L_{PIOU}\tag{1}$$

wherein L is the loss function of the spine positioning model; N_cls is the number of vertebral classes, 17; L_cls is the classification loss function; N_reg is the number of candidate prediction boxes; L_reg is the L_1-based regression loss function; L_PIOU is the angle regression loss function.
In one possible embodiment, the angle regression loss function L_PIOU is calculated as:

$$L_{PIOU}=\frac{1}{M}\sum_{(b,b')\in M}-\ln\frac{S_{b\cap b'}}{S_{b\cup b'}}\tag{2}$$

wherein b is a candidate prediction box; b' is the labeling box of the training vertebral body in the training image; S_{b∩b'} is the intersection region of the candidate prediction box and the labeling box; S_{b∪b'} is the union region of the candidate prediction box and the labeling box; and M is the number of corresponding (candidate prediction box, labeling box) pairs.
In one embodiment, S_{b∩b'} and S_{b∪b'} are calculated as:

$$S_{b\cap b'}=\sum_{p_{i,j}\in B}R(p_{i,j},b)\,R(p_{i,j},b')\tag{3}$$

$$S_{b\cup b'}=wh+w'h'-S_{b\cap b'}\tag{4}$$

wherein p_{i,j} is any pixel point in the region of the minimum forward circumscribed rectangle B that contains both the candidate prediction box and the labeling box of a corresponding pair in the regression; R(p_{i,j}, b) is a function judging whether p_{i,j} lies within the candidate prediction box b; R(p_{i,j}, b') is a function judging whether p_{i,j} lies within the labeling box b'; wh is the area of the candidate prediction box; and w'h' is the area of the labeling box;
wherein R(p_{i,j}, b) and R(p_{i,j}, b') are calculated as:

$$R(p_{i,j},b)=K\!\left(d^{w}_{i,j},w\right)K\!\left(d^{h}_{i,j},h\right)\tag{5}$$

$$K(d,s)=1-\frac{1}{1+e^{-k(d-s/2)}}\tag{6}$$

wherein w and h are the width and height of the candidate prediction box, respectively; w' and h' are the width and height of the labeling box, respectively; d^h_{i,j} is the perpendicular distance from the point p_{i,j} to the mid-line of the candidate prediction box, and d^w_{i,j} is the distance from the foot of that perpendicular on the mid-line to the center point; the corresponding distances with respect to the labeling box are used in R(p_{i,j}, b'); and k is a controllable parameter.
In an implementation, the angle determining module 320 is specifically configured to:
calculating the slope of each target vertebral body from its position coordinates; determining a plurality of lateral-curvature included angles by plotting the slopes of the target vertebral bodies as a line graph; and determining the maximum of the plurality of lateral-curvature included angles as the spinal Cobb angle in the image to be detected.
Example four
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
FIG. 4 shows a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the device 400 comprises a computing unit 401, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A spinal Cobb angle measurement method, comprising:
inputting the image to be detected into a vertebral positioning model to obtain position coordinates of a plurality of target vertebral bodies; the training data of the spine positioning model comprise a plurality of training images and rotation parameters of training vertebral bodies corresponding to the training images;
and determining the vertebral Cobb angle in the image to be detected according to the position coordinates of the target vertebral bodies.
2. The method of claim 1, wherein the rotation parameters of each training vertebral body corresponding to the plurality of training images are determined by:
and determining the minimum circumscribed rectangle of each training vertebral body in each training image and the corresponding coordinates through the minAreaRect function in opencv, wherein the coordinates comprise the rotation angle.
3. The method according to claim 2, wherein inputting the image to be measured into a vertebral positioning model to obtain position coordinates of a plurality of target vertebral bodies comprises:
inputting the image to be detected into a feature extraction layer of the spine positioning model to obtain a feature image;
inputting the feature image into a region candidate network layer, generating a plurality of preliminary prediction boxes with preset sizes in a sliding-window manner, and performing preliminary screening and position regression on the plurality of preliminary prediction boxes to obtain a plurality of candidate prediction boxes, wherein the preset sizes comprise a preset rotation angle and a preset height-to-width ratio;
inputting the plurality of candidate prediction boxes into a region of interest pooling layer to unify sizes of the plurality of candidate prediction boxes;
inputting the candidate prediction boxes with uniform sizes into a classification output layer to determine the position coordinates and the classes of the target vertebral bodies.
4. The method according to claim 3, wherein the spine localization model is a Faster RCNN network model with a loss function of:
wherein L is a loss function of the spine location model, N cls Is a vertebral number of 17; l is cls Is a classification loss function; n is a radical of reg The number of candidate prediction boxes; l is reg Is of type L 1 The regression loss function of (1); l is PIOU Is an angular regression loss function.
5. The method of claim 4, wherein the angular regression loss function L_PIOU is calculated as:
L_PIOU = −(1/M) Σ_{(b,b′)} ln(S_{b∩b′} / S_{b∪b′})
wherein b is a candidate prediction box; b′ is the labeling box of the training vertebral body in the training image; S_{b∩b′} is the intersection area of the candidate prediction box and the labeling box; S_{b∪b′} is the union area of the candidate prediction box and the labeling box; and M is the number of matched pairs of candidate prediction boxes and labeling boxes, the sum running over all matched pairs (b, b′).
6. The method of claim 5, wherein S_{b∩b′} and S_{b∪b′} are calculated as:
S_{b∩b′} = Σ_{p_{i,j}∈B} R(p_{i,j}, b) · R(p_{i,j}, b′)
S_{b∪b′} = wh + w′h′ − S_{b∩b′}
wherein p_{i,j} is any pixel point in B, the minimum axis-aligned circumscribed rectangle enclosing a matched pair of candidate prediction box and labeling box; R(p_{i,j}, b) is a function judging whether p_{i,j} falls within the candidate prediction box b; R(p_{i,j}, b′) is a function judging whether p_{i,j} falls within the labeling box b′; wh is the area of the candidate prediction box; and w′h′ is the area of the labeling box;
wherein R(p_{i,j}, b) and R(p_{i,j}, b′) are calculated as:
R(p_{i,j}, b) = K(d^w_{i,j}, w) · K(d^h_{i,j}, h), where K(d, s) = 1 − 1/(1 + e^{−k(d − s/2)})
and R(p_{i,j}, b′) is obtained analogously with respect to the labeling box b′;
wherein w and h are respectively the width and height of the candidate prediction box; w′ and h′ are respectively the width and height of the labeling box; d^w_{i,j} and d^h_{i,j} are the distances from the point p_{i,j} to the center lines of the candidate prediction box along its width and height directions, respectively; and k is a controllable parameter that adjusts the sharpness of the judgment (a computational sketch follows the claims).
7. The method according to claim 6, wherein determining the spinal Cobb angle in the image to be detected according to the position coordinates of the plurality of target vertebral bodies comprises:
calculating the slope of each target vertebral body according to its position coordinates;
determining a plurality of lateral-bending vertebral body included angles by plotting the slopes of the target vertebral bodies as a line graph; and
determining the maximum of the plurality of lateral-bending included angles as the spinal Cobb angle in the image to be detected (a computational sketch follows the claims).
8. A spinal Cobb angle measurement device, comprising:
the position coordinate acquisition module is used for inputting the image to be detected into the spine positioning model to obtain the position coordinates of a plurality of target vertebral bodies, wherein the training data of the spine positioning model comprise a plurality of training images and rotation parameters of the training vertebral bodies corresponding to the training images;
and the angle determining module is used for determining the spinal Cobb angle in the image to be detected according to the position coordinates of the plurality of target vertebral bodies.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
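The rotation-parameter labeling step of claim 2 relies on OpenCV's minAreaRect. Below is a minimal sketch of that step, assuming a binary mask of a single training vertebral body is available; the file name and the mask-based workflow are illustrative assumptions, not details from the patent.

```python
# Hedged sketch of claim 2's labeling step: derive the rotated
# minimum-area bounding rectangle (and thus the rotation angle) of one
# training vertebral body from a binary mask. The mask source is an
# illustrative assumption.
import cv2

mask = cv2.imread("vertebra_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)  # largest blob = the vertebral body

# minAreaRect returns ((cx, cy), (w, h), angle): center, size, and the
# rotation angle that serves as the training label.
(cx, cy), (w, h), angle = cv2.minAreaRect(contour)
print(f"center=({cx:.1f}, {cy:.1f}), size=({w:.1f}, {h:.1f}), angle={angle:.1f}")
```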
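Claim 3's region proposal network generates preliminary prediction boxes of preset sizes, including preset rotation angles and height-to-width ratios. A minimal sketch of such rotated-anchor generation for one sliding-window position follows; the concrete scales, ratios, and angles are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of rotated-anchor generation for one feature-map location,
# in the spirit of claim 3. All preset values are illustrative assumptions.
import itertools
import numpy as np

SCALES = (32, 64, 128)                      # anchor side lengths in pixels
ASPECT_RATIOS = (0.5, 1.0, 2.0)             # preset height-to-width ratios r = h / w
ANGLES = np.deg2rad([-60, -30, 0, 30, 60])  # preset rotation angles

def anchors_at(cx, cy):
    """Return (cx, cy, w, h, theta) anchors centered at one location."""
    out = []
    for s, r, t in itertools.product(SCALES, ASPECT_RATIOS, ANGLES):
        w = s / np.sqrt(r)  # keep area close to s**2 while varying h / w
        h = s * np.sqrt(r)
        out.append((cx, cy, w, h, t))
    return out

print(len(anchors_at(100, 100)))  # 3 scales x 3 ratios x 5 angles = 45 anchors
```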
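Claims 4 to 6 describe a PIoU-style angular regression loss, consistent with the cited non-patent reference (Zhiming Chen et al., arXiv:2007.09584v1). The sketch below computes L_PIOU for matched box pairs via the pixel-wise kernel of claim 6; the grid-sampling scheme, the kernel steepness k, and the reconstructed form of the overall loss in claim 4 are assumptions rather than verbatim patent details.

```python
# Hedged sketch of the PIoU-style angular regression loss of claims 5-6.
# Boxes are (cx, cy, w, h, theta). Grid sampling and k are assumptions.
import numpy as np

def kernel(d, s, k=10.0):
    # K(d, s) = 1 - 1 / (1 + exp(-k (d - s/2))): close to 1 for points
    # inside the box extent (d < s/2), close to 0 outside; k controls the
    # sharpness. The clip only avoids overflow warnings in exp.
    return 1.0 - 1.0 / (1.0 + np.exp(-np.clip(k * (d - s / 2.0), -50, 50)))

def box_kernel(px, py, box, k=10.0):
    # R(p, b) of claim 6: project the pixel onto the box's width and
    # height axes, then multiply the two directional kernels.
    cx, cy, w, h, theta = box
    dx, dy = px - cx, py - cy
    dw = np.abs(dx * np.cos(theta) + dy * np.sin(theta))    # along width axis
    dh = np.abs(-dx * np.sin(theta) + dy * np.cos(theta))   # along height axis
    return kernel(dw, w, k) * kernel(dh, h, k)

def corners(box):
    # Four corner points of a rotated box, for bounding-region computation.
    cx, cy, w, h, t = box
    c, s = np.cos(t), np.sin(t)
    pts = np.array([[w/2, h/2], [w/2, -h/2], [-w/2, h/2], [-w/2, -h/2]])
    return pts @ np.array([[c, s], [-s, c]]) + np.array([cx, cy])

def piou_loss(pairs, k=10.0):
    # pairs: matched (candidate prediction box, labeling box) tuples.
    # L_PIOU = -(1/M) * sum over pairs of ln(S_inter / S_union).
    losses = []
    for b, bp in pairs:
        # B: smallest axis-aligned rectangle covering both boxes.
        all_pts = np.vstack([corners(b), corners(bp)])
        x0, y0 = np.floor(all_pts.min(axis=0)).astype(int)
        x1, y1 = np.ceil(all_pts.max(axis=0)).astype(int)
        xs, ys = np.meshgrid(np.arange(x0, x1 + 1), np.arange(y0, y1 + 1))
        s_inter = np.sum(box_kernel(xs, ys, b, k) * box_kernel(xs, ys, bp, k))
        s_union = b[2] * b[3] + bp[2] * bp[3] - s_inter  # wh + w'h' - S_inter
        losses.append(-np.log(s_inter / s_union + 1e-9))
    return float(np.mean(losses))

# Example: a candidate box 10 degrees off its labeling box.
print(piou_loss([((50, 50, 40, 20, 0.0), (50, 50, 40, 20, np.deg2rad(10)))]))
```

Because the kernel is differentiable in the box parameters, this pixel-wise formulation lets the rotation angle receive gradients directly through the overlap area, which is the stated motivation for using it as the angular regression term.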
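Claim 7 derives the Cobb angle from per-vertebra slopes. The following is a minimal sketch, assuming the tilt of each target vertebral body has already been obtained (e.g., from the rotated-box angle, or from two endplate corner coordinates as shown); the inflection-finding via a line graph is simplified here to taking the largest pairwise tilt difference.

```python
# Hedged sketch of claim 7: Cobb angle as the largest included angle
# between two vertebral tilts. Tilt extraction from corner coordinates
# is an illustrative assumption.
import numpy as np

def vertebra_tilt(p_left, p_right):
    # Tilt (degrees) of one vertebral body from two endplate corners.
    (x0, y0), (x1, y1) = p_left, p_right
    return np.degrees(np.arctan2(y1 - y0, x1 - x0))

def cobb_angle(tilts_deg):
    # tilts_deg: per-vertebra tilt angles ordered from top to bottom.
    t = np.asarray(tilts_deg, dtype=float)
    diffs = np.abs(t[:, None] - t[None, :])      # all pairwise included angles
    i, j = np.unravel_index(np.argmax(diffs), diffs.shape)
    return diffs[i, j], tuple(sorted((i, j)))    # angle and end-vertebra indices

tilts = [2.0, 8.0, 15.0, 10.0, -1.0, -12.0, -6.0]  # toy tilt series
angle, ends = cobb_angle(tilts)
print(f"Cobb angle = {angle:.1f} degrees between vertebrae {ends}")
```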
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211005578.7A CN115239700A (en) | 2022-08-22 | 2022-08-22 | Spine Cobb angle measurement method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115239700A (en) | 2022-10-25
Family
ID=83680854
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211005578.7A Pending CN115239700A (en) | 2022-08-22 | 2022-08-22 | Spine Cobb angle measurement method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115239700A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415291A (en) * | 2019-08-07 | 2019-11-05 | 清华大学 | Image processing method and relevant device |
CN111047572A (en) * | 2019-12-10 | 2020-04-21 | 南京安科医疗科技有限公司 | Automatic spine positioning method in medical image based on Mask RCNN |
WO2021114622A1 (en) * | 2020-06-08 | 2021-06-17 | 平安科技(深圳)有限公司 | Spinal-column curvature measurement method, apparatus, computer device, and storage medium |
CN112347994A (en) * | 2020-11-30 | 2021-02-09 | 四川长虹电器股份有限公司 | Invoice image target detection and angle detection method based on deep learning |
CN113850763A (en) * | 2021-09-06 | 2021-12-28 | 中山大学附属第一医院 | Method, device, equipment and medium for measuring vertebral column Cobb angle |
CN114078120A (en) * | 2021-11-22 | 2022-02-22 | 北京欧应信息技术有限公司 | Method, apparatus and medium for detecting scoliosis |
Non-Patent Citations (1)
Title |
---|
Zhiming Chen et al., "PIoU Loss: Towards Accurate Oriented Object Detection in Complex Environments", arXiv:2007.09584v1 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024183380A1 (en) * | 2023-03-09 | 2024-09-12 | 中国科学院深圳先进技术研究院 | Optical sensing image signal detection method and apparatus, device and storage medium |
CN117765062A (en) * | 2024-02-22 | 2024-03-26 | 天津市天津医院 | Image processing method and system for detecting scoliosis of teenagers |
CN117765062B (en) * | 2024-02-22 | 2024-04-26 | 天津市天津医院 | Image processing method and system for detecting scoliosis of teenagers |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110148130B (en) | Method and device for detecting part defects | |
CN115239700A (en) | Spine Cobb angle measurement method, device, equipment and storage medium | |
CN110781885A (en) | Text detection method, device, medium and electronic equipment based on image processing | |
US8340433B2 (en) | Image processing apparatus, electronic medium, and image processing method | |
CN113298169A (en) | Convolutional neural network-based rotating target detection method and device | |
CN113139543A (en) | Training method of target object detection model, target object detection method and device | |
CN112597837A (en) | Image detection method, apparatus, device, storage medium and computer program product | |
CN115456990B (en) | CT image-based rib counting method, device, equipment and storage medium | |
CN110910445B (en) | Object size detection method, device, detection equipment and storage medium | |
CN106570538B (en) | Character image processing method and device | |
CN115409990B (en) | Medical image segmentation method, device, equipment and storage medium | |
US20220172376A1 (en) | Target Tracking Method and Device, and Electronic Apparatus | |
CN112464829A (en) | Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system | |
CN113205041A (en) | Structured information extraction method, device, equipment and storage medium | |
CN116128883A (en) | Photovoltaic panel quantity counting method and device, electronic equipment and storage medium | |
CN113537192A (en) | Image detection method, image detection device, electronic equipment and storage medium | |
CN113610809B (en) | Fracture detection method, fracture detection device, electronic equipment and storage medium | |
CN114445825A (en) | Character detection method and device, electronic equipment and storage medium | |
JP3661635B2 (en) | Image processing method and apparatus | |
CN114596431A (en) | Information determination method and device and electronic equipment | |
CN114266879A (en) | Three-dimensional data enhancement method, model training detection method, three-dimensional data enhancement equipment and automatic driving vehicle | |
CN113706705A (en) | Image processing method, device and equipment for high-precision map and storage medium | |
CN111598033B (en) | Goods positioning method, device, system and computer readable storage medium | |
CN114862761B (en) | Power transformer liquid level detection method, device, equipment and storage medium | |
CN115953463A (en) | Package marking method, device and equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20221025 |