CN110826557A - Method and device for detecting fracture

Method and device for detecting fracture

Info

Publication number
CN110826557A
CN110826557A (application CN201911023815.0A)
Authority
CN
China
Prior art keywords
image
frame
bone
fracture
fracture point
Prior art date
Legal status
Pending
Application number
CN201911023815.0A
Other languages
Chinese (zh)
Inventor
石磊
魏子昆
华铱炜
柏慧屏
Current Assignee
According To Hangzhou Medical Technology Co Ltd
Original Assignee
According To Hangzhou Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by According To Hangzhou Medical Technology Co Ltd filed Critical According To Hangzhou Medical Technology Co Ltd
Priority to CN201911023815.0A priority Critical patent/CN110826557A/en
Publication of CN110826557A publication Critical patent/CN110826557A/en
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/033 Recognition of patterns in medical or anatomical images of skeletal patterns

Abstract

The invention discloses a method and a device for detecting bone fracture. The method comprises: acquiring a 3D image; segmenting the 3D image to obtain a plurality of groups of 2D image layers; determining three-dimensional coordinates of suspected fracture points of the bone based on the plurality of groups of 2D image layers and a target detection network model; determining a region containing a suspected fracture point from the 3D image as an ROI based on the three-dimensional coordinates of the suspected fracture point; and determining the fracture points of the bone based on the ROI and a 3D convolutional neural network classification model. By dividing the 3D image containing the bone into a plurality of groups of 2D image layers for target detection to obtain the three-dimensional coordinates of suspected fracture points, classifying the regions containing the suspected fracture points obtained from those coordinates, and automatically determining the fracture points of the bone, the method can reduce the diagnosis error rate caused by differences in doctors' skill compared with the traditional manual diagnosis by doctors, and improve the accuracy and efficiency of fracture detection.

Description

Method and device for detecting fracture
Technical Field
The embodiment of the invention relates to the technical field of machine learning, in particular to a fracture detection method and device, computing equipment and a computer-readable non-volatile storage medium.
Background
At present, doctors diagnose various diseases by relying on medical knowledge and clinical experience. In other words, in the prior art most diseases are identified through a doctor's diagnosis. However, because medical standards vary across regions and doctors differ in personal experience, the conventional approach of relying on a doctor's diagnosis is easily affected by these regional and personal differences, and the diagnostic error can be large.
Taking the detection of bone fracture as an example, checking for fractures on CT is a routine requirement of CT examination. Especially in emergency situations after an accident, the fracture should be located as quickly as possible. In practice, however, a single CT view rarely reveals all possible fracture points at a glance, so doctors often spend a great deal of time searching. The core tension is therefore between the urgent time requirement of fracture detection and the slowness of manual reading.
Based on this, there is a need for a method for detecting bone fracture, which can improve the accuracy and efficiency of detecting bone fracture.
Disclosure of Invention
The embodiment of the invention provides a method and a device for detecting bone fracture, which are used for improving the accuracy and efficiency of bone fracture detection.
In a first aspect, an embodiment of the present invention provides a method for detecting a fracture, including:
acquiring a 3D image, wherein the 3D image comprises a bone;
segmenting the 3D image to obtain a plurality of groups of 2D image layers, wherein any 2D image layer comprises a plurality of frames of 2D images;
determining three-dimensional coordinates of suspected fracture points of the bone based on the multiple groups of 2D image layers and the target detection network model; the target detection network model is obtained by training and learning a plurality of groups of 2D image layers marked with fracture point positions as training samples;
determining a region containing a suspected fracture point from the 3D image as an ROI based on the three-dimensional coordinates of the suspected fracture point;
determining a fracture point of the bone based on the ROI and a 3D convolutional neural network classification model; the 3D convolutional neural network classification model is obtained by training and learning with the ROI marked with the fracture points as training samples.
In the technical scheme, the 3D image containing the bone is divided into a plurality of groups of 2D image layers to carry out target detection to obtain the three-dimensional coordinates of the suspected fracture points, then the areas containing the suspected fracture points obtained based on the three-dimensional coordinates are classified, and the fracture points of the bone are automatically determined, so that compared with the traditional manual diagnosis mode of doctors, the method can reduce the diagnosis error rate caused by the level difference of doctors, and improve the accuracy and efficiency of fracture detection.
Optionally, after the determining the fracture point of the bone, the method further includes:
when the number of pixel points of which the gray value of the pixel points in the neighborhood of the fracture points of the bone is larger than the first threshold is larger than the second threshold, determining the fracture points of the bone as false-alarm fracture points;
alternatively,
and when the fracture point of the bone is positioned near the center of the thoracic cavity of the 3D image or in the edge area of the 3D image, determining the fracture point of the bone as a false-alarm fracture point.
According to the technical scheme, the false alarm removing processing of the fracture points can further improve the accuracy of fracture detection.
Optionally, the determining three-dimensional coordinates of the suspected fracture point of the bone based on the multiple 2D image layers and the target detection network model includes:
sequentially inputting the multiple groups of 2D image layers to the target detection network model, and outputting a prediction frame of a suspected fracture point on each frame of 2D image and a confidence coefficient corresponding to the prediction frame;
obtaining a target frame based on the position of a prediction frame on a plurality of frames of 2D images and the confidence corresponding to the prediction frame;
and obtaining the three-dimensional coordinates of the suspected fracture point of the bone based on the position of the target frame.
According to the technical scheme, the target frame is obtained by combining the prediction frames on the multi-frame 2D images based on the positions of the prediction frames on the multi-frame 2D images and the confidence degrees corresponding to the prediction frames, and the three-dimensional coordinate of the suspected fracture point is obtained based on the position of the target frame, so that the three-dimensional coordinate of the suspected fracture point can be accurately and quickly obtained, and the detection efficiency of the three-dimensional coordinate of the suspected fracture point is improved.
Optionally, the obtaining a target frame based on the position of the prediction frame on the multiple frames of 2D images and the confidence corresponding to the prediction frame includes:
adopting a non-maximum suppression method for a prediction frame on each frame of 2D image in a plurality of frames of 2D images to obtain an initial target frame on each frame of 2D image;
aiming at any initial target frame on each frame of 2D image, determining, according to the center position of the initial target frame, initial target frames on other frames of 2D images whose distance from the center position is smaller than a preset threshold value; and determining, among the initial target frame and the initial target frames on the other frames of 2D images, the one with the highest confidence as the target frame;
obtaining three-dimensional coordinates of a suspected fracture point of the bone based on the position of the target frame, comprising:
and determining the three-dimensional coordinates of the suspected fracture point of the bone based on the central position of the target frame and the segmentation position of the 2D image where the target frame is located.
In the above technical solution, the initial target frame with the highest confidence coefficient located near the center position of the initial target frame is used as the target frame, so that the accuracy of detecting the suspected fracture point can be improved.
Optionally, the obtaining a target frame based on the position of the prediction frame on the multiple frames of 2D images and the confidence corresponding to the prediction frame includes:
adopting a non-maximum suppression method for a prediction frame on each frame of 2D image in a plurality of frames of 2D images to obtain an initial target frame on each frame of 2D image;
aiming at any initial target frame on each frame of 2D image, determining an initial target frame on other frames of 2D images, the distance between the initial target frame and the center position of which is less than a preset threshold value, averaging the center coordinates of the initial target frame and the center coordinates of the initial target frame on the other frames of 2D images to obtain a first coordinate of the target frame, and averaging the coordinates of the segmentation position of the 2D image where the initial target frame is located and the coordinates of the segmentation position of the other frames of 2D images to obtain a second coordinate of the target frame;
obtaining three-dimensional coordinates of a suspected fracture point of the bone based on the position of the target frame, comprising:
determining three-dimensional coordinates of a suspected fracture point of the bone based on the first and second coordinates.
In the above technical solution, the accuracy of detecting the suspected fracture point can be improved by averaging the center coordinates of the initial target frame on the one frame of 2D image and the initial target frames of the other frames of 2D images near the initial target frame to obtain the first coordinate, averaging the coordinates of the segmentation positions of the one frame of 2D image and the other frames of 2D images to obtain the second coordinate, and obtaining the three-dimensional coordinate of the suspected fracture point based on the first coordinate and the second coordinate.
Optionally, the preset 3D convolutional neural network classification model includes a feature extraction module and a fully connected classification module;
the determining fracture points of the bone based on the ROI and a 3D convolutional neural network classification model comprises:
inputting the ROI into the feature extraction module to obtain a feature vector;
inputting the feature vector to the fully-connected classification module to obtain the confidence coefficient of whether the suspected fracture point is a fracture point;
and determining the fracture points of the bone according to the confidence coefficient of whether each suspected fracture point is a fracture point.
In a second aspect, an embodiment of the present invention provides a fracture detection apparatus, including:
an acquisition unit configured to acquire a 3D image, the 3D image including a bone;
the processing unit is used for segmenting the 3D images to obtain a plurality of groups of 2D image layers, wherein any 2D image layer comprises a plurality of frames of 2D images; determining three-dimensional coordinates of suspected fracture points of the bone based on the multiple groups of 2D image layers and the target detection network model; the target detection network model is obtained by training and learning a plurality of groups of 2D image layers marked with fracture point positions as training samples; determining a region containing the suspected fracture point from the 3D image as an ROI based on the three-dimensional coordinates of the suspected fracture point; determining a fracture point of the bone based on the ROI and a 3D convolutional neural network classification model; the 3D convolutional neural network classification model is obtained by training and learning with the ROI marked with the fracture points as training samples.
Optionally, the processing unit is further configured to:
after the fracture point of the bone is determined, when the number of pixel points of which the gray value of the pixel points in the neighborhood of the fracture point of the bone is larger than a first threshold value is larger than a second threshold value, determining the fracture point of the bone as a false-alarm fracture point;
alternatively,
and when the fracture point of the bone is positioned near the center of the thoracic cavity of the 3D image or in the edge area of the 3D image, determining the fracture point of the bone as a false-alarm fracture point.
Optionally, the processing unit is specifically configured to:
sequentially inputting the multiple groups of 2D image layers to the target detection network model, and outputting a prediction frame of a suspected fracture point on each frame of 2D image and a confidence coefficient corresponding to the prediction frame;
obtaining a target frame based on the position of a prediction frame on a plurality of frames of 2D images and the confidence corresponding to the prediction frame;
and obtaining the three-dimensional coordinates of the suspected fracture point of the bone based on the position of the target frame.
Optionally, the processing unit is specifically configured to:
adopting a non-maximum suppression method for a prediction frame on each frame of 2D image in a plurality of frames of 2D images to obtain an initial target frame on each frame of 2D image;
aiming at any initial target frame on each frame of 2D image, according to the central position of the initial target frame, determining the initial target frame on other frames of 2D images, the distance between which and the central position is less than a preset threshold value, and determining the initial target frame and the initial target frame on other frames of 2D images with the highest confidence level as the target frame;
and determining the three-dimensional coordinates of the suspected fracture point of the bone based on the central position of the target frame and the segmentation position of the 2D image where the target frame is located.
Optionally, the processing unit is specifically configured to:
adopting a non-maximum suppression method for a prediction frame on each frame of 2D image in a plurality of frames of 2D images to obtain an initial target frame on each frame of 2D image;
aiming at any initial target frame on each frame of 2D image, determining an initial target frame on other frames of 2D images, the distance between the initial target frame and the center position of which is less than a preset threshold value, averaging the center coordinates of the initial target frame and the center coordinates of the initial target frame on the other frames of 2D images to obtain a first coordinate of the target frame, and averaging the coordinates of the segmentation position of the 2D image where the initial target frame is located and the coordinates of the segmentation position of the other frames of 2D images to obtain a second coordinate of the target frame;
determining three-dimensional coordinates of a suspected fracture point of the bone based on the first and second coordinates.
Optionally, the 3D convolutional neural network classification model includes a feature extraction module and a fully connected classification module;
the processing unit is specifically configured to:
inputting the ROI into the feature extraction module to obtain a feature vector;
inputting the feature vector to the fully-connected classification module to obtain the confidence coefficient of whether the suspected fracture point is a fracture point;
and determining the fracture points of the bone according to the confidence coefficient of whether each suspected fracture point is a fracture point.
In a third aspect, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the fracture detection method according to the obtained program.
In a fourth aspect, embodiments of the present invention further provide a computer-readable non-volatile storage medium, which includes computer-readable instructions, and when the computer-readable instructions are read and executed by a computer, the computer is caused to execute the above-mentioned method for detecting bone fracture.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for detecting bone fracture according to an embodiment of the present invention;
FIG. 3 is a schematic view of a 3D image including a bone according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a fracture detection apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a system architecture provided in an embodiment of the present invention. Referring to fig. 1, the system architecture may be a server 100 including a processor 110, a communication interface 120, and a memory 130.
The communication interface 120 is used for communicating with a terminal device used by a doctor, receiving information from and transmitting information to the terminal device to realize communication.
The processor 110 is a control center of the server 100, connects various parts of the entire server 100 using various interfaces and lines, performs various functions of the server 100 and processes data by running or executing software programs and/or modules stored in the memory 130 and calling data stored in the memory 130. Alternatively, processor 110 may include one or more processing units.
The memory 130 may be used to store software programs and modules, and the processor 110 executes various functional applications and data processing by operating the software programs and modules stored in the memory 130. The memory 130 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to a business process, and the like. Further, the memory 130 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
It should be noted that the structure shown in fig. 1 is only an example, and the embodiment of the present invention is not limited thereto.
Based on the above description, fig. 2 exemplarily shows the flow of a method for detecting bone fracture according to an embodiment of the present invention. The flow may be performed by a device for detecting bone fracture, which may be located in the server 100 shown in fig. 1 or may be the server 100 itself.
As shown in fig. 2, the process specifically includes:
in step 201, a 3D image is acquired.
In an embodiment of the present invention, the 3D image may include a bone, and may be an image acquired by a Computed Tomography (CT) apparatus, an image acquired by a Magnetic Resonance Imaging (MRI) apparatus, or the like. For better illustration, fig. 3 shows a 3D image of a patient's bones. The bone mentioned in the embodiment of the present invention may be a bone at any position of the body, such as a rib of the chest, the skull of the head, a hand bone, or a leg bone, and is not specifically limited.
Step 202, segmenting the 3D image to obtain a plurality of groups of 2D image layers.
After obtaining the 3D image including the bone, the 3D image may be segmented to obtain a plurality of 2D image layers, where any one of the 2D image layers includes a plurality of 2D images.
It is understood that the 2D image in the embodiment of the present invention may be a cross-sectional image, a sagittal image, or a coronal image corresponding to the 3D image. Taking cross-sectional images as an example, the 3D image may be sliced frame by frame along its Z-axis, and a preset number of frames of 2D cross-sectional images are then extracted as a group of 2D image layers, where the preset number may be set empirically.
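As a concrete illustration of this segmentation step, the following minimal Python sketch cuts a 3D volume into groups of cross-sectional 2D slices along the Z axis. The group size and stride are illustrative parameters chosen for the example, not values specified by this embodiment.

```python
import numpy as np

def split_into_layer_groups(volume: np.ndarray, frames_per_group: int = 8, stride: int = 8):
    """Split a 3D volume (Z, H, W) into groups of 2D cross-sectional slices.

    Each group is a stack of `frames_per_group` consecutive slices taken along
    the Z axis; `frames_per_group` and `stride` are illustrative values that
    would be set empirically in practice.
    """
    groups = []
    z_dim = volume.shape[0]
    for start in range(0, z_dim - frames_per_group + 1, stride):
        slices = volume[start:start + frames_per_group]              # (frames, H, W)
        z_positions = list(range(start, start + frames_per_group))   # slice indices along Z
        groups.append({"slices": slices, "z_positions": z_positions})
    return groups

# Example: a synthetic 64-slice volume yields 8 groups of 8 slices each.
volume = np.zeros((64, 512, 512), dtype=np.float32)
layer_groups = split_into_layer_groups(volume)
print(len(layer_groups), layer_groups[0]["slices"].shape)  # 8 (8, 512, 512)
```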
It should be noted that, in the embodiment of the present invention, the multi-frame 2D image in any 2D image layer may be a continuous multi-frame image or a discontinuous multi-frame image.
Step 203, determining three-dimensional coordinates of the suspected fracture point of the bone based on the plurality of sets of 2D image layers and the target detection network model.
In the embodiment of the invention, the target detection network model is obtained by training and learning a plurality of groups of 2D image layers marked with fracture point positions as training samples. In a specific implementation, the target detection network model may be fast-RCNN, SSD, YOLO, or the like. The specific training process of the target detection network model is not described in detail in the embodiments of the present invention.
Based on the target detection network model, the plurality of groups of 2D image layers can be sequentially input into the target detection network model to obtain, on each frame of 2D image, prediction frames of suspected fracture points and their corresponding confidences. Prediction frames at close positions can then be merged and de-duplicated according to the prediction frame on each frame of 2D image and its corresponding confidence, so that a better prediction frame is selected. Specifically, the target frame may be obtained based on the positions of the prediction frames on the multiple frames of 2D images and the confidences corresponding to the prediction frames, and the three-dimensional coordinates of the suspected fracture point of the bone may then be obtained based on the position of the target frame.
Obtaining the target frame based on the positions of the prediction frames on the multiple frames of 2D images and their corresponding confidences, and then obtaining the three-dimensional coordinates of the suspected fracture point based on the position of the target frame, may be implemented in the following two ways:
in a first mode
In mode one, a non-maximum suppression method is first applied to the prediction frames on each frame of 2D image in the multiple frames of 2D images to obtain an initial target frame on each frame of 2D image. Then, for any initial target frame on each frame of 2D image, the initial target frames on other frames of 2D images whose distance from its center position is smaller than a preset threshold are determined according to the center position of that initial target frame, and among that initial target frame and the initial target frames on the other frames of 2D images, the one with the highest confidence is determined as the target frame. Finally, the three-dimensional coordinates of the suspected fracture point of the bone are determined based on the center position of the target frame and the segmentation position of the 2D image where the target frame is located.
That is, for a given suspected fracture point, an initial target frame on each frame of 2D image is first determined by non-maximum suppression: on each frame, the prediction frame with the highest confidence is retained as the initial target frame of that frame, and duplicate prediction frames on the frame are removed. Then, according to the center position of the initial target frame on each frame of 2D image, the initial target frames on other frames whose distance from that center position is smaller than a preset threshold are determined; this amounts to finding, for each frame, the initial target frames on other frames that lie near the center position of its initial target frame, again for de-duplication, and the preset threshold may be set empirically. Among the initial target frame of the frame and the initial target frames on the other frames of 2D images whose distance from the center position is smaller than the preset threshold, the one with the highest confidence is then taken as the target frame, that is, the region where the suspected fracture point is located. Through merging in the above manner, the position of the suspected fracture point, namely its coordinates on a certain plane, is obtained. Finally, the three-dimensional coordinates of the suspected fracture point are obtained from the center position of the target frame and the segmentation position of the 2D image where the target frame is located.
Taking cross-sectional images as an example, the target frame lies on a two-dimensional plane (the XOY plane), so the center position of the target frame is a two-dimensional coordinate. Since the 2D image where the target frame is located was sliced along the Z-axis, the segmentation position of that 2D image, that is, its coordinate in the Z-axis direction, can be obtained; therefore, the three-dimensional coordinates of the suspected fracture point are obtained by combining the center position of the target frame with the segmentation position of the 2D image where the target frame is located.
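The following Python sketch illustrates mode one under simplified assumptions: a generic non-maximum suppression routine, Euclidean distance between box centers, and an illustrative distance threshold. The (x1, y1, x2, y2) box format, the threshold values, and the data structures are assumptions made for the example, not details fixed by this embodiment.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Generic non-maximum suppression on one frame; boxes are (x1, y1, x2, y2).
    Returns indices of the retained boxes; iou_thresh is an illustrative value."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    order, keep = np.argsort(scores)[::-1], []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = boxes[order[1:]]
        xx1 = np.maximum(boxes[i, 0], rest[:, 0]); yy1 = np.maximum(boxes[i, 1], rest[:, 1])
        xx2 = np.minimum(boxes[i, 2], rest[:, 2]); yy2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter + 1e-6)
        order = order[1:][iou < iou_thresh]
    return keep

def box_center(box):
    return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

def merge_mode_one(initial_frames, dist_thresh=10.0):
    """Mode one merging. Each entry of initial_frames is (box, confidence, z_index),
    i.e. one initial target frame per 2D image after NMS. Nearby frames across
    slices are merged and the highest-confidence one is kept as the target frame;
    the result is a list of (x, y, z) suspected fracture points. dist_thresh is
    an illustrative preset threshold."""
    used, points = set(), []
    for i, (box_i, conf_i, z_i) in enumerate(initial_frames):
        if i in used:
            continue
        cluster = [(box_i, conf_i, z_i)]
        used.add(i)
        for j, (box_j, conf_j, z_j) in enumerate(initial_frames):
            if j not in used and np.linalg.norm(box_center(box_i) - box_center(box_j)) < dist_thresh:
                cluster.append((box_j, conf_j, z_j))
                used.add(j)
        best_box, _, best_z = max(cluster, key=lambda t: t[1])  # keep the highest-confidence frame
        cx, cy = box_center(best_box)
        points.append((cx, cy, best_z))  # plane coordinates plus segmentation (slice) position
    return points
```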
Mode two
In mode two, a non-maximum suppression method is likewise first applied to the prediction frames on each frame of 2D image in the multiple frames of 2D images to obtain an initial target frame on each frame of 2D image. Then, for any initial target frame on each frame of 2D image, the initial target frames on other frames of 2D images whose distance from its center position is smaller than a preset threshold are determined according to the center position of that initial target frame. The center coordinates of the initial target frame and of the initial target frames on the other frames of 2D images are averaged to obtain a first coordinate of the target frame, and the coordinates of the segmentation position of the 2D image where the initial target frame is located and of the segmentation positions of the other frames of 2D images are averaged to obtain a second coordinate of the target frame. Finally, the three-dimensional coordinates of the suspected fracture point of the bone are determined based on the first coordinate and the second coordinate.
That is, in mode two the prediction frames on each frame of 2D image are also de-duplicated by non-maximum suppression. After the initial target frames on the other frames of 2D images near the initial target frame on a given frame of 2D image are determined, the center coordinates of that initial target frame and of the nearby initial target frames are averaged, and the coordinates of the segmentation positions of that frame and of the other frames of 2D images are averaged, yielding the first coordinate and the second coordinate respectively, from which the three-dimensional coordinates of the suspected fracture point are obtained. The first coordinate is a plane coordinate, and the second coordinate is the coordinate of the segmentation position.
Accordingly, taking cross-sectional images as an example, when the initial target frame on one frame of 2D image is merged with a nearby initial target frame on another frame of 2D image, the center coordinates of the two initial target frames may be averaged to obtain the first coordinate. For example, if the center coordinates of the initial target frame A1 on one frame of 2D image are (15, 20) and the center coordinates of the initial target frame A2 on another frame of 2D image are (17, 18), the first coordinate (16, 19) is obtained by averaging these two coordinates. Further, when the coordinates of the segmentation positions of the one frame of 2D image and the nearby other frame of 2D image are averaged, their Z-axis coordinates may be averaged directly; for example, if the Z-axis coordinate of the 2D image where the initial target frame A1 is located is (0, 0, 6) and that of the 2D image where the initial target frame A2 is located is (0, 0, 8), the second coordinate (0, 0, 7) is obtained by averaging the two. Finally, the first coordinate (16, 19) and the second coordinate (0, 0, 7) are combined to give the three-dimensional coordinates (16, 19, 7) of the suspected fracture point.
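A corresponding Python sketch of mode two is given below, again under simplified assumptions: each initial target frame is represented by its center coordinates, confidence, and slice index, and the distance threshold is illustrative. The example call reproduces the numbers from the paragraph above.

```python
import numpy as np

def merge_mode_two(initial_frames, dist_thresh=10.0):
    """Mode two merging: average the centers (first coordinate) and slice
    positions (second coordinate) of nearby initial target frames.
    Each entry is (center_xy, confidence, z_index); dist_thresh is illustrative."""
    used, points = set(), []
    for i, (c_i, _, z_i) in enumerate(initial_frames):
        if i in used:
            continue
        cluster = [(np.asarray(c_i, float), float(z_i))]
        used.add(i)
        for j, (c_j, _, z_j) in enumerate(initial_frames):
            if j not in used and np.linalg.norm(np.asarray(c_i, float) - np.asarray(c_j, float)) < dist_thresh:
                cluster.append((np.asarray(c_j, float), float(z_j)))
                used.add(j)
        first = np.mean([c for c, _ in cluster], axis=0)   # averaged in-plane (x, y) coordinate
        second = np.mean([z for _, z in cluster])          # averaged segmentation (slice) position
        points.append((float(first[0]), float(first[1]), float(second)))
    return points

# Reproduces the example in the text: centers (15, 20) on slice 6 and (17, 18)
# on slice 8 merge into the suspected fracture point (16, 19, 7).
points = merge_mode_two([((15, 20), 0.9, 6), ((17, 18), 0.8, 8)], dist_thresh=10.0)
print(points)  # [(16.0, 19.0, 7.0)]
```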
Based on the above description, the flow of determining the three-dimensional coordinates of the suspected fracture point will be described below with an embodiment.
When a group of 2D image layers, for example 8 frames of 2D images, is input to the target detection network model, the prediction frames of suspected fracture points on each frame of 2D image and their corresponding confidences shown in Table 1 can be obtained.
TABLE 1
[Table 1 is reproduced only as an image in the original publication; its contents are not available in the text.]
Based on Table 1, a non-maximum suppression method is first applied to the prediction frames on each frame of 2D image to obtain an initial target frame on each frame. Starting from the 1st frame of 2D image, there are three prediction frames on that frame, namely prediction frame A11, prediction frame A12, and prediction frame A13; the non-maximum suppression method merges these prediction frames, and the prediction frame with the highest confidence, namely 98%, becomes the initial target frame A11 of the 1st frame of 2D image. Similarly, the initial target frames on the other frames of 2D images can be determined; for a 2D image with only one prediction frame, that frame is directly taken as the initial target frame. The results are shown in Table 2.
TABLE 2
2D image Initial target frame Confidence level
Frame 1 A11 98%
Frame 2 A23 96%
Frame 3 A31 96%
Frame 4 A42 94%
Frame 5 A51 95%
Frame 6 A61 98%
Frame 7 A71 97%
Frame 8 A81 99%
Then, starting from the initial target frame A11, the initial target frame among A23 to A81 whose distance from the center position of A11 is smaller than the preset threshold is found to be A31, so A11 and A31 are merged; specifically, the one with the higher confidence is retained, and since the confidence of A11 is greater than that of A31, A11 is retained as target frame A11. Merging then starts from the initial target frame A23; since no initial target frame among A42 to A81 lies within the preset threshold of the center position of A23, A23 is retained as target frame A23. Merging then starts from A42; the initial target frame among A51 to A81 whose distance from the center position of A42 is smaller than the preset threshold is A71, the two are merged, and A71 is retained as target frame A71 because its confidence is higher. Next, starting from A51, the initial target frame among A61 and A81 whose distance from the center position of A51 is smaller than the preset threshold is A81, the two are merged, and A81 is retained as target frame A81 because its confidence is higher. The remaining initial target frame A61 is retained as target frame A61. Thus, this group of 2D image layers finally yields the center-position coordinates of 5 target frames, namely target frames A11, A23, A61, A71, and A81, that is, the two-dimensional coordinates of 5 suspected fracture points. The segmentation positions of the 2D images where these 5 target frames are located are then combined with the two-dimensional coordinates of the 5 suspected fracture points to obtain the three-dimensional coordinates of the 5 suspected fracture points.
And step 204, determining a region containing the suspected fracture point from the 3D image as an ROI based on the three-dimensional coordinates of the suspected fracture point.
Specifically, after the three-dimensional coordinates of a suspected fracture point are obtained, a three-dimensional block of a preset size may be cut from the 3D image, centered on the three-dimensional coordinates of the suspected fracture point, and this three-dimensional block is determined as the region containing the suspected fracture point and used as the ROI. The three-dimensional block is a block of pixels around the suspected fracture point, and the preset size may be set empirically.
It should be noted that the ROI may have various shapes; the three-dimensional block described above is only one example, and in other possible examples the ROI may be a sphere or another shape.
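A minimal Python sketch of this cropping step is shown below. The block size, the (Z, Y, X) volume indexing, and zero-padding at the volume boundary are assumptions made for the example; the embodiment only specifies that a preset-size block centered on the suspected fracture point is cut out.

```python
import numpy as np

def crop_roi(volume: np.ndarray, point_xyz, size=(32, 32, 32)):
    """Crop a fixed-size 3D block centered on a suspected fracture point.

    `volume` is indexed (Z, Y, X); `point_xyz` is the (x, y, z) coordinate of
    the suspected fracture point; `size` is an illustrative preset block size.
    Regions falling outside the volume are zero-padded so every ROI has the
    same shape.
    """
    x, y, z = (int(round(v)) for v in point_xyz)
    sx, sy, sz = size
    roi = np.zeros((sz, sy, sx), dtype=volume.dtype)
    z0, y0, x0 = z - sz // 2, y - sy // 2, x - sx // 2
    # Intersection of the requested block with the volume bounds.
    zs, ys, xs = max(z0, 0), max(y0, 0), max(x0, 0)
    ze = min(z0 + sz, volume.shape[0]); ye = min(y0 + sy, volume.shape[1]); xe = min(x0 + sx, volume.shape[2])
    roi[zs - z0:ze - z0, ys - y0:ye - y0, xs - x0:xe - x0] = volume[zs:ze, ys:ye, xs:xe]
    return roi

# Example: crop around the suspected fracture point (16, 19, 7).
vol = np.random.rand(64, 512, 512).astype(np.float32)
roi = crop_roi(vol, (16, 19, 7))
print(roi.shape)  # (32, 32, 32)
```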
Step 205, determining fracture points of the bone based on the ROI and the 3D convolutional neural network classification model.
In the embodiment of the invention, the 3D convolutional neural network classification model includes a feature extraction module and a fully-connected classification module. When determining the fracture points of the bone, the ROI can be input into the feature extraction module to obtain a feature vector, the feature vector is then input into the fully-connected classification module to obtain the confidence of whether the suspected fracture point is a fracture point, and the fracture points of the bone are finally determined according to the confidence of whether each suspected fracture point is a fracture point. The fully-connected classification module outputs confidences, and the category with the highest confidence may be taken as the category of each suspected fracture point; for example, if the confidence that a certain suspected fracture point is not a fracture is the higher one, the suspected fracture point is determined not to be a fracture point.
The 3D convolutional neural network classification model can be obtained by training and learning with ROIs marked with fracture points as training samples; the training process of the 3D convolutional neural network classification model is not specifically limited here. The feature extraction module includes N successive convolution modules, each convolution module including an M×M×M 3D convolution layer, a Batch Normalization (BN) layer, an activation function layer, and a Y×Y×Y max pooling layer. The fully-connected classification module may include two successive fully-connected layers. N is less than or equal to a first number threshold, M is less than or equal to a second number threshold, and Y is less than or equal to a third number threshold; those skilled in the art can set specific values of the first, second, and third number thresholds according to experience and practical situations, which are not limited here.
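To make the architecture concrete, here is a PyTorch sketch of such a classifier. The choices N=3, M=3, Y=2, the channel widths, the hidden size of the first fully-connected layer, and the 32-voxel ROI are illustrative assumptions; the embodiment only constrains N, M, and Y by the number thresholds described above.

```python
import torch
import torch.nn as nn

class FractureClassifier3D(nn.Module):
    """Sketch of the described 3D CNN classifier: N stacked convolution modules
    (M x M x M 3D convolution + batch normalization + activation + Y x Y x Y
    max pooling) followed by two fully-connected layers. N=3, M=3, Y=2 and the
    channel widths are illustrative assumptions."""
    def __init__(self, in_channels=1, num_classes=2, channels=(16, 32, 64), roi_size=32):
        super().__init__()
        blocks, prev = [], in_channels
        for ch in channels:                                     # N = len(channels) convolution modules
            blocks += [
                nn.Conv3d(prev, ch, kernel_size=3, padding=1),  # M x M x M = 3 x 3 x 3
                nn.BatchNorm3d(ch),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=2),                    # Y x Y x Y = 2 x 2 x 2
            ]
            prev = ch
        self.features = nn.Sequential(*blocks)                  # feature extraction module
        feat_dim = channels[-1] * (roi_size // 2 ** len(channels)) ** 3
        self.classifier = nn.Sequential(                        # fully-connected classification module
            nn.Linear(feat_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),
        )

    def forward(self, roi):
        feats = self.features(roi)                              # (B, C, D, H, W)
        vec = feats.flatten(1)                                  # feature vector
        return torch.softmax(self.classifier(vec), dim=1)       # confidence per class

# Example: classify one 32x32x32 ROI as fracture / not fracture.
model = FractureClassifier3D()
roi = torch.randn(1, 1, 32, 32, 32)
print(model(roi))  # e.g. tensor([[0.48, 0.52]]): second column = confidence of being a fracture point
```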
It should be noted that after the fracture points of the bone are obtained, a false-alarm removal operation is also needed, which can take two forms. In the first, when the number of pixels in the neighborhood of a fracture point whose gray value is larger than the first threshold is larger than the second threshold, the fracture point is determined to be a false-alarm fracture point: in this case the gray-value distribution in the neighborhood does not match that expected for bone, so the detected point is not on bone and correspondingly cannot be a fracture point, and it is therefore regarded as a false alarm and removed. In the second, when a fracture point is located near the center of the thorax in the 3D image or in the edge region of the 3D image, it is determined to be a false-alarm fracture point: there is no bone near the center of the thorax, so a fracture point at that position can be regarded as a false alarm, and similarly a fracture point in the edge region of the 3D image is regarded as a false alarm and removed. This false-alarm removal operation can further improve the accuracy of fracture point detection.
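The following Python sketch applies both false-alarm rules to a detected point. All numeric thresholds, the neighborhood size, the edge margin, and the way the thorax center is supplied are illustrative assumptions; the embodiment specifies only the two rules themselves.

```python
import numpy as np

def is_false_alarm(volume, point_xyz, thorax_center_xy=None,
                   gray_thresh=400, count_thresh=200, nbhd=8,
                   edge_margin=16, center_radius=30):
    """Sketch of the two false-alarm rules described above.

    Rule 1: if the number of voxels in the neighborhood of the detected point
    whose gray value exceeds gray_thresh is larger than count_thresh, the
    point is treated as a false alarm.
    Rule 2: if the point lies near the thorax center or in the edge region of
    the 3D image, it is treated as a false alarm.
    All parameter values here are illustrative, not values from the patent.
    """
    x, y, z = (int(round(v)) for v in point_xyz)
    zdim, ydim, xdim = volume.shape

    # Rule 1: gray-value statistics in the neighborhood of the point.
    neighborhood = volume[max(z - nbhd, 0):z + nbhd,
                          max(y - nbhd, 0):y + nbhd,
                          max(x - nbhd, 0):x + nbhd]
    if np.count_nonzero(neighborhood > gray_thresh) > count_thresh:
        return True

    # Rule 2a: point in the edge region of the 3D image.
    if (x < edge_margin or x > xdim - edge_margin or
            y < edge_margin or y > ydim - edge_margin or
            z < edge_margin or z > zdim - edge_margin):
        return True

    # Rule 2b: point near the thorax center (center position assumed given).
    if thorax_center_xy is not None:
        cx, cy = thorax_center_xy
        if np.hypot(x - cx, y - cy) < center_radius:
            return True

    return False
```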
Therefore, in the embodiment of the invention, a 3D image containing a bone is acquired and segmented into a plurality of groups of 2D image layers, any one of which includes multiple frames of 2D images. Three-dimensional coordinates of suspected fracture points of the bone are determined based on the plurality of groups of 2D image layers and a target detection network model, the target detection network model being obtained by training and learning with groups of 2D image layers marked with fracture point positions as training samples. A region containing each suspected fracture point is determined from the 3D image as an ROI based on the three-dimensional coordinates of the suspected fracture point, and the fracture points of the bone are determined based on the ROI and a 3D convolutional neural network classification model, the classification model being obtained by training and learning with ROIs marked with fracture points as training samples. In this technical scheme, the 3D image containing the bone is divided into groups of 2D image layers for target detection to obtain the three-dimensional coordinates of suspected fracture points, the regions containing the suspected fracture points obtained from those coordinates are then classified, and the fracture points of the bone are determined automatically. Compared with the traditional manual diagnosis by doctors, this can reduce the diagnosis error rate caused by differences in doctors' skill and improve the accuracy and efficiency of fracture detection.
Based on the same technical concept, fig. 4 exemplarily shows the structure of a fracture detection apparatus provided by an embodiment of the present invention, which can perform the fracture detection process; the apparatus may be located in the server 100 shown in fig. 1 or may be the server 100 itself.
As shown in fig. 4, the apparatus specifically includes:
an acquiring unit 401, configured to acquire a 3D image, where the 3D image includes a bone;
a processing unit 402, configured to segment the 3D image to obtain multiple groups of 2D image layers, where any 2D image layer includes multiple frames of 2D images; determining three-dimensional coordinates of suspected fracture points of the bone based on the multiple groups of 2D image layers and the target detection network model; the target detection network model is obtained by training and learning a plurality of groups of 2D image layers marked with fracture point positions as training samples; determining a region containing the suspected fracture point from the 3D image as an ROI based on the three-dimensional coordinates of the suspected fracture point; determining a fracture point of the bone based on the ROI and a 3D convolutional neural network classification model; the 3D convolutional neural network classification model is obtained by training and learning with the ROI marked with the fracture points as training samples.
Optionally, the processing unit 402 is further configured to:
after the fracture point of the bone is determined, when the number of pixel points of which the gray value of the pixel points in the neighborhood of the fracture point of the bone is larger than a first threshold value is larger than a second threshold value, determining the fracture point of the bone as a false-alarm fracture point;
alternatively,
and when the fracture point of the bone is positioned near the center of the thoracic cavity of the 3D image or in the edge area of the 3D image, determining the fracture point of the bone as a false-alarm fracture point.
Optionally, the processing unit 402 is specifically configured to:
sequentially inputting the multiple groups of 2D image layers to the target detection network model, and outputting a prediction frame of a suspected fracture point on each frame of 2D image and a confidence coefficient corresponding to the prediction frame;
obtaining a target frame based on the position of a prediction frame on a plurality of frames of 2D images and the confidence corresponding to the prediction frame;
and obtaining the three-dimensional coordinates of the suspected fracture point of the bone based on the position of the target frame.
Optionally, the processing unit 402 is specifically configured to:
adopting a non-maximum suppression method for a prediction frame on each frame of 2D image in a plurality of frames of 2D images to obtain an initial target frame on each frame of 2D image;
aiming at any initial target frame on each frame of 2D image, according to the central position of the initial target frame, determining the initial target frame on other frames of 2D images, the distance between which and the central position is less than a preset threshold value, and determining the initial target frame and the initial target frame on other frames of 2D images with the highest confidence level as the target frame;
and determining the three-dimensional coordinates of the suspected fracture point of the bone based on the central position of the target frame and the segmentation position of the 2D image where the target frame is located.
Optionally, the processing unit 402 is specifically configured to:
adopting a non-maximum suppression method for a prediction frame on each frame of 2D image in a plurality of frames of 2D images to obtain an initial target frame on each frame of 2D image;
aiming at any initial target frame on each frame of 2D image, determining an initial target frame on other frames of 2D images, the distance between the initial target frame and the center position of which is less than a preset threshold value, averaging the center coordinates of the initial target frame and the center coordinates of the initial target frame on the other frames of 2D images to obtain a first coordinate of the target frame, and averaging the coordinates of the segmentation position of the 2D image where the initial target frame is located and the coordinates of the segmentation position of the other frames of 2D images to obtain a second coordinate of the target frame;
determining three-dimensional coordinates of a suspected fracture point of the bone based on the first and second coordinates.
Optionally, the 3D convolutional neural network classification model includes a feature extraction module and a fully connected classification module;
the processing unit 402 is specifically configured to:
inputting the ROI into the feature extraction module to obtain a feature vector;
inputting the feature vector to the fully-connected classification module to obtain the confidence coefficient of whether the suspected fracture point is a fracture point;
and determining the fracture points of the bone according to the confidence coefficient of whether each suspected fracture point is a fracture point.
Based on the same technical concept, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the fracture detection method according to the obtained program.
Based on the same technical concept, the embodiment of the invention also provides a computer-readable non-volatile storage medium, which comprises computer-readable instructions, and when the computer-readable instructions are read and executed by a computer, the computer is enabled to execute the method for detecting the fracture.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method of bone fracture detection, comprising:
acquiring a 3D image, wherein the 3D image comprises a bone;
segmenting the 3D image to obtain a plurality of groups of 2D image layers, wherein any 2D image layer comprises a plurality of frames of 2D images;
determining three-dimensional coordinates of suspected fracture points of the bone based on the multiple groups of 2D image layers and the target detection network model; the target detection network model is obtained by training and learning a plurality of groups of 2D image layers marked with fracture point positions as training samples;
determining a region containing the suspected fracture point from the 3D image as an ROI based on the three-dimensional coordinates of the suspected fracture point;
determining a fracture point of the bone based on the ROI and a 3D convolutional neural network classification model; the 3D convolutional neural network classification model is obtained by training and learning with the ROI marked with the fracture points as training samples.
2. The method of claim 1, wherein after the determining the fracture point of the bone, further comprising:
when the number of pixel points of which the gray value of the pixel points in the neighborhood of the fracture points of the bone is larger than a first threshold value is larger than a second threshold value, determining the fracture points of the bone as false-alarm fracture points;
alternatively,
and when the fracture point of the bone is positioned near the center of the thoracic cavity of the 3D image or in the edge area of the 3D image, determining the fracture point of the bone as a false-alarm fracture point.
3. The method of claim 1, wherein determining three-dimensional coordinates of a suspected fracture point of the bone based on the plurality of sets of 2D image layers and a target detection network model comprises:
sequentially inputting the multiple groups of 2D image layers to the target detection network model, and outputting a prediction frame of a suspected fracture point on each frame of 2D image and a confidence coefficient corresponding to the prediction frame;
obtaining a target frame based on the position of a prediction frame on a plurality of frames of 2D images and the confidence corresponding to the prediction frame;
and obtaining the three-dimensional coordinates of the suspected fracture point of the bone based on the position of the target frame.
4. The method of claim 3, wherein obtaining the target frame based on the position of the predicted frame on the multiple frames of 2D images and the confidence corresponding to the predicted frame comprises:
adopting a non-maximum suppression method for a prediction frame on each frame of 2D image in a plurality of frames of 2D images to obtain an initial target frame on each frame of 2D image;
aiming at any initial target frame on each frame of 2D image, determining initial target frames on other frames of 2D images, wherein the distance between the initial target frames and the center position is smaller than a preset threshold value; determining the initial target frame and the initial target frame on the other frames of the 2D image with the highest confidence level as the target frame;
the obtaining three-dimensional coordinates of a suspected fracture point of the bone based on the position of the target frame comprises:
and determining the three-dimensional coordinates of the suspected fracture point of the bone based on the central position of the target frame and the segmentation position of the 2D image where the target frame is located.
5. The method of claim 3, wherein obtaining the target box based on the positions of the prediction boxes on the multiple 2D image frames and their corresponding confidences comprises:
applying non-maximum suppression to the prediction boxes on each of the multiple 2D image frames to obtain initial target boxes on each 2D image frame;
for any initial target box on a 2D image frame, determining the initial target boxes on other 2D image frames whose center positions lie within a preset distance threshold of that box; averaging the center coordinates of that initial target box and of the determined initial target boxes on the other 2D image frames to obtain a first coordinate of the target box; and averaging the slice-position coordinates of the 2D image in which that initial target box is located and of the other 2D image frames to obtain a second coordinate of the target box;
and wherein obtaining the three-dimensional coordinates of the suspected fracture point of the bone based on the position of the target box comprises:
determining the three-dimensional coordinates of the suspected fracture point of the bone based on the first coordinate and the second coordinate.
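A sketch of the claim-5 variant, which averages rather than selects: the matched boxes' in-plane centers yield the first coordinate and their slice positions yield the second coordinate. The candidate representation and distance threshold are the same illustrative assumptions as in the previous sketch.

```python
import numpy as np

def merge_by_averaging(candidates, dist_thresh=10.0):
    """candidates: list of (slice_index, (y, x) center, confidence) tuples.
    For each group of matched boxes, average the in-plane centers (first
    coordinate) and the slice positions (second coordinate) to obtain a
    3D coordinate of the suspected fracture point."""
    points, used = [], set()
    for i, (z, c, _) in enumerate(candidates):
        if i in used:
            continue
        group = [i]
        for j, (z2, c2, _) in enumerate(candidates):
            if j != i and j not in used and z2 != z \
                    and np.hypot(c2[0] - c[0], c2[1] - c[1]) < dist_thresh:
                group.append(j)
        used.update(group)
        ys = np.mean([candidates[k][1][0] for k in group])   # averaged row center
        xs = np.mean([candidates[k][1][1] for k in group])   # averaged column center
        zs = np.mean([candidates[k][0] for k in group])      # averaged slice position
        points.append((zs, ys, xs))
    return points
```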
6. The method of any one of claims 1 to 5, wherein the 3D convolutional neural network classification model comprises a feature extraction module and a fully connected classification module;
and wherein determining the fracture points of the bone based on the ROI and the 3D convolutional neural network classification model comprises:
inputting the ROI into the feature extraction module to obtain a feature vector;
inputting the feature vector into the fully connected classification module to obtain a confidence that the suspected fracture point is a fracture point;
and determining the fracture points of the bone according to the confidence of each suspected fracture point.
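An illustrative PyTorch sketch of a 3D convolutional neural network with a feature-extraction module followed by a fully connected classification module, as named in claim 6; the layer counts, channel widths, and the 0.5 decision threshold mentioned below are assumptions, not the architecture disclosed in the application.

```python
import torch
import torch.nn as nn

class FractureClassifier3D(nn.Module):
    """Feature-extraction module followed by a fully connected
    classification module (claim 6), with placeholder layer sizes."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 2),      # [not-fracture, fracture] logits
        )

    def forward(self, roi):
        feat = self.features(roi)                    # feature vector
        logits = self.classifier(feat)
        return torch.softmax(logits, dim=1)[:, 1]    # confidence of being a fracture point
```

A batch of ROIs shaped `(N, 1, D, H, W)` would be passed through the model, and suspected points whose returned confidence exceeds a chosen threshold (for example 0.5) would be kept as fracture points.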
7. A fracture detection device, comprising:
an acquisition unit configured to acquire a 3D image, the 3D image including a bone;
a processing unit configured to segment the 3D image to obtain multiple groups of 2D image layers, wherein any one of the 2D image layers comprises multiple 2D image frames; determine three-dimensional coordinates of suspected fracture points of the bone based on the multiple groups of 2D image layers and a target detection network model, the target detection network model being obtained by training on multiple groups of 2D image layers annotated with fracture point positions as training samples; determine, based on the three-dimensional coordinates of the suspected fracture points, a region containing each suspected fracture point in the 3D image as an ROI; and determine fracture points of the bone based on the ROI and a 3D convolutional neural network classification model, the 3D convolutional neural network classification model being obtained by training on ROIs annotated with fracture points as training samples.
8. The apparatus of claim 7, wherein the processing unit is further configured to:
after the fracture points of the bone are determined, determine a fracture point of the bone as a false-alarm fracture point when the number of pixels in the neighborhood of the fracture point whose gray value is greater than a first threshold exceeds a second threshold;
or,
determine a fracture point of the bone as a false-alarm fracture point when the fracture point is located near the center of the thoracic cavity in the 3D image or in an edge region of the 3D image.
9. A computing device, comprising:
a memory for storing program instructions;
a processor, configured to call the program instructions stored in the memory and, in accordance with the obtained program instructions, execute the method of any one of claims 1 to 6.
10. A computer-readable non-transitory storage medium including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method of any one of claims 1 to 6.
CN201911023815.0A 2019-10-25 2019-10-25 Method and device for detecting fracture Pending CN110826557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911023815.0A CN110826557A (en) 2019-10-25 2019-10-25 Method and device for detecting fracture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911023815.0A CN110826557A (en) 2019-10-25 2019-10-25 Method and device for detecting fracture

Publications (1)

Publication Number Publication Date
CN110826557A true CN110826557A (en) 2020-02-21

Family

ID=69550807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911023815.0A Pending CN110826557A (en) 2019-10-25 2019-10-25 Method and device for detecting fracture

Country Status (1)

Country Link
CN (1) CN110826557A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107007294A (en) * 2016-01-28 2017-08-04 株式会社日立制作所 X-ray imaging apparatus and bone density measurement method
CN107072612A (en) * 2014-09-25 2017-08-18 株式会社日立制作所 Medical X-ray measurement apparatus and method
CN108664971A (en) * 2018-05-22 2018-10-16 中国科学技术大学 Pulmonary nodule detection method based on 2D convolutional neural networks
US20180342061A1 (en) * 2016-07-15 2018-11-29 Beijing Sensetime Technology Development Co., Ltd Methods and systems for structured text detection, and non-transitory computer-readable medium
CN108986073A (en) * 2018-06-04 2018-12-11 东南大学 A kind of CT image pulmonary nodule detection method based on improved Faster R-CNN frame
CN109523546A (en) * 2018-12-21 2019-03-26 杭州依图医疗技术有限公司 A kind of method and device of Lung neoplasm analysis
CN109711315A (en) * 2018-12-21 2019-05-03 四川大学华西医院 A kind of method and device of Lung neoplasm analysis
CN110200650A (en) * 2019-05-31 2019-09-06 昆明理工大学 A method of detection bone density

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667474A (en) * 2020-06-08 2020-09-15 杨天潼 Fracture identification method, apparatus, device and computer readable storage medium
CN111967540A (en) * 2020-09-29 2020-11-20 北京大学口腔医学院 Maxillofacial fracture identification method and device based on CT database and terminal equipment
CN111967540B (en) * 2020-09-29 2021-06-08 北京大学口腔医学院 Maxillofacial fracture identification method and device based on CT database and terminal equipment
CN112785591A (en) * 2021-03-05 2021-05-11 杭州健培科技有限公司 Method and device for detecting and segmenting costal fracture in CT image

Similar Documents

Publication Publication Date Title
US10846853B2 (en) Medical image processing apparatus, medical image processing method, and medical image processing program
CN110826557A (en) Method and device for detecting fracture
JP2015036123A (en) Medical image processor, medical image processing method and classifier training method
CN112767346B (en) Multi-image-based full-convolution single-stage mammary image lesion detection method and device
CN111340756B (en) Medical image lesion detection merging method, system, terminal and storage medium
CN110782446B (en) Method and device for determining volume of lung nodule
US10706534B2 (en) Method and apparatus for classifying a data point in imaging data
CN110974294A (en) Ultrasonic scanning method and device
US20130016884A1 (en) Methods, apparatuses, and computer program products for identifying a region of interest within a mammogram image
CN110210519B (en) Classification method, computer device, and storage medium
CN112561908B (en) Mammary gland image focus matching method, device and storage medium
US20210374452A1 (en) Method and device for image processing, and elecrtonic equipment
CN108564044B (en) Method and device for determining pulmonary nodule density
Santosh et al. Automatically detecting rotation in chest radiographs using principal rib-orientation measure for quality control
US8306354B2 (en) Image processing apparatus, method, and program
US20190046127A1 (en) Image processing apparatus, image processing method, and program
CN110533120B (en) Image classification method, device, terminal and storage medium for organ nodule
CN114332132A (en) Image segmentation method and device and computer equipment
CN112102235A (en) Human body part recognition method, computer device, and storage medium
CN110634554A (en) Spine image registration method
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
EP4156089A1 (en) Method, device and system for automated processing of medical images to output alerts for detected dissimilarities
CN112381805B (en) Medical image processing method
CN109767468B (en) Visceral volume detection method and device
CN113808130B (en) Intelligent classification method, device and equipment for tumor images and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221