CN111080573B - Rib image detection method, computer device and storage medium - Google Patents

Rib image detection method, computer device and storage medium

Info

Publication number
CN111080573B
CN111080573B (application CN201911133164.0A)
Authority
CN
China
Prior art keywords
image
region
interest
candidate
rib
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911133164.0A
Other languages
Chinese (zh)
Other versions
CN111080573A (en)
Inventor
宋燕丽
宣锴
吴迪嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN201911133164.0A priority Critical patent/CN111080573B/en
Publication of CN111080573A publication Critical patent/CN111080573A/en
Application granted granted Critical
Publication of CN111080573B publication Critical patent/CN111080573B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Abstract

The application relates to a rib image detection method, a computer device and a storage medium. The method comprises the following steps: acquiring an original medical image, the original medical image comprising ribs; performing unfolding treatment on the ribs in the original medical image to obtain unfolded images of the ribs; inputting the expanded image of the rib into a first neural network model for processing to obtain candidate positions of the region of interest; performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest; and inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest. By adopting the method, the detection time can be saved and the detection accuracy can be improved.

Description

Rib image detection method, computer device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a rib image detection method, a computer device, and a storage medium.
Background
The ribs are arc-shaped bones that form the skeletal framework of the thorax: the anterior ends connect to the sternum and the posterior ends to the thoracic vertebrae. The ribs therefore protect the thoracic cavity, the lungs, and the heart, and are important to the other organs of the chest, which makes rib detection correspondingly important.
In the related art, rib detection generally scans the ribs of the human body with a scanning device, reconstructs the scanned data to obtain a rib image, and inputs the obtained rib image into a neural network model for processing to obtain a result of whether the ribs are diseased.
However, the rib detection using the above technique has the problems that the detection process is time-consuming and the detection result is inaccurate.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a rib image detection method, apparatus, computer device, and storage medium that can reduce detection time and improve detection accuracy.
A rib image detection method, the method comprising:
acquiring an original medical image, the original medical image comprising ribs;
performing unfolding treatment on the ribs in the original medical image to obtain unfolded images of the ribs;
Inputting the expanded image of the rib into a first neural network model for processing to obtain candidate positions of the region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest;
and inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest.
A rib image detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring an original medical image, wherein the original medical image comprises ribs;
the unfolding module is used for conducting unfolding processing on the ribs in the original medical image to obtain unfolded images of the ribs;
the first processing module is used for inputting the expanded image of the rib into the first neural network model for processing to obtain candidate positions of the region of interest;
the mapping module is used for carrying out region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest;
And the second processing module is used for inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an original medical image, the original medical image comprising ribs;
performing unfolding treatment on the ribs in the original medical image to obtain unfolded images of the ribs;
inputting the expanded image of the rib into a first neural network model for processing to obtain candidate positions of the region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest;
And inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest.
A readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
Acquiring an original medical image, the original medical image comprising ribs;
performing unfolding treatment on the ribs in the original medical image to obtain unfolded images of the ribs;
inputting the expanded image of the rib into a first neural network model for processing to obtain candidate positions of the region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest;
And inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest.
According to the rib image detection method, the device, the computer equipment and the storage medium, the original medical image comprising the ribs is obtained, the ribs are unfolded, after the ribs are unfolded, the rib unfolding diagram is input into the first neural network model to obtain candidate positions of the region of interest, region division is carried out on the original medical image according to the candidate positions of the region of interest to obtain candidate image regions comprising the candidate positions of the region of interest, and the candidate image regions are input into the second neural network model to obtain target positions of the region of interest. In the method, as two-stage network detection is adopted when the region of interest is detected, the detection accuracy of the method is higher; in addition, as the method adopts the rib unfolding image when the region of interest is initially positioned, the candidate position of the region of interest can be rapidly obtained, namely, a part of detection time can be saved; meanwhile, when the region of interest is finely detected, the candidate image region obtained by mapping the candidate position onto the original image is used as the input of fine detection, so that the accuracy of an input image source can be ensured, and the finally obtained target position of the region of interest can be more accurate.
Drawings
FIG. 1 is an internal block diagram of a computer device in one embodiment;
FIG. 2 is a flow chart of a rib image detection method according to an embodiment;
FIG. 3 is a flow chart of a rib image detection method according to another embodiment;
FIG. 4a is a flowchart of a rib image detection method according to another embodiment;
FIG. 4b is a detailed view of a rib unfolding process according to another embodiment;
FIG. 5a is a flowchart of a rib image detection method according to another embodiment;
FIG. 5b is a schematic diagram of a classification model according to another embodiment;
FIG. 6 is a flowchart of a rib image detection method according to another embodiment;
FIG. 7a is a flowchart of a rib image detection method according to another embodiment;
FIG. 7b is a schematic diagram of a first neural network model according to another embodiment;
fig. 8 is a block diagram showing a structure of a rib image detection apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
At present, rib disease diagnosis is an important part of clinical chest diagnosis and treatment, and many research groups and manufacturers have developed systems for rib disease detection. However, on the one hand the ribs form obliquely curved tubular structures, so the true rib regions occupy very little of the image and are difficult to separate effectively from other regions; on the other hand, the proportion of true lesion regions on the ribs is even smaller. In existing rib detection, a scanning device scans the human ribs, the scanned data are reconstructed into a rib image, and the rib image is input into a neural network model to determine whether the ribs are diseased. This process involves much redundant computation, so detection is time-consuming and its accuracy is low. The embodiments of the present application provide a rib image detection method, apparatus, computer device, and storage medium that aim to solve these problems.
The rib image detection method provided by the embodiment of the application can be applied to computer equipment, and an internal structure diagram of the computer equipment can be shown as shown in fig. 1. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a rib image detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
The execution subject of the embodiments of the present application may be a rib image detection device or a computer device, and the following embodiments will describe the execution subject using the computer device as the execution subject.
In one embodiment, a rib image detection method is provided, and this embodiment relates to a specific process of how to expand a rib and perform secondary detection on a rib expanded image. As shown in fig. 2, the method may include the steps of:
s202, acquiring an original medical image, wherein the original medical image comprises ribs.
The method for acquiring the original medical image may include: obtaining an original medical image of the subject by image reconstruction and correction of data of the subject acquired by a scanning device, which may be an MR (Magnetic Resonance) device, a CT (Computed Tomography) device, a PET (Positron Emission Tomography) device, a PET-CT device, a PET-MR device, or the like; or the original medical image may be reconstructed and corrected in advance and stored in the computer device, and read directly from the memory of the computer device when it needs to be processed; or the computer device may obtain the original medical image from an external device, for example from cloud storage, and retrieve it from the cloud when the processing operation needs to be performed. The present embodiment does not limit the manner in which the original medical image is acquired.
Specifically, the computer device may acquire the original medical image through the above means, where the original medical image may include ribs, and may include a vertebral body, and may include other structures, and so on.
S204, unfolding the ribs in the original medical image to obtain an unfolded image of the ribs.
When the ribs are unfolded, part of the ribs or all of the ribs may be unfolded; in this embodiment all of the ribs are mainly unfolded. The unfolding may be performed along the ribs themselves, along the central vertebral bodies of the ribs, or along other directions, and so on.
Specifically, when the rib is unfolded, the rib, the vertebral body and the like in the original medical image can be detected first to obtain a rib segmentation result and a vertebral body detection result, and the data of the original medical image is stretched and flattened according to the rib segmentation result and the vertebral body detection result to obtain an unfolded image of the rib. Compared with the original medical image, the unfolding image of the rib is more convenient for the subsequent neural network model processing, and the time consumption of calculation can be reduced.
S206, inputting the unfolded image of the rib into the first neural network model for processing to obtain candidate positions of the region of interest.
The candidate position of the region of interest may be one position or a plurality of positions, and each candidate position may be a coordinate, such as a one-dimensional, two-dimensional, or three-dimensional coordinate. The first neural network model may be a segmentation model, such as a V-Net model or a U-Net model. The region of interest here may be a lesion on a rib or the like; because there may be more than one lesion on the ribs, and because this first detection of the region of interest may itself be inaccurate, there may be a plurality of candidate positions here.
In addition, before the expanded image of the rib is input into the first neural network model, the expanded image may be preprocessed, for example normalized using a bone window or using the mean and standard deviation. If a bone window is used, normalization is performed with the window width and level of bone tissue: a maximum value Imax (e.g. 1000) and a minimum value Imin (e.g. 0) are set, and each intensity is normalized by the formula Ic = (Ic - Imin) / (Imax - Imin). If the mean and standard deviation are used, the image may be processed as (Ic - mean) / standard deviation, or a fixed threshold may be applied.
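As an illustration of this preprocessing, the following Python sketch normalizes an unfolded rib image either with a bone window or with the mean and standard deviation. The clipping to the window and the small epsilon term are assumptions added for numerical safety, not requirements of the method.

```python
import numpy as np

def normalize_bone_window(image, i_min=0.0, i_max=1000.0):
    # Bone-window normalization: Ic = (Ic - Imin) / (Imax - Imin),
    # with intensities clipped to the window first (clipping is an assumption).
    image = np.clip(image.astype(np.float32), i_min, i_max)
    return (image - i_min) / (i_max - i_min)

def normalize_mean_std(image, eps=1e-8):
    # Alternative normalization: (Ic - mean) / standard deviation.
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + eps)
```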
Specifically, after obtaining the expanded image of the rib, the computer device may input the expanded image of the rib into the first neural network model for segmentation or detection processing, so as to obtain candidate positions of the region of interest on the expanded image of the rib.
S208, performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest.
Specifically, after obtaining the candidate position of the region of interest, the computer device may set the rib expanded image and the original medical image under the same coordinate system, then map the candidate position of the region of interest onto the original medical image, that is, find the corresponding candidate position of the region of interest on the original medical image, and then perform region division on the original medical image with the found candidate position of the region of interest as a center and with a certain step length, so as to obtain an image region including the candidate position of the region of interest, and record the image region as a candidate image region, where the candidate image region may also be a candidate image block. The step size may be an integer value between 20mm×20mm and 64mm×64mm, and of course, other step size values may be used. In addition, the candidate image area or the candidate image block may be two-dimensional or three-dimensional, and then, the candidate image area or the candidate image block may be one or more, and a plurality of candidate image areas or candidate image blocks are mainly used in the embodiment.
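The region division described above can be sketched as cropping a fixed-size block of the original volume around each mapped candidate position. In the sketch below the block size of 48 voxels per side is an assumed stand-in for the 20 mm-64 mm window mentioned in the text, and the resampling to a preset resolution is not shown.

```python
import numpy as np

def crop_candidate_region(volume, center, size=(48, 48, 48)):
    # Crop a candidate image block centered on a candidate position that has
    # already been mapped onto the original medical image.
    # `center` is a (z, y, x) voxel index; `size` is the block size in voxels.
    half = [s // 2 for s in size]
    slices = []
    for c, h, dim in zip(center, half, volume.shape):
        lo = max(int(c) - h, 0)
        hi = min(lo + 2 * h, dim)
        lo = max(hi - 2 * h, 0)  # shift the window back inside the volume if needed
        slices.append(slice(lo, hi))
    return volume[tuple(slices)]
```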
S210, inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
Wherein the second neural network model may be a classification model, a segmentation model, etc., which may be a convolutional neural network, etc. Before inputting the candidate image area into the neural network model, the candidate image area may be resampled into an image area with a preset resolution, for example, between 0.4mm by 0.4mm and 1.0mm by 1.0mm, so that a plurality of candidate image areas may be conveniently and uniformly processed.
Specifically, after obtaining the candidate image region, the computer device may input the candidate image region into the second neural network model for segmentation processing or classification processing, and determine the target position, that is, the more accurate position, of the region of interest from the candidate positions of the region of interest.
In the rib image detection method, an original medical image comprising a rib is obtained, the rib is unfolded, after the rib is unfolded, a rib unfolding diagram is input into a first neural network model to obtain candidate positions of an interested region, region division is carried out on the original medical image according to the candidate positions of the interested region to obtain candidate image regions comprising the candidate positions of the interested region, and the candidate image regions are input into a second neural network model to obtain target positions of the interested region. In the method, as two-stage network detection is adopted when the region of interest is detected, the detection accuracy of the method is higher; in addition, as the method adopts the rib unfolding image when the region of interest is initially positioned, the candidate position of the region of interest can be rapidly obtained, namely, a part of detection time can be saved; meanwhile, when the region of interest is finely detected, the candidate image region obtained by mapping the candidate position onto the original image is used as the input of fine detection, so that the accuracy of an input image source can be ensured, and the finally obtained target position of the region of interest can be more accurate.
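For orientation, the sketch below strings steps S202-S210 together in Python. It is only an illustrative outline: unfold_ribs, coarse_net, fine_net, and crop_candidates are hypothetical placeholders standing in for the unfolding step, the two neural network models, and the region division, not functions defined by this application.

```python
def detect_rib_rois(original_image, unfold_ribs, coarse_net, fine_net, crop_candidates):
    # S204: unfold the ribs of the original medical image into an unfolded image.
    unfolded = unfold_ribs(original_image)

    # S206: coarse localization on the unfolded image -> candidate positions.
    candidate_positions = coarse_net(unfolded)

    # S208: map candidates back onto the original image and crop candidate regions.
    candidate_regions = crop_candidates(original_image, candidate_positions)

    # S210: fine detection on each candidate region -> target positions.
    target_positions = [fine_net(region) for region in candidate_regions]
    return target_positions
```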
In another embodiment, another rib image detection method is provided, and this embodiment relates to a specific process of how to develop a rib to obtain a rib developed image. Based on the above embodiment, as shown in fig. 3, the above 204 may include the following steps:
s302, detecting and processing the original medical image to obtain a rib segmentation result and at least one centrum center point.
Specifically, the rib region may be manually labeled and segmented to obtain a rib segmentation result, or the computer device may input the original medical image into a segmentation model to obtain the rib segmentation result. If a segmentation model is used, it may be a model trained on sample images with corresponding rib labels, rib centerline labels, and the like, that segments the ribs and the rib centerlines simultaneously; in this case the rib segmentation result includes both a rib segmentation result and a rib centerline segmentation result. In addition, the computer device may locate the vertebral bodies in the original medical image by a vertebral-body detection, positioning, and marking method to obtain at least one vertebral body center point; in general a plurality of vertebral body center points are obtained.
S304, analyzing and processing at least one centrum center point, and determining the target direction of the centrum corresponding to the at least one centrum center point.
The analysis processing here may include fitting processing, principal component analysis processing, and the like.
Specifically, the computer device may analyze the plurality of centrum center points obtained in the previous step, determine a direction in which the centrum best meets the actual requirement, and use the direction as the target direction.
S306, unfolding the ribs in the original medical image according to the target direction of the vertebral body to obtain an unfolded image of the ribs.
In this step, when specifically unfolding the ribs, the method steps shown in fig. 4a may optionally be adopted; as shown in fig. 4a, the unfolding includes the following steps S402 to S404:
s402, establishing a coordinate system based on the target direction of the vertebral body and at least one central point of the vertebral body, and determining a distance image of a rib segmentation result under the coordinate system.
The coordinate system may be a cylindrical coordinate system, a polar coordinate system, or the like; this embodiment mainly adopts a cylindrical coordinate system. When the coordinate system is established, it may be based on one detected vertebral body center point, all detected vertebral body center points, or some of them, which is not specifically limited. In addition, the distance image refers to the distances of the rib segmentation result from the coordinate center in the established coordinate system.
Specifically, the computer device may establish a cylindrical coordinate system according to the detected multiple centrum center points and the detected target directions of the centrum, calculate the distance between the rib segmentation result and the coordinate center point in the cylindrical coordinate system, obtain multiple distance values, and form a distance image from the multiple distance values. Wherein each plane of the cylinder is a polar coordinate and the planes are parallel.
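A minimal sketch of computing such a distance image, assuming the rib segmentation result has already been resampled onto a cylindrical grid with axes (z, θ, ρ); that resampling itself, which depends on the detected vertebral body center points and target direction, is not shown here.

```python
import numpy as np

def rib_distance_image(rib_mask_cyl):
    # rib_mask_cyl: binary rib mask on a cylindrical grid with shape (z, theta, rho).
    # For each (z, theta) ray, record the smallest radius containing a rib voxel,
    # or 0 if the ray contains no rib voxel (as in step 3) of the detailed process).
    z_dim, t_dim, r_dim = rib_mask_cyl.shape
    rho = np.zeros((z_dim, t_dim), dtype=np.float32)
    radii = np.arange(r_dim, dtype=np.float32)
    for z in range(z_dim):
        for t in range(t_dim):
            hits = radii[rib_mask_cyl[z, t] > 0]
            if hits.size:
                rho[z, t] = hits.min()  # take the minimum radius if several exist
    return rho
```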
S404, establishing a mapping relation between the distance image and the original medical image according to the distance image and the original medical image of the rib segmentation result under the coordinate system, and mapping the original medical image onto the distance image according to the mapping relation to obtain an expanded image of the rib.
In this step, when the mapping relationship is established, the following steps a and B may be optionally used for establishment:
and step A, carrying out interpolation and smoothing on the distance image of the rib segmentation result under the coordinate system to obtain a processed distance image.
And B, establishing a mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
Specifically, the distance image obtained in the previous step may contain outlying points that would make the subsequent unfolded rib image appear discontinuous, so the distance image is first interpolated; linear interpolation, nonlinear interpolation, spline interpolation, and the like may be used. After interpolation, the distance image may be smoothed, for example by convolution, giving the interpolated and smoothed distance image. Each distance value on the processed distance image can then be mapped back into the original medical image: the original medical image is in a Cartesian coordinate system and the processed distance image is in the cylindrical coordinate system, and by relating each distance value of the processed distance image to the corresponding points of the original medical image, the mapping between the cylindrical and Cartesian coordinate systems, that is, the mapping relation between the processed distance image and the original medical image, is obtained. This mapping relation is akin to a conversion relation or a transformation matrix.
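The interpolation and smoothing step might look as follows. For brevity the sketch uses nearest-neighbour filling for all unknown values (the text additionally uses linear interpolation inside the rib region), and the Gaussian scale is an assumption rather than a value from the text.

```python
import numpy as np
from scipy import ndimage

def interpolate_and_smooth(rho, known_mask, sigma=5.0):
    # rho: distance image rho_{z,theta}; known_mask: True where rho holds a real rib distance.
    # Fill the unknown positions with the nearest known value, then smooth with a
    # large Gaussian kernel, as suggested in the text.
    filled = rho
    missing = ~known_mask
    if missing.any():
        # nearest-neighbour fill: indices of the closest known pixel for every position
        idx = ndimage.distance_transform_edt(
            missing, return_distances=False, return_indices=True)
        filled = rho[tuple(idx)]
    return ndimage.gaussian_filter(filled, sigma=sigma)
```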
After the mapping relation is obtained, each value in the rib segmentation result can be traversed, interpolation and other processing are carried out in the corresponding original medical image, and an expanded image of the rib is obtained.
A detailed process of rib expansion according to the embodiment of the present application will be given below with reference to fig. 4b, where a specific rib expansion process is as follows:
after the computer device acquires the raw medical data, the following steps 1) -6) may be performed, as follows:
1) The original medical image is detected to obtain rib segmentation results and at least one centrum center point, as shown in fig. 4b (a).
2) A cylindrical coordinate system is established from the detected vertebral body center points; each plane of the cylinder is a polar-coordinate plane and the planes are parallel. Principal component analysis or similar processing is applied to the vertebral body center points, and the direction of the largest component is taken as the normal vector z of all two-dimensional polar-coordinate planes. A straight line is interpolated from the vertebral body points, with the points on the line offset from the sternum by about 10-50 mm, and the point where this line intersects each polar-coordinate plane is taken as that plane's center. The resulting cylindrical coordinates are shown in fig. 4b, panel (b): the coordinate along the normal-vector direction is denoted z, the parameters on the polar-coordinate plane corresponding to z are denoted ρ and θ, and ρ is the rib radius. Each distance field is equivalent to one image layer in the three-dimensional cylindrical coordinate system, and offsetting by one radius value yields another distance field.
3) Using the segmentation result and the established cylindrical coordinate system, determine the distance ρ_{z,θ} of the segmentation result from the coordinate center. With z as the ordinate and θ as the abscissa, the ρ_{z,θ} corresponding to the rib segmentation result is shown in fig. 4b, panel (c). Specifically, for each z and θ, all possible values of ρ are traversed: if rib segmentation exists at a given ρ, that ρ is recorded; if no ρ contains rib segmentation, ρ_{z,θ} is recorded as 0; if rib segmentation exists at several values of ρ, their minimum is taken.
4) To fill ρ_{z,θ} and make the resulting unfolded rib image as smooth as possible, the ρ_{z,θ} obtained in 3) is interpolated and smoothed. Linear interpolation may be used within the rib region and nearest-neighbor interpolation outside it, and to smooth the image a large-scale Gaussian kernel may be chosen for convolution. The interpolated and smoothed result ρ′_{z,θ} is shown in fig. 4b, panel (d).
5) ρ′_{z,θ} is projected back into the original medical image data; it can be seen that the surface corresponding to ρ′_{z,θ} is smooth and lies against the ribs, as shown in fig. 4b, panel (e). Using the mapping from cylindrical to Cartesian coordinates, z and θ are traversed and the original medical image (or the rib segmentation image) is interpolated at (z, θ, ρ′_{z,θ}) to form a two-dimensional unfolded rib image (or segmentation image), as shown in fig. 4b, panel (f).
6) Adding or subtracting an offset to all ρ′_{z,θ} yields multiple two-dimensional unfolded rib images, and stacking or splicing these images yields a three-dimensional unfolded rib image.
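A sketch of step 5), sampling the original volume on the smoothed surface ρ′_{z,θ}. It assumes the cylinder axis coincides with the volume z-axis and takes the polar-plane center as a given (cx, cy) with isotropic spacing, whereas the method above derives the axis and centers from the vertebral body center points.

```python
import numpy as np
from scipy import ndimage

def unfold_rib_layer(volume, rho_smooth, center_xy, spacing=1.0):
    # volume: original medical image indexed as (z, y, x).
    # rho_smooth: smoothed distance image rho'_{z,theta} with shape (n_z, n_theta), in mm.
    # center_xy: assumed (cx, cy) polar-plane center in voxel coordinates.
    n_z, n_theta = rho_smooth.shape
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    zz, tt = np.meshgrid(np.arange(n_z, dtype=np.float32), thetas, indexing="ij")
    rr = rho_smooth / spacing                 # radius in voxels (isotropic assumption)
    xs = center_xy[0] + rr * np.cos(tt)       # cylindrical -> Cartesian
    ys = center_xy[1] + rr * np.sin(tt)
    coords = np.stack([zz, ys, xs])           # shape (3, n_z, n_theta)
    # trilinear interpolation of the original image at the mapped coordinates
    return ndimage.map_coordinates(volume, coords, order=1, mode="nearest")
```

Offsetting ρ′_{z,θ} by several radius values and stacking the resulting two-dimensional images, as in step 6), would then give a three-dimensional unfolded rib image.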
According to the rib image detection method above, the original medical image is detected to obtain a rib segmentation result and at least one vertebral body center point, the at least one vertebral body center point is analyzed to obtain the target direction of the corresponding vertebral body, and the ribs in the original medical image are unfolded according to the target direction of the vertebral body to obtain an unfolded image of the ribs. In this embodiment, because the unfolded rib image is obtained according to the rib segmentation result and the target direction of the vertebral body, it is closer to the actual rib situation; that is, the unfolded rib image obtained in this embodiment is more accurate.
In another embodiment, another rib image detection method is provided, and this embodiment relates to a specific process of how to process the candidate image region by using the classification model to obtain the target position of the region of interest if the second neural network model is the classification model. On the basis of the above embodiment, as shown in fig. 5a, the step S210 may include the following steps:
S502, inputting the candidate image area into a classification model to obtain the category of the candidate image area; the categories include target categories and non-target categories.
Here the target category may correspond to a true positive (a genuine region of interest) and the non-target category to a false positive, or the like. The category output may be identified as 0 or 1.
Specifically, after obtaining the candidate image areas, the computer device generally obtains a plurality of candidate image areas, and performs convolution processing such as downsampling on each candidate image area by using a classification model as shown in fig. 5b to obtain features of each candidate image area, and performs full-connection processing and classification processing on the features of each candidate image area by using the full-connection layer and softmax to obtain the category of each candidate image area.
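The classification model of fig. 5b is not specified in detail in the text; the PyTorch sketch below shows one plausible structure under that description, with strided 3-D convolutions for down-sampling, a fully connected layer, and a softmax over two classes. All channel sizes and the use of three convolution stages are assumptions.

```python
import torch
import torch.nn as nn

class CandidateClassifier(nn.Module):
    # Second-stage classifier: down-sampling convolutions -> fully connected -> softmax.
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                      # x: (batch, 1, D, H, W) candidate block
        feats = self.features(x).flatten(1)    # (batch, 64) features per candidate
        logits = self.classifier(feats)
        return torch.softmax(logits, dim=1)    # class probabilities (target / non-target)
```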
S504, acquiring a target candidate image area corresponding to the target category, determining a candidate position corresponding to the target candidate image area from the original medical image, and determining the candidate position corresponding to the target candidate image area as the target position of the region of interest.
Specifically, after obtaining the category of each candidate image area, the computer device may find a candidate image area corresponding to the target category from the category identification, record the candidate image area as a target candidate image area, map the target candidate image area back onto the original medical image, obtain a candidate position corresponding to the target candidate image area, and use the candidate position corresponding to the target candidate image area as a fine position of the region of interest, that is, a target position.
In addition, before using the classification model, this embodiment may first train it. During training, sample images are obtained (the sample images may be sample image areas or sample image blocks) and each sample image is augmented; the augmentation includes translation (random within plus or minus 10 mm in each of the three directions), rotation (about a random rotation axis, by a random angle within plus or minus 20 degrees), scaling (randomly by 0.7 to 1.3 times), and the like. Each sample image is labeled with a category, and the initial classification model is then trained on the augmented sample images and the labeled categories to obtain the classification model.
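A sketch of this augmentation using scipy.ndimage. For simplicity it rotates in a randomly chosen coordinate plane rather than about an arbitrary random axis, and treats the millimetre ranges as voxel ranges; both simplifications are assumptions.

```python
import numpy as np
from scipy import ndimage

def augment_patch(patch, rng=None):
    # Random translation (+/-10), rotation (+/-20 degrees), and scaling (0.7-1.3x)
    # of a 3-D sample image block, mirroring the augmentation described above.
    rng = rng or np.random.default_rng()
    shift = rng.uniform(-10, 10, size=3)
    patch = ndimage.shift(patch, shift, order=1, mode="nearest")
    angle = rng.uniform(-20, 20)
    axes = tuple(int(a) for a in rng.choice(3, size=2, replace=False))  # rotation plane
    patch = ndimage.rotate(patch, angle, axes=axes, reshape=False, order=1, mode="nearest")
    zoom = rng.uniform(0.7, 1.3)
    return ndimage.zoom(patch, zoom, order=1)
```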
In the rib image detection method provided by the embodiment, if the second neural network model is a classification model, the candidate image areas are input into the classification model to obtain the category of each candidate image area, the category comprises a target category and a non-target category, the target candidate image area corresponding to the target category is obtained, and the candidate position corresponding to the target candidate image area is determined from the original medical image and is used as the target position of the region of interest. In this embodiment, the classification model is used to process each candidate image area, so that the category of each candidate image area can be quickly and accurately obtained, and the candidate image area corresponding to the target category can be accurately obtained according to the category of each candidate image area, and the target position of the region of interest can be accurately obtained.
In another embodiment, another rib image detection method is provided, and this embodiment relates to a specific process of how to process the candidate image region by using the segmentation model to obtain the target position of the region of interest if the second neural network model is the segmentation model. On the basis of the above embodiment, as shown in fig. 6, the step S210 may include the steps of:
s602, inputting the candidate image area into the segmentation model to obtain the initial target position of the region of interest.
The segmentation model here may be a graph cut algorithm model, a watershed algorithm model, a GrabCut algorithm model, a machine learning model, or the like.
Specifically, after obtaining the candidate image areas, the computer device generally obtains a plurality of candidate image areas, and may perform convolution processing on each candidate image to obtain a fine detection position of the region of interest, which is referred to herein as an initial target position.
S604, fusing the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
Specifically, if the first neural network model is also a segmentation model, the outputs of the first neural network model and of the segmentation model here may both be probability maps. The fusion may then map the probability map produced by the first neural network model onto the original medical image to obtain a probability map I1; the probability map produced by the segmentation model here is I2, and the final probability map is I = a×I1 + b×I2, where a + b = 1, 0 < a < 1, 0 < b < 1. The target position of the region of interest can be obtained from the final probability map by the corresponding position calculation method.
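The weighted fusion can be written directly; the default weight a = 0.5 below is an arbitrary assumption within the stated range.

```python
import numpy as np

def fuse_probability_maps(i1, i2, a=0.5):
    # Final probability map I = a * I1 + b * I2 with a + b = 1, 0 < a < 1, 0 < b < 1.
    assert 0.0 < a < 1.0
    b = 1.0 - a
    return a * np.asarray(i1) + b * np.asarray(i2)
```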
Optionally, after obtaining the position of the region of interest, a connected domain processing may be further performed on the target position of the region of interest to obtain the size of the region of interest; and performing threshold segmentation processing on the size of the region of interest to obtain the category of the region of interest. That is, after the final probability map is obtained, a suitable threshold (optional range 0.3-0.9) may be selected, the final probability map is converted into a binarized map, and morphological operations, such as operations of taking connected domains, are performed on the binarized map, so as to obtain the size and the category of the region of interest.
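A sketch of the binarization and connected-domain step, assuming scipy.ndimage for connected-component labeling. How a component's size maps to a clinical category is not specified in the text, so only the sizes are returned here.

```python
import numpy as np
from scipy import ndimage

def roi_sizes(prob_map, threshold=0.5, voxel_volume=1.0):
    # Binarize the fused probability map with a threshold in the suggested 0.3-0.9 range,
    # then take connected domains and report the size of each component.
    binary = prob_map >= threshold
    labels, num = ndimage.label(binary)                       # connected domains
    sizes = np.bincount(labels.ravel())[1:] * voxel_volume    # skip the background label 0
    return labels, sizes
```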
In addition, before using the segmentation model, this embodiment may first train it. During training, sample images are obtained (the sample images may be sample image areas or sample image blocks) and each sample image is augmented; the augmentation includes translation (random within plus or minus 10 mm in each of the three directions), rotation (about a random rotation axis, by a random angle within plus or minus 20 degrees), scaling (randomly by 0.7 to 1.3 times), and the like. Each sample image is labeled with a region of interest, and the initial segmentation model is then trained on the augmented sample images and the labeled regions of interest to obtain the segmentation model.
In the rib image detection method provided by the embodiment, if the second neural network model is a segmentation model, the candidate image region is input into the segmentation model to obtain the initial target position of the region of interest, and fusion processing is performed on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest. In this embodiment, since fine detection of the rib can be achieved through the two-stage segmentation model, the target position of the region of interest of the rib can be obtained more accurately.
In another embodiment, another rib image detection method is provided, and this embodiment relates to a specific process of how to train the first neural network model. On the basis of the above embodiment, as shown in fig. 7a, the training method of the first neural network model includes the following steps:
s702, acquiring a sample image; the sample image is an expanded sample image of the ribs, the sample image including the labeling locations of the region of interest.
S704, carrying out normalization processing on the sample image to obtain a normalized sample image.
S706, training the initial first neural network model based on the normalized sample image to obtain a first neural network model.
The number of sample images may be one or more; in this embodiment a plurality of sample images is used, and each sample image contains labeling position information of the region of interest. The first neural network model may be a segmentation model such as V-Net or U-Net. Training may use the Adam method, the stochastic gradient descent (SGD) method, and the like. In addition, each sample image may be augmented during training; the augmentation includes translation (random within plus or minus 50 mm in each of the three directions), rotation (about a random rotation axis, by a random angle within plus or minus 20 degrees), scaling (randomly by 0.7 to 1.3 times), and the like.
Specifically, when the computer device acquires the sample medical images, the acquisition method may be the same as the method for acquiring the original medical image in S202, which is not repeated here. In addition, during training the sample image may be randomly sampled and interpolated, where the size of the resulting random block ranges from 40 x 40 to 100 x 100 and the resolution is between 0.4mm x 0.4mm and 1.0mm x 1.0mm; the sizes, gray scales, pixels, and so on of the sample images are then normalized and fixed to a unified range, giving the normalized sample images.
And then, inputting each normalized sample medical image into an initial first neural network model, wherein the first neural network model structure can be shown in fig. 7b, obtaining the predicted position of the region of interest of each sample medical image, calculating the loss between the labeling position of the region of interest and the predicted position of the region of interest according to the labeling position of the region of interest and the predicted position of the region of interest, taking the loss as a value of a loss function, training the initial first neural network model by utilizing the value of the loss function, and finally obtaining the trained first neural network model. Here, the loss may be an error, variance, norm, etc. between the predicted position of the region of interest and the noted position of the region of interest; when the sum of the loss functions of the first neural network model is less than a preset threshold value during training of the first neural network model, or when the sum of the loss functions is basically stable (i.e. no change occurs any more), the first neural network model can be determined to be trained, otherwise, training is continued.
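A compact sketch of such a training loop in PyTorch. The choice of binary cross-entropy as the loss and the concrete stopping tolerances are assumptions; the text only requires some loss between the predicted and labeled positions of the region of interest together with the stopping rule described above.

```python
import torch
import torch.nn as nn

def train_first_network(model, loader, epochs=50, lr=1e-3, tol=1e-4):
    # Train the initial first neural network model with Adam; stop when the summed
    # loss falls below a preset threshold or is essentially stable between epochs.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()        # assumed loss between prediction and label
    previous = float("inf")
    for _ in range(epochs):
        total = 0.0
        for image, label in loader:           # normalized unfolded sample images + ROI labels
            optimizer.zero_grad()
            loss = criterion(model(image), label)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if total < tol or abs(previous - total) < tol:
            break                             # loss small enough or no longer changing
        previous = total
    return model
```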
According to the rib image detection method, the sample image is obtained, the sample image is the unfolded sample image of the rib, the sample image comprises the labeling position of the region of interest, the sample image is normalized to obtain a normalized sample image, and the initial first neural network model is trained based on the normalized sample image to obtain the first neural network model. In this embodiment, since the first neural network model is obtained by training using the sample medical image including the labeling position of the region of interest, the obtained first neural network model is relatively accurate, and thus the obtained processing result is relatively accurate when the rib expanded image is processed by using the accurate network.
It should be understood that, although the steps in the flowcharts of figs. 2-4, 5a, 6, and 7a are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2-4, 5a, 6, and 7a may comprise a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; nor do these sub-steps or stages necessarily execute in sequence, and they may execute in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a rib image detection apparatus including: an acquisition module 10, a deployment module 11, a first processing module 12, a mapping module 13 and a second processing module 14, wherein:
an acquisition module 10 for acquiring a raw medical image, the raw medical image comprising ribs;
the unfolding module 11 is used for conducting unfolding processing on the ribs in the original medical image to obtain unfolded images of the ribs;
The first processing module 12 is configured to input the expanded image of the rib into a first neural network model for processing, so as to obtain candidate positions of the region of interest;
the mapping module 13 is configured to perform region division on the original medical image according to the candidate position of the region of interest, so as to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest;
and the second processing module 14 is used for inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
For specific limitations of the rib image detection apparatus, reference may be made to the above limitation of the rib image detection method, and the description thereof will be omitted.
In another embodiment, another rib image detection apparatus is provided, and the expansion module 11 includes: detection unit, analysis unit and expansion unit, wherein:
the detection unit is used for detecting and processing the original medical image to obtain a rib segmentation result and at least one centrum center point;
the analysis unit is used for analyzing and processing the at least one centrum center point and determining the target direction of the centrum corresponding to the at least one centrum center point;
The unfolding unit is used for unfolding the ribs in the original medical image according to the target direction of the vertebral body to obtain an unfolded image of the ribs.
Optionally, the developing unit may include: determining a subunit and an unfolding subunit, wherein:
a determining subunit, configured to establish a coordinate system based on the target direction of the vertebral body and the at least one central point of the vertebral body, and determine a distance image of the rib segmentation result under the coordinate system;
and the unfolding subunit is used for establishing a mapping relation between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image, and mapping the original medical image onto the distance image according to the mapping relation to obtain an unfolding image of the rib.
Optionally, the expansion subunit is further configured to interpolate and smooth the distance image of the rib segmentation result in the coordinate system, so as to obtain a processed distance image; and establishing a mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
In another embodiment, another rib image detection apparatus is provided, and if the second neural network model is a classification model, the second processing module 14 includes: a classification unit and a first determination unit, wherein:
the classification unit is used for inputting the candidate image area into the classification model to obtain the category of the candidate image area; the categories include a target category and a non-target category;
the first determining unit is used for obtaining a target candidate image area corresponding to the target category, determining a candidate position corresponding to the target candidate image area from the original medical image, and determining the candidate position corresponding to the target candidate image area as the target position of the region of interest.
In another embodiment, another rib image detection apparatus is provided, and if the second neural network model is a segmentation model, the second processing module 14 includes: a dividing unit and a second determining unit, wherein:
the segmentation unit is used for inputting the candidate image area into the segmentation model to obtain an initial target position of the region of interest;
And the second determining unit is used for carrying out fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
Optionally, the second processing module 14 further includes a third determining unit, configured to perform a connected domain fetching process on the target position of the region of interest, to obtain the size of the region of interest; and carrying out threshold segmentation processing on the size of the region of interest to obtain the category of the region of interest.
In another embodiment, another rib image detection apparatus is provided, where, on the basis of the foregoing embodiment, the apparatus may further include a training module, where the training module is configured to obtain a sample image; the sample image is an unfolding sample image of the rib, and the sample image comprises an annotation position of the region of interest; carrying out normalization processing on the sample image to obtain a normalized sample image; training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
For specific limitations of the rib image detection apparatus, reference may be made to the above limitation of the rib image detection method, and the description thereof will be omitted.
The respective modules in the rib image detection apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring an original medical image, the original medical image comprising ribs;
performing unfolding treatment on the ribs in the original medical image to obtain unfolded images of the ribs;
inputting the expanded image of the rib into a first neural network model for processing to obtain candidate positions of the region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes candidate locations of the region of interest;
And inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest.
In one embodiment, the processor when executing the computer program further performs the steps of:
detecting the original medical image to obtain a rib segmentation result and at least one centrum center point;
analyzing and processing the at least one centrum central point, and determining a target direction of the centrum corresponding to the at least one centrum central point;
and according to the target direction of the vertebral body, performing unfolding treatment on the ribs in the original medical image to obtain unfolded images of the ribs.
In one embodiment, the processor when executing the computer program further performs the steps of:
establishing a coordinate system based on the target direction of the vertebral body and the at least one central point of the vertebral body, and determining a distance image of the rib segmentation result under the coordinate system;
and establishing a mapping relation between the distance image and the original medical image according to the distance image of the rib segmentation result under the coordinate system and the original medical image, and mapping the original medical image onto the distance image according to the mapping relation to obtain an unfolded image of the rib.
In one embodiment, the processor when executing the computer program further performs the steps of:
interpolation and smoothing are carried out on the distance image of the rib segmentation result under the coordinate system, and a processed distance image is obtained; and establishing a mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
In one embodiment, the processor when executing the computer program further performs the steps of:
inputting the candidate image area into the classification model to obtain the category of the candidate image area; the categories include a target category and a non-target category;
and acquiring a target candidate image area corresponding to the target category, determining a candidate position corresponding to the target candidate image area from the original medical image, and determining the candidate position corresponding to the target candidate image area as the target position of the region of interest.
In one embodiment, the processor when executing the computer program further performs the steps of:
inputting the candidate image region into the segmentation model to obtain an initial target position of the region of interest;
And carrying out fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
In one embodiment, the processor when executing the computer program further performs the steps of:
performing connected component extraction on the target position of the region of interest to obtain the size of the region of interest;
and performing threshold segmentation on the size of the region of interest to obtain the category of the region of interest.
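As a hedged example of this post-processing, the size of each detected region can be measured by connected component extraction and then thresholded into categories; the threshold value and the category labels below are placeholders, not values from the patent.

    import numpy as np
    from scipy.ndimage import label

    def classify_roi_by_size(target_mask, voxel_volume_mm3=1.0, size_threshold_mm3=100.0):
        """Measure each region of interest via connected component extraction and
        assign a category by thresholding its size."""
        labeled, n = label(target_mask)
        categories = {}
        for i in range(1, n + 1):
            size = np.count_nonzero(labeled == i) * voxel_volume_mm3
            categories[i] = "large" if size >= size_threshold_mm3 else "small"
        return categories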
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a sample image; the sample image is an unfolded sample image of the ribs, and the sample image includes an annotated position of the region of interest;
performing normalization processing on the sample image to obtain a normalized sample image;
and training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
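The normalization step before training might, for instance, clip the unfolded sample image to a fixed intensity window and rescale it to [0, 1]; the window used below is an assumed CT bone range, not a value given in the patent.

    import numpy as np

    def normalize_sample(image, hu_window=(-200.0, 1200.0)):
        """Normalize an unfolded rib sample image to [0, 1] before training the
        initial first neural network model."""
        lo, hi = hu_window
        clipped = np.clip(image.astype(float), lo, hi)
        return (clipped - lo) / (hi - lo)

The normalized samples, together with the annotated positions of the region of interest, would then serve as input-label pairs for standard supervised training of the initial first neural network model.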
In one embodiment, a readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an original medical image, the original medical image comprising ribs;
performing unfolding processing on the ribs in the original medical image to obtain an unfolded image of the ribs;
inputting the unfolded image of the ribs into a first neural network model for processing to obtain candidate positions of the region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes the candidate position of the region of interest;
and inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting the original medical image to obtain a rib segmentation result and at least one vertebral body center point;
analyzing the at least one vertebral body center point to determine a target direction of the vertebral body corresponding to the at least one vertebral body center point;
and performing unfolding processing on the ribs in the original medical image according to the target direction of the vertebral body to obtain an unfolded image of the ribs.
In one embodiment, the computer program when executed by the processor further performs the steps of:
establishing a coordinate system based on the target direction of the vertebral body and the at least one vertebral body center point, and determining a distance image of the rib segmentation result in the coordinate system;
and establishing a mapping relation between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image, and mapping the original medical image onto the distance image according to the mapping relation to obtain the unfolded image of the ribs.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing interpolation and smoothing on the distance image of the rib segmentation result in the coordinate system to obtain a processed distance image;
and establishing the mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the candidate image region into the classification model to obtain the category of the candidate image region; the categories include a target category and a non-target category;
and acquiring a target candidate image region corresponding to the target category, determining a candidate position corresponding to the target candidate image region from the original medical image, and determining the candidate position corresponding to the target candidate image region as the target position of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the candidate image region into the segmentation model to obtain an initial target position of the region of interest;
and performing fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing connected component extraction on the target position of the region of interest to obtain the size of the region of interest;
and performing threshold segmentation on the size of the region of interest to obtain the category of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a sample image; the sample image is an unfolded sample image of the ribs, and the sample image includes an annotated position of the region of interest;
performing normalization processing on the sample image to obtain a normalized sample image;
and training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by a computer program instructing related hardware; the program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this description.
The above embodiments merely represent several implementations of the present application; although they are described in relative detail, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, all of which fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A rib image detection method, the method comprising:
acquiring an original medical image, the original medical image comprising ribs;
performing unfolding processing on the ribs in the original medical image to obtain an unfolded image of the ribs;
inputting the unfolded image of the ribs into a first neural network model for processing to obtain candidate positions of the region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes the candidate position of the region of interest;
inputting the candidate image region into a second neural network model for processing to obtain a target position of the region of interest;
wherein the performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region includes:
setting the unfolded image of the ribs and the original medical image in the same coordinate system, mapping the candidate position of the region of interest onto the original medical image, and performing region division on the original medical image, with the mapped candidate position of the region of interest as the center and a certain step length, to obtain a candidate image region that includes the candidate position of the region of interest.
2. The method according to claim 1, wherein the performing unfolding processing on the ribs in the original medical image to obtain an unfolded image of the ribs comprises:
detecting the original medical image to obtain a rib segmentation result and at least one vertebral body center point;
analyzing the at least one vertebral body center point to determine a target direction of the vertebral body corresponding to the at least one vertebral body center point;
and performing unfolding processing on the ribs in the original medical image according to the target direction of the vertebral body to obtain the unfolded image of the ribs.
3. The method according to claim 2, wherein the performing unfolding processing on the ribs in the original medical image according to the target direction of the vertebral body to obtain the unfolded image of the ribs comprises:
establishing a coordinate system based on the target direction of the vertebral body and the at least one vertebral body center point, and determining a distance image of the rib segmentation result in the coordinate system;
and establishing a mapping relation between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image, and mapping the original medical image onto the distance image according to the mapping relation to obtain the unfolded image of the ribs.
4. The method according to claim 3, wherein the establishing a mapping relation between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image comprises:
performing interpolation and smoothing on the distance image of the rib segmentation result in the coordinate system to obtain a processed distance image;
and establishing the mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
5. The method according to claim 1, wherein the second neural network model is a classification model, and the inputting the candidate image region into the second neural network model for processing to obtain the target position of the region of interest comprises:
inputting the candidate image region into the classification model to obtain the category of the candidate image region; the categories include a target category and a non-target category;
and acquiring a target candidate image region corresponding to the target category, determining a candidate position corresponding to the target candidate image region from the original medical image, and determining the candidate position corresponding to the target candidate image region as the target position of the region of interest.
6. The method according to claim 1, wherein the second neural network model is a segmentation model, and the inputting the candidate image region into the second neural network model for processing to obtain the target position of the region of interest comprises:
inputting the candidate image region into the segmentation model to obtain an initial target position of the region of interest;
and performing fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
7. The method according to claim 6, wherein the method further comprises:
performing connected component extraction on the target position of the region of interest to obtain the size of the region of interest;
and performing threshold segmentation on the size of the region of interest to obtain the category of the region of interest.
8. The method according to claim 1, wherein the training method of the first neural network model comprises:
acquiring a sample image; the sample image is an unfolded sample image of the ribs, and the sample image includes an annotated position of the region of interest;
performing normalization processing on the sample image to obtain a normalized sample image;
and training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
10. A readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN201911133164.0A 2019-11-19 2019-11-19 Rib image detection method, computer device and storage medium Active CN111080573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911133164.0A CN111080573B (en) 2019-11-19 2019-11-19 Rib image detection method, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN111080573A CN111080573A (en) 2020-04-28
CN111080573B true CN111080573B (en) 2024-02-27

Family

ID=70311015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911133164.0A Active CN111080573B (en) 2019-11-19 2019-11-19 Rib image detection method, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN111080573B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968102B (en) * 2020-08-27 2023-04-07 中冶赛迪信息技术(重庆)有限公司 Target equipment detection method, system, medium and electronic terminal
CN112950552B (en) * 2021-02-05 2021-12-17 慧影医疗科技(北京)有限公司 Rib segmentation marking method and system based on convolutional neural network
CN113160242B (en) * 2021-03-17 2023-03-14 中南民族大学 Rectal cancer tumor image preprocessing method and device based on pelvic structure
CN113160199B (en) * 2021-04-29 2022-06-17 武汉联影医疗科技有限公司 Image recognition method and device, computer equipment and storage medium
CN113139954B (en) * 2021-05-11 2023-06-20 上海杏脉信息科技有限公司 Medical image processing device and method
CN113255762B (en) * 2021-05-20 2022-01-11 推想医疗科技股份有限公司 Image processing method and device
CN113610825B (en) * 2021-08-13 2022-03-29 推想医疗科技股份有限公司 Method and system for identifying ribs of intraoperative image
CN115035136B (en) * 2022-08-09 2023-01-24 南方医科大学第三附属医院(广东省骨科研究院) Method, system, device and storage medium for bone subregion segmentation in knee joint image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020233B2 (en) * 2011-09-19 2015-04-28 Siemens Aktiengesellschaft Method and system for up-vector detection for ribs in computed tomography volumes
US10140709B2 (en) * 2017-02-27 2018-11-27 International Business Machines Corporation Automatic detection and semantic description of lesions using a convolutional neural network

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682455A (en) * 2012-05-10 2012-09-19 天津工业大学 Front vehicle detection method based on monocular vision
CN105550985A (en) * 2015-12-31 2016-05-04 上海联影医疗科技有限公司 Organ cavity wall expanding method
CN107798682A (en) * 2017-08-31 2018-03-13 深圳联影医疗科技有限公司 Image segmentation system, method, apparatus and computer-readable recording medium
CN109697449A (en) * 2017-10-20 2019-04-30 杭州海康威视数字技术股份有限公司 A kind of object detection method, device and electronic equipment
CN109035141A (en) * 2018-07-13 2018-12-18 上海皓桦科技股份有限公司 Rib cage expanding unit and method
CN109124662A (en) * 2018-07-13 2019-01-04 上海皓桦科技股份有限公司 Rib cage center line detecting device and method
CN109389587A (en) * 2018-09-26 2019-02-26 上海联影智能医疗科技有限公司 A kind of medical image analysis system, device and storage medium
CN109859233A (en) * 2018-12-28 2019-06-07 上海联影智能医疗科技有限公司 The training method and system of image procossing, image processing model
CN109993726A (en) * 2019-02-21 2019-07-09 上海联影智能医疗科技有限公司 Detection method, device, equipment and the storage medium of medical image
CN110084175A (en) * 2019-04-23 2019-08-02 普联技术有限公司 A kind of object detection method, object detecting device and electronic equipment
CN110458799A (en) * 2019-06-24 2019-11-15 上海皓桦科技股份有限公司 Fracture of rib automatic testing method based on rib cage expanded view

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Automated Rib Fracture Detection of Postmortem Computed Tomography Images Using Machine Learning Techniques; Samuel Gunz et al.; arXiv; 1-12 *
Deep Learning Based Rib Centerline Extraction and Labeling; Matthias Lenga et al.; arXiv; 1-12 *
Research on a New Visualization Method for Rib Fracture Diagnosis; Zhao Xiaofei; China Master's Theses Full-text Database, Medicine and Health Sciences; Vol. 2016, No. 8; E066-198 *
Femur Segmentation Based on Deep Learning; Wang Meng; China Master's Theses Full-text Database, Basic Sciences; Vol. 2019, No. 9; A006-343 *

Also Published As

Publication number Publication date
CN111080573A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN111080573B (en) Rib image detection method, computer device and storage medium
CN108520519B (en) Image processing method and device and computer readable storage medium
CN109993726B (en) Medical image detection method, device, equipment and storage medium
US8437521B2 (en) Systems and methods for automatic vertebra edge detection, segmentation and identification in 3D imaging
CN109754396B (en) Image registration method and device, computer equipment and storage medium
CN110766730B (en) Image registration and follow-up evaluation method, storage medium and computer equipment
US8135189B2 (en) System and method for organ segmentation using surface patch classification in 2D and 3D images
CN111160367A (en) Image classification method and device, computer equipment and readable storage medium
CN110717961B (en) Multi-modal image reconstruction method and device, computer equipment and storage medium
CN111311655B (en) Multi-mode image registration method, device, electronic equipment and storage medium
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
CN110599465B (en) Image positioning method and device, computer equipment and storage medium
EP3722996A2 (en) Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof
EP4156096A1 (en) Method, device and system for automated processing of medical images to output alerts for detected dissimilarities
CN111462071B (en) Image processing method and system
CN112381762A (en) CT rib fracture auxiliary diagnosis system based on deep learning algorithm
US8306354B2 (en) Image processing apparatus, method, and program
CN110533120B (en) Image classification method, device, terminal and storage medium for organ nodule
CN114155193B (en) Blood vessel segmentation method and device based on feature enhancement
Kim et al. Vertebrae localization in CT using both local and global symmetry features
CN111192268A (en) Medical image segmentation model construction method and CBCT image bone segmentation method
CN109087357B (en) Scanning positioning method and device, computer equipment and computer readable storage medium
CN112950684B (en) Target feature extraction method, device, equipment and medium based on surface registration
CN113129418B (en) Target surface reconstruction method, device, equipment and medium based on three-dimensional image
Reddy et al. Anatomical Landmark Detection using Deep Appearance-Context Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant