WO2021078040A1 - Method and device for locating a lesion - Google Patents
- Publication number: WO2021078040A1
- Application: PCT/CN2020/120627
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- spine
- medical image
- line
- target
- Prior art date
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
- G06T2207/30012—Spine; Backbone
Definitions
- This application relates to the technical field of medical diagnosis, and in particular to a method and device for locating a lesion.
- the relative position of the lesion and the trunk, that is, which area of the trunk the lesion is in (for example, the front, back, left, or right area), is also of great significance for the diagnosis and treatment of disease.
- manual labeling or additional markers are usually used to determine the relative position of the lesion and the trunk.
- the present application provides a method and device for locating a lesion, aiming to provide an efficient, accurate, and widely applicable lesion locating technology.
- a method for locating lesions including:
- the position parameter of the spine including the center point of the area occupied by the spine;
- the extension line of the spine is a straight line extending from the center point in the anterior direction of the spine, and the anterior direction of the spine is the direction opposite to the direction of the spinous process in the medical image;
- the spinous process points from a first end to a second end, where the end of the spinous process closer to the vertebral body of the spine is the first end, and the other end is the second end;
- the area where the lesion is located is determined.
- the location parameter further includes:
- the angle of the spine, the angle of the spine is the angle between the extension line of the spine and the horizontal direction.
- the determining the intersection of the extension line of the spine and the edge of the trunk includes:
- the center point as the initial starting point, the reference point as the initial end point, and the midpoint between the starting point and the end point as the initial target point.
- the target point located on the edge of the torso is taken as the intersection of the extension line of the spine and the edge of the torso.
- the process of determining the positional relationship between the target point and the torso includes:
- the target point is within the torso
- the target point is outside the torso
- the target point is on the edge of the torso.
- before updating the target point according to the preset steps until the target point is a point on the edge of the torso, the method further includes:
- the torso is segmented from the medical image to obtain a segmented image.
- pixels of the torso are the target pixels, and other pixels are the background pixels.
- the obtaining a reference point according to the center point, the angle, the equation of the straight line, and the size of the medical image includes:
- the intersection of the extension line and the edge of the medical image is obtained as the reference point.
- the detecting the position parameter of the spine from the medical image includes:
- the position parameter of the spine in the medical image is determined according to the first parameter and the second parameter, where the second parameter is the position parameter of the spine in each medical image preceding the current medical image in the medical image sequence.
- the determining the position parameter of the spine in the medical image according to the first parameter and the second parameter includes:
- the weighted sum of the first parameter and the second parameter is used as the position parameter of the spine in the medical image, wherein the weight of the first parameter is smaller than the weight of the second parameter.
- a device for locating lesions including:
- a spine detection unit configured to detect a position parameter of the spine from a medical image, the position parameter of the spine including the center point of the area occupied by the spine;
- the intersection point determination unit is configured to determine the intersection point between the extension line of the spine and the edge of the trunk in the medical image, where the extension line of the spine starts from the center point and extends in the anterior direction of the spine;
- the anterior direction of the spine is the direction opposite to the direction of the spinous process in the medical image;
- the spinous process points from a first end to a second end, where the end of the spinous process closer to the vertebral body of the spine is the first end, and the other end is the second end;
- the first boundary unit is configured to use the line connecting the center point and the intersection point as the boundary line between the left area and the right area of the medical image;
- the second demarcation unit is used to determine the front-to-back demarcation line of the torso in the medical image according to the target perpendicular of the line connecting the center point and the intersection point, where the target perpendicular is, among the perpendiculars of the line connecting the center point and the intersection point, the perpendicular passing through a preset point on that line;
- the area determining unit is used to determine the area where the lesion is located according to the dividing line.
- a processor for running a program wherein the above-mentioned method for locating a lesion is executed when the program is running.
- a storage medium includes a stored program, wherein, when the program is running, the device where the storage medium is located is controlled to execute the above-mentioned method for locating the lesion.
- the lesion location method and device, processor, and storage medium provided in the present application detect the position parameters of the spine from a medical image, and in the medical image, determine the intersection of the extension line of the spine and the edge of the trunk.
- the line connecting the center point and the intersection point is used as the dividing line between the left area and the right area of the medical image.
- the front-to-back boundary line of the torso in the medical image is determined.
- the target perpendicular is the perpendicular that passes through a preset point on the line connecting the center point and the intersection point. According to the dividing lines, the area where the lesion is located is determined.
- Fig. 1a is a schematic diagram of a method for locating a lesion according to an embodiment of the application.
- Fig. 1b is a schematic diagram of a CT image provided by an embodiment of the application.
- Fig. 1c is a schematic diagram of another CT image provided by an embodiment of the application.
- Fig. 1d is a schematic diagram of a medical image provided by an embodiment of the application.
- Fig. 1e is a schematic diagram of another medical image provided by an embodiment of the application.
- FIG. 1f is a schematic diagram of another medical image provided by an embodiment of the application.
- FIG. 2 is a schematic diagram of a specific implementation manner of detecting the position parameter of the spine from a medical image provided by an embodiment of the application.
- FIG. 3 is a schematic diagram of another method for locating a lesion according to an embodiment of the application.
- FIG. 4 is a schematic structural diagram of a device for locating a lesion according to an embodiment of the application.
- the embodiments of the present application provide an efficient, accurate, and widely applicable lesion location technology.
- a schematic diagram of a method for locating a lesion provided in an embodiment of this application includes the following steps.
- the position parameter of the spine includes the position coordinates of the center point of the area occupied by the spine.
- the area occupied by the spine refers to the area of the human spine in a medical image (for example, a CT image).
- specifically, the area occupied by the spine may be the circumscribed polygonal area (for example, a rectangular area) of the region occupied by the imaging pixels of the spine in the cross-sectional medical image.
- the center point is the center of the circumscribed polygonal area (for example, rectangular area).
- the polygonal area and the center point can be referred to the CT image shown in FIG. 1b.
- the coordinate position of the center point of the area occupied by the spine in the medical image can be obtained preliminarily based on the deep learning algorithm.
- S102 In the medical image, determine the intersection of the extension line of the spine and the edge of the trunk.
- the extension line of the spine is a straight line extending forward of the spine from the center point of the area occupied by the spine.
- the anterior direction of the spine is the direction opposite to the direction of the spinous process in the medical image.
- the spinous process points from the first end to the second end, the end of the spinous process closer to the vertebral body of the spine is the first end, and the other end is the second end.
- the extension line of the spine, the intersection of the extension line of the spine and the edge of the trunk, the anterior direction of the spine, the direction of the spinous process, and other physiological structures of the spine can be seen in the CT image shown in FIG. 1c.
- the line connecting the center point and the intersection point divides the medical image into two regions.
- since left and right are already marked in existing CT images, the dividing line between the left region and the right region is given accordingly. It is also possible to mark the two areas as the left area and the right area according to the doctor's marking habits, which is not limited here; see, for example, the medical image shown in Fig. 1d.
- S104 Determine the front-to-back dividing line of the torso in the medical image according to the target perpendicular line connecting the center point and the intersection point.
- the target perpendicular is, among the perpendiculars of the line connecting the center point and the intersection point, the perpendicular that passes through a preset point on that line. Since there are many such perpendiculars, in this embodiment the perpendicular passing through the midpoint of the line connecting the center point and the intersection point can be used as the dividing line in the front-back direction. In this case, the connecting line serves as the left-right dividing line of the torso in the medical image, and the perpendicular at its midpoint further divides the medical image, which has already been divided into two areas, into four areas: the front left, front right, back left, and back right areas of the torso, for example as in the medical image shown in Fig. 1e.
- the preset point is not limited to the midpoint of the line between the center point and the intersection, but can also be a point at 3/5 of the line between the center point and the intersection.
- the specific location of the preset point can be determined by the technician according to the actual situation.
- the settings are not limited in the embodiment of this application.
- the region can be further divided.
- the torso in the medical image is divided into 8 regions according to angle lines that form a preset angle (for example, 45°) with the above-mentioned perpendicular, obtaining the front left, front right, middle front left, middle front right, middle rear left, middle rear right, rear left, and rear right areas of the torso; the specific trunk area distribution can be seen in Fig. 1f.
- S105 Determine the area where the lesion is located according to the dividing line.
- the specific location of the lesion in the medical image can be determined through existing lesion recognition technology. After the specific location of the lesion is determined, the area where the lesion is located can be determined based on that location and the dividing lines. For example, if the lesion lies on the left side of the left-right dividing line and on the front side of the front-back dividing line, the lesion is located in the front left area of the trunk.
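As an illustrative sketch (not the patent's exact implementation), the region test described above reduces to two sign checks: a 2-D cross product against the center-to-intersection line decides left versus right, and a projection past the preset midpoint decides front versus back. Which sign maps to anatomical left depends on the image orientation, so the left/right naming below is an assumption.

```python
def classify_lesion(center, intersection, lesion):
    """Classify a lesion point into front/back-left/right torso regions.

    center, intersection, lesion: (x, y) tuples. The line from the spine
    center point to the trunk-edge intersection is the left-right dividing
    line; its perpendicular through the midpoint is the front-back one.
    """
    cx, cy = center
    px, py = intersection
    lx, ly = lesion
    dx, dy = px - cx, py - cy                # anterior direction of the spine
    mx, my = (cx + px) / 2, (cy + py) / 2    # preset point (midpoint) on the line
    side = dx * (ly - cy) - dy * (lx - cx)   # 2-D cross product: which side of the line
    front = (lx - mx) * dx + (ly - my) * dy  # projection: in front of / behind midpoint
    lr = "left" if side > 0 else "right"     # assumed orientation convention
    fb = "front" if front > 0 else "back"
    return fb + "-" + lr
```

For example, with the center at the origin and the anterior direction along +y, a lesion at (3, 8) falls in the front-right region under this convention.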
- the position parameter of the spine is detected from the medical image, and in the medical image, the intersection of the extension line of the spine and the edge of the trunk is determined.
- the line connecting the center point and the intersection point is used as the dividing line between the left area and the right area of the medical image.
- the front-to-back boundary line of the torso in the medical image is determined.
- the vertical line is a vertical line that passes through a preset point on the line connecting the center point and the intersection point among the vertical lines connecting the center point and the intersection point. According to the dividing line, determine the area where the lesion is located.
- this application, based on the physiological structural characteristics of the spine, determines the intersection point between the extension line of the spine and the edge of the trunk, and further determines the line between the center point of the area occupied by the spine and the intersection point, as well as the perpendicular of that line, thereby determining the dividing lines of the different regions.
- the dividing lines realize the purpose of determining the area where the lesion is located. Because the physiological structure of the spine is relatively stable, this approach has higher accuracy than manual labeling; because it is fully automatic, it is also more efficient than manual labeling; and because no additional markers are needed, its scope of application is wider.
- a schematic diagram of a specific implementation manner of detecting the position parameter of the spine from a medical image includes the following steps.
- S201 Input the medical image into a preset model, and obtain the position parameter of the spine output by the model as the first parameter.
- the preset model includes, but is not limited to, a deep learning model such as a Single Shot MultiBox Detector (SSD) model.
- the specific process of performing target detection on medical images and obtaining the position of the spine includes:
- A1 Use a medical image (for example, a CT image) as the input of the feature extraction module in the target detection model, and perform feature extraction on the medical image to obtain the spine features in the medical image.
- the feature extraction module can be constructed based on the ResNet50+FPN structure.
- the spine feature is used as the input of the prediction module in the target detection model, and the direction and angle of the spine feature are predicted to obtain the direction and angle of the spine.
- the angle of the spine is the angle between the extension line of the spine and the horizontal direction.
- the specific process of predicting the direction of the spine feature includes: performing global max pooling (GMP) on the spine feature, using the processed spine feature as the input of a fully connected network, and using the output of the fully connected network as the direction of the spine feature.
- the specific process of predicting the angle of the spine feature includes: performing global max pooling on the spine feature, using the processed spine feature as the input of another fully connected network, and using the output of that fully connected network as the angle of the spine feature.
- the spine feature is used as the input of the target detection module in the target detection model, and the position of the spine feature is detected to obtain the position coordinates of the center point of the area occupied by the spine.
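The prediction pipeline of steps A1 through A3 (feature extraction, global max pooling, and fully connected heads for direction and angle) can be sketched in NumPy as follows. The weights here are random stand-ins for a trained model, and the feature map stands in for the output of the ResNet50+FPN extractor; this is an assumption-laden illustration of the data flow, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_max_pool(feat):
    """Global max pooling: reduce a (C, H, W) feature map to a (C,) vector."""
    return feat.max(axis=(1, 2))

# Hypothetical stand-ins for a trained backbone's output and head weights.
C = 8
feat = rng.standard_normal((C, 4, 4))                    # spine feature map
W_dir, b_dir = rng.standard_normal((2, C)), np.zeros(2)  # direction head (2 classes)
W_ang, b_ang = rng.standard_normal((1, C)), np.zeros(1)  # angle regression head

v = global_max_pool(feat)             # GMP-processed spine feature
direction_logits = W_dir @ v + b_dir  # fully connected head -> direction scores
angle = (W_ang @ v + b_ang)[0]        # another fully connected head -> angle
```

In practice both heads share the same pooled vector, which is why the description applies GMP once per prediction branch.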
- the training process of the above-mentioned target detection model is similar to its detection process, except that sample medical images, annotated with the specific coordinate positions of the spine features, are used as the input of the initial target detection model.
- S202 Determine the position parameter of the spine in the medical image according to the first parameter and the second parameter.
- the second parameter is the position parameter of the spine in each medical image before the medical image in the medical image sequence, and the weight of the first parameter is smaller than the weight of the second parameter.
- the medical images will be input into the model in the form of image frames.
- the model performs spine detection on each image frame according to the arrangement order of each image frame in the medical image sequence, and obtains the position parameter of the spine in each image frame.
- the EWMA (Exponentially Weighted Moving Average) algorithm can be used to smooth the position parameters across image frames.
- the weighted sum of the first parameter and the second parameter may be used as the position parameter of the spine in the medical image. Based on the characteristics of the EWMA algorithm, the weighted sum of the first parameter and the second parameter is calculated as shown in formula (1):

  y_t = β · y_{t-1} + (1 - β) · x_t    (1)

  where y_t represents the weighted sum of the first parameter and the second parameter, β represents the weight corresponding to the second parameter, y_{t-1} represents the second parameter, (1 - β) represents the weight corresponding to the first parameter, and x_t represents the first parameter.
- the deviation can be reduced by the above formula. For example, suppose the first parameter x_t is (70, 90), the second parameter y_{t-1} is (40, 32), and the weight β corresponding to the second parameter is 0.95; the weighted sum y_t is then (41.5, 34.9).
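One EWMA update for the example values above can be sketched as follows (the function name is illustrative):

```python
def ewma_update(y_prev, x_t, beta=0.95):
    """One EWMA step per formula (1): y_t = beta * y_{t-1} + (1 - beta) * x_t."""
    return tuple(beta * yp + (1 - beta) * xt for yp, xt in zip(y_prev, x_t))

# first parameter x_t = (70, 90), second parameter y_{t-1} = (40, 32)
y_t = ewma_update((40, 32), (70, 90), beta=0.95)  # approximately (41.5, 34.9)
```

Because β is close to 1, the smoothed result stays near the history (40, 32) and only moves slightly toward the new, possibly noisy detection (70, 90).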
- the angle of the spine can also be obtained based on the model, and the EWMA algorithm can likewise be used to optimize the spine angle of each image frame, thereby correcting single-frame errors in the spine angle output by the model and smoothing the final output.
- the deviation can be reduced by the above formula. For example, suppose the first angle parameter x_t is 90°, the second angle parameter y_{t-1} is 32°, and the weight β corresponding to the second angle parameter is 0.95; the smoothed angle y_t is then 34.9°.
- weighted sum is only a specific implementation of S202.
- other calculation methods can also be used to determine the final position parameter based on the first parameter and the second parameter.
- a preset model is used to obtain the position parameter of the spine output by the model as the first parameter. More importantly, the position parameter of the spine in each medical image preceding the current medical image in the medical image sequence is used as the second parameter, and the second parameter is used to optimize the first parameter output by the model; obtaining the final position parameter in this way is beneficial to improving the accuracy of the position parameter.
- a schematic diagram of another method for locating a lesion provided in an embodiment of this application includes the following steps.
- the position parameter includes the position coordinates of the center point of the area occupied by the spine and the angle of the spine, and the angle of the spine is the angle between the extension line of the spine and the horizontal direction.
- the position coordinates of the center point of the area occupied by the spine and the angle of the spine can be obtained based on the steps shown in FIG. 2 above.
- other existing deep learning model algorithms can also be used to obtain the coordinate position of the center point of the area occupied by the spine and the angle of the spine.
- in the line equation y = a·x + b, the value of the slope a is typically tan θ, where θ is the angle of the spine.
- the intersection point of the extension line and the edge of the medical image is obtained, and the intersection point is used as the reference point.
- a rectangular coordinate system is established based on the medical image.
- the two edges of the medical image are respectively used as the x-axis and y-axis of the rectangular coordinate system.
- if the value of B is within the range [0, h], the intersection point (0, B) with the left edge is used as the reference point; if the value of A*w + B is within the range [0, h], the intersection point (w, A*w + B) with the right edge is used as the reference point.
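Under the stated assumptions (line y = A·x + B with A = tan θ, image of width w and height h), the reference-point selection can be sketched as below. Handling of the spine's anterior direction and of near-vertical lines is omitted, and the function name is illustrative.

```python
import math

def reference_point(center, theta_deg, w, h):
    """Find where the spine's extension line meets a vertical image edge.

    center: (x, y) of the spine center point; theta_deg: spine angle in
    degrees; w, h: image width and height. Returns None if neither the
    left-edge nor the right-edge intersection lies within the image.
    """
    A = math.tan(math.radians(theta_deg))  # slope a = tan(theta)
    B = center[1] - A * center[0]          # intercept so the line passes the center
    if 0 <= B <= h:
        return (0.0, B)                    # intersection with the left edge (x = 0)
    if 0 <= A * w + B <= h:
        return (float(w), A * w + B)       # intersection with the right edge (x = w)
    return None
```

For a horizontal line through the center of a 100×100 image, this yields the left-edge point (0, 50).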
- S303 Segment the torso from the medical image to obtain a segmented image.
- the pixels of the torso are the target pixels, and the other pixels are the background pixels.
- the target pixel is displayed as 1 in the segmented image of the CT image
- the background pixel is displayed as 0 in the segmented image of the CT image.
- a threshold segmentation algorithm is used to separate the body parts from the medical image, and the maximum connected domain method is used to exclude other non-trunk parts (such as shoulders, arms) in the body parts, so as to obtain a segmented image with only the torso remaining.
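A pure-Python/NumPy sketch of the described segmentation: thresholding followed by keeping only the largest connected component, so smaller body parts (shoulders, arms) are discarded and only the torso remains. A real pipeline would likely use a library routine such as `scipy.ndimage.label`; the threshold value and function name here are assumptions.

```python
import numpy as np
from collections import deque

def segment_torso(image, threshold):
    """Return a binary mask (1 = torso pixel) of the largest bright component."""
    mask = (image > threshold).astype(np.uint8)
    labels = np.zeros_like(mask, dtype=int)
    best_label, best_size, current = 0, 0, 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                      # pixel already belongs to a component
        current += 1
        size, q = 0, deque([(sy, sx)])
        labels[sy, sx] = current
        while q:                          # BFS flood fill of one component
            y, x = q.popleft()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
        if size > best_size:              # remember the largest component
            best_label, best_size = current, size
    return (labels == best_label).astype(np.uint8)

# Toy example: a small blob (an "arm") and a larger blob (the "torso").
img = np.zeros((5, 5))
img[0:2, 0:2] = 10   # 4-pixel blob, discarded
img[3:5, :] = 10     # 10-pixel blob, kept
seg = segment_torso(img, threshold=5)
```

The resulting segmented image matches the convention above: torso pixels are 1, background pixels are 0.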
- the position of S303 shown here is not the only possible execution order; S303 can also be executed before S301 and/or S302.
- S304 Use the center point as the initial starting point, the reference point as the initial end point, and the midpoint between the start point and the end point as the initial target point, and update the target point according to the preset steps until the target point is a point on the edge of the torso.
- the preset steps include: if the target point is inside the torso, the target point is used as the new starting point, and the reference point is used as the end point to update the target point; if the target point is outside the torso, the target point is used as the new end point, and the center point is used as the starting point to update the target point.
- the target point is within the torso
- the target point is outside the torso
- the target point is on the edge of the torso.
- the center point is (x v , y v )
- the reference point is (x i , y i )
- (x v , y v ) is used as the starting point
- (x i , y i ) is used as the end point
- the midpoint (x m , y m ) between the two points is calculated.
- analyze the distribution of pixels in a window centered on (x m , y m ) (for example, a 3×3 pixel window);
- if the pixels in the window around (x m , y m ) are all 1, then (x m , y m ) is within the torso.
- the target point is not limited to the midpoint between the start point and the end point; it can also be a preset division point on the line between the start point and the end point (for example, the point at 3/5 of the line between the start point and the end point).
- if the coordinates of the start point and the end point become the same, or the start point passes the position of the end point, a new reference point and/or center point is reselected and the preset steps are executed again.
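The preset steps of S304 amount to a bisection between the center point and the reference point, with a window on the segmented image classifying each midpoint as inside, outside, or on the edge of the torso. A sketch under the assumption that `seg` indexes pixels as `seg[y, x]` with torso pixels equal to 1 (function names are illustrative):

```python
import numpy as np

def find_edge_point(seg, center, reference, max_iter=100):
    """Bisect between center and reference until the midpoint is on the torso edge."""

    def window_status(pt):
        # Classify a point by its 3x3 pixel window in the segmented image.
        x, y = int(round(pt[0])), int(round(pt[1]))
        win = seg[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if win.min() == 1:
            return "inside"    # all target pixels
        if win.max() == 0:
            return "outside"   # all background pixels
        return "edge"          # mix of target and background pixels

    start, end = center, reference
    for _ in range(max_iter):
        mid = ((start[0] + end[0]) / 2, (start[1] + end[1]) / 2)
        status = window_status(mid)
        if status == "edge":
            return mid
        if status == "inside":
            start = mid        # move toward the reference point
        else:
            end = mid          # move back toward the center point
    return None                # no edge found; caller should reselect points

# Toy segmented image: torso occupies columns 0..10 of a 21-column strip.
seg = np.zeros((11, 21), dtype=np.uint8)
seg[:, :11] = 1
edge = find_edge_point(seg, (2, 5), (20, 5))
```

Each iteration halves the search interval, so convergence is logarithmic in the distance between the center point and the reference point.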
- S305 Use the target point located on the edge of the trunk as the intersection of the extension line of the spine and the edge of the trunk.
- the line connecting the center point and the intersection point divides the medical image into two regions, and according to the doctor's labeling habits, the two regions are respectively labeled as the left region and the right region.
- S307 Determine the front-to-back dividing line of the torso in the medical image according to the target perpendicular of the line connecting the center point and the intersection point.
- the target perpendicular of the line connecting the center point and the intersection point further divides the medical image, which has already been divided into two areas, into four regions: the connecting line serves as the left-right dividing line of the torso in the medical image, and the target perpendicular serves as the front-to-back dividing line of the torso, yielding the front left, front right, back left, and back right areas of the torso.
- S308 Determine the area where the lesion is located according to the dividing line.
- the area where the lesion is located can be determined according to the specific location and boundary of the lesion.
- the position parameter of the spine is obtained from the medical image, and the position parameter includes the center point of the area occupied by the spine and the angle of the spine, and the torso is segmented from the medical image to obtain the segmented image.
- the reference point is obtained, and the intersection point between the extension line of the spine and the edge of the torso is determined according to the position relationship of the reference point in the segmented image.
- the dividing lines of the medical image are obtained, so that the area where the lesion is located can be determined according to them.
- the reference point obtained based on the center point of the area occupied by the spine and the angle of the spine can accurately obtain the intersection point between the extension line of the spine and the edge of the torso.
- the dividing lines of the medical image are obtained, so as to determine the area where the lesion is located. Since the physiological structure of the spine is relatively stable, the dividing lines obtained from the center point and the intersection point do not suffer from region labeling errors or boundary deviations, so this approach has higher accuracy than manual labeling. Because it is fully automatic, it is also more efficient than manual labeling; furthermore, because no additional markers are required, its scope of application is wide.
- a schematic structural diagram of a lesion locating device provided in this embodiment of the present application includes:
- the spine detection unit 100 is used to detect the position parameter of the spine from the medical image, and the position parameter of the spine includes the center point of the area occupied by the spine.
- the position parameters mentioned in the spine detection unit 100 also include: the angle of the spine, which is the angle between the extension line of the spine and the horizontal direction.
- the spine detection unit 100 is specifically configured to: input the medical image into a preset model, and obtain the position parameter of the spine output by the model as the first parameter. According to the first parameter and the second parameter, the position parameter of the spine in the medical image is determined.
- the second parameter is the position parameter of the spine in each medical image before the medical image in the medical image sequence.
- the spine detection unit 100 determines the position parameter of the spine in the medical image according to the first parameter and the second parameter.
- a specific implementation includes: taking the weighted sum of the first parameter and the second parameter as the position parameter of the spine in the medical image, where the weight of the first parameter is less than the weight of the second parameter.
- the intersection point determination unit 200 is used to determine the intersection point between the extension line of the spine and the edge of the torso in the medical image.
- the extension line of the spine is a straight line extending from the center point in the anterior direction of the spine, and the anterior direction of the spine is the direction opposite to the direction of the spinous process in the medical image; the spinous process points from the first end to the second end, the end of the spinous process closer to the vertebral body of the spine is the first end, and the other end is the second end.
- the specific implementation manner of the intersection point determination unit 200 determining the intersection point between the extension line of the spine and the edge of the trunk includes: obtaining the reference point according to the center point, the angle, the equation of the straight line, and the size of the medical image.
- the torso is segmented from the medical image to obtain a segmented image.
- the pixels of the torso are the target pixels, and the other pixels are the background pixels.
- if the target point is inside the torso, the target point is used as the new starting point and the reference point is used as the end point to update the target point. If the target point is outside the torso, the target point is used as the new end point, and the center point is used as the starting point to update the target point.
- the target point located on the edge of the torso is regarded as the intersection of the extension line of the spine and the edge of the torso.
- the intersection point determination unit 200 obtains the reference point according to the center point, the angle, the equation of the straight line, and the size of the medical image, and the specific implementation method includes: using the center point and the angle to solve the straight line equation to obtain the equation of the extension line. According to the equation of the extension line and the size of the medical image, the intersection point of the extension line and the edge of the medical image is obtained as a reference point.
- the process of determining the positional relationship between the target point and the torso in the intersection point determining unit 200 includes: if the points in the window including the target point are all target pixel points, the target point is within the torso. If the points in the window including the target point are all background pixels, the target point is outside the torso. If the point in the window including the target point includes the background pixel point and the target pixel point, the target point is on the edge of the torso.
- The first dividing unit 300 is configured to take the line connecting the center point and the intersection point as the dividing line between the left and right regions of the medical image.
- The second dividing unit 400 is configured to determine the front-to-back dividing line of the torso in the medical image from a target perpendicular of the line connecting the center point and the intersection point, where the target perpendicular is, among the perpendiculars of that line, the one passing through a preset point on the line.
- The region determination unit 500 is configured to determine the region where the lesion is located according to the dividing lines.
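Once the two boundary lines are available, a lesion point can be assigned to a quadrant with two sign tests: one against the center-to-intersection line (left/right) and one against its perpendicular through the preset point (front/back). A minimal sketch; the function name and the mapping of signs to anatomical labels are assumptions that depend on image orientation:

```python
import numpy as np

def locate_lesion(center, intersection, preset, lesion):
    """Classify a lesion point relative to the two dividing lines.

    center, intersection: endpoints of the left/right dividing line.
    preset: the point on that line through which the perpendicular
            (the front/back dividing line) passes.
    All points are (x, y) image coordinates; the "left"/"right" and
    "anterior"/"posterior" labels here are illustrative.
    """
    c = np.asarray(center, float)
    i = np.asarray(intersection, float)
    p = np.asarray(preset, float)
    q = np.asarray(lesion, float)

    d = i - c  # direction of the left/right dividing line
    # 2D cross product: sign gives the side of the dividing line.
    side = d[0] * (q[1] - c[1]) - d[1] * (q[0] - c[0])
    # Projection along the line past the preset point: sign gives front/back.
    front = d[0] * (q[0] - p[0]) + d[1] * (q[1] - p[1])

    lr = "left" if side > 0 else "right"
    fb = "anterior" if front > 0 else "posterior"
    return lr, fb
```

For example, with the dividing line running from (0, 0) to (10, 0) and the preset point at (5, 0), a lesion at (3, 2) falls on one side and behind the perpendicular, while (7, -1) falls on the other side and in front of it.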
- The position parameters of the spine are detected from the medical image, and the intersection of the extension line of the spine with the edge of the torso is determined in the medical image.
- The line connecting the center point and the intersection point is taken as the dividing line between the left and right regions of the medical image.
- The front-to-back dividing line of the torso in the medical image is determined from the target perpendicular, which is, among the perpendiculars of the line connecting the center point and the intersection point, the one passing through a preset point on that line. The region where the lesion is located is then determined according to the dividing lines.
- This application determines the intersection of the extension line of the spine with the edge of the torso based on the physiological structure of the spine, and uses the line connecting the center point of the spine region to that intersection, together with its perpendicular, as the boundary lines between the different regions, so that the region containing the lesion can be determined from these boundary lines. Because the physiological structure of the spine is relatively stable, this approach is more accurate than manual marking; because it is fully automatic, it is more efficient than manual labeling; and because no additional annotations are required, it has a wide scope of application.
- An embodiment of the present application further provides a processor configured to run a program, where the method for locating a lesion disclosed in the embodiments of the present application is executed when the program runs.
- An embodiment of the present application further provides a storage medium storing a program that, when executed by a processor, implements the method for locating a lesion disclosed in the embodiments of the present application.
- If the functions described in the methods of the embodiments of the present application are implemented as software functional units and sold or used as independent products, they may be stored in a storage medium readable by a computing device.
- The computing device may be a personal computer, a server, a mobile computing device, or a network device.
- The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
Claims (11)
- A method for locating a lesion, comprising: detecting position parameters of the spine from a medical image, the position parameters including the center point of the region occupied by the spine; determining, in the medical image, the intersection of the extension line of the spine with the edge of the torso, the extension line being a straight line starting from the center point and extending in the anterior direction of the spine, the anterior direction being, in the medical image, the direction opposite to the direction in which the spinous process points, the spinous process pointing from a first end to a second end, where the end of the spinous process closer to the vertebral body of the spine is the first end and the other end is the second end; taking the line connecting the center point and the intersection point as the dividing line between the left and right regions of the medical image; determining the front-to-back dividing line of the torso in the medical image from a target perpendicular of the line connecting the center point and the intersection point, the target perpendicular being, among the perpendiculars of that line, the one passing through a preset point on the line; and determining the region where the lesion is located according to the dividing lines.
- The method according to claim 1, wherein the position parameters further include the angle of the spine, the angle of the spine being the angle between the extension line of the spine and the horizontal direction.
- The method according to claim 2, wherein determining the intersection of the extension line of the spine with the edge of the torso comprises: obtaining a reference point from the center point, the angle, the equation of a straight line, and the size of the medical image; taking the center point as the initial start point, the reference point as the initial end point, and the midpoint of the start and end points as the initial target point, and updating the target point according to the following steps until the target point lies on the edge of the torso: if the target point is inside the torso, taking the target point as the new start point and the reference point as the end point and updating the target point; if the target point is outside the torso, taking the target point as the new end point and the center point as the start point and updating the target point; and taking the target point located on the edge of the torso as the intersection of the extension line of the spine with the edge of the torso.
- The method according to claim 3, wherein determining the positional relationship between the target point and the torso comprises: if all points in a window containing the target point are target pixels, the target point is inside the torso; if all points in the window containing the target point are background pixels, the target point is outside the torso; and if the window containing the target point contains both background pixels and target pixels, the target point is on the edge of the torso.
- The method according to claim 4, further comprising, before updating the target point according to the steps until the target point lies on the edge of the torso: segmenting the torso from the medical image to obtain a segmented image, in which the pixels of the torso are the target pixels and the other pixels are the background pixels.
- The method according to any one of claims 3 to 5, wherein obtaining the reference point from the center point, the angle, the equation of a straight line, and the size of the medical image comprises: solving the straight-line equation using the center point and the angle to obtain the equation of the extension line; and obtaining, from the equation of the extension line and the size of the medical image, the intersection of the extension line with the edge of the medical image as the reference point.
- The method according to any one of claims 1 to 6, wherein detecting the position parameters of the spine from the medical image comprises: inputting the medical image into a preset model and taking the position parameters of the spine output by the model as first parameters; and determining the position parameters of the spine in the medical image from the first parameters and second parameters, the second parameters being the position parameters of the spine in each medical image preceding the medical image in a medical image sequence.
- The method according to claim 7, wherein determining the position parameters of the spine in the medical image from the first parameters and the second parameters comprises: taking the weighted sum of the first parameters and the second parameters as the position of the spine in the medical image, wherein the weight of the first parameters is smaller than the weight of the second parameters.
- An apparatus for locating a lesion, comprising: a spine detection unit configured to detect position parameters of the spine from a medical image, the position parameters including the center point of the region occupied by the spine; an intersection point determination unit configured to determine, in the medical image, the intersection of the extension line of the spine with the edge of the torso, the extension line being a straight line starting from the center point and extending in the anterior direction of the spine, the anterior direction being, in the medical image, the direction opposite to the direction in which the spinous process points, the spinous process pointing from a first end to a second end, where the end of the spinous process closer to the vertebral body of the spine is the first end and the other end is the second end; a first dividing unit configured to take the line connecting the center point and the intersection point as the dividing line between the left and right regions of the medical image; a second dividing unit configured to determine the front-to-back dividing line of the torso in the medical image from a target perpendicular of the line connecting the center point and the intersection point, the target perpendicular being, among the perpendiculars of that line, the one passing through a preset point on the line; and a region determination unit configured to determine the region where the lesion is located according to the dividing lines.
- A processor configured to run a program, wherein the method for locating a lesion according to any one of claims 1 to 8 is executed when the program runs.
- A storage medium storing a program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the method for locating a lesion according to any one of claims 1 to 8.
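Claims 7 and 8 combine the model output for the current image (the first parameters) with the parameters from earlier images in the sequence (the second parameters) as a weighted sum in which the first parameters carry the smaller weight. A minimal sketch, assuming an exponential moving average stands in for the history term and a current-output weight of 0.3 (the claims fix neither choice):

```python
def smooth_parameters(model_outputs, w_first=0.3):
    """Weighted-sum smoothing of per-image spine parameters (claim 8 sketch).

    model_outputs: list of (cx, cy, angle) tuples produced by the detection
                   model, one per image in the sequence.
    w_first: weight of the current model output; per claim 8 it must be
             smaller than the weight of the history term (here 1 - w_first).
    """
    smoothed = []
    history = None
    for first in model_outputs:
        if history is None:
            history = first  # first image: no earlier parameters exist yet
        else:
            # Blend the current output with the accumulated history.
            history = tuple(w_first * f + (1 - w_first) * h
                            for f, h in zip(first, history))
        smoothed.append(history)
    return smoothed
```

Weighting the history term more heavily suppresses per-slice jitter from the model, which matters because the spine's position changes only gradually between adjacent images in a sequence.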
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911002514.X | 2019-10-21 | ||
CN201911002514.XA CN110752029B (zh) | 2019-10-21 | 2019-10-21 | 一种病灶的定位方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021078040A1 true WO2021078040A1 (zh) | 2021-04-29 |
Family
ID=69279201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/120627 WO2021078040A1 (zh) | 2019-10-21 | 2020-10-13 | 一种病灶的定位方法及装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110752029B (zh) |
WO (1) | WO2021078040A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110752029B (zh) * | 2019-10-21 | 2020-08-28 | 北京推想科技有限公司 | 一种病灶的定位方法及装置 |
CN113112467B (zh) * | 2021-04-06 | 2023-04-07 | 上海深至信息科技有限公司 | 一种平面图标注系统 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110164798A1 (en) * | 2008-04-03 | 2011-07-07 | Fujifilm Corporation | Apparatus, method, and program for detecting three dimenmsional abdominal cavity regions |
US20120207268A1 (en) * | 2011-01-04 | 2012-08-16 | Edda Technology (Suzhou) Ltd. | System and methods for functional analysis of soft organ segments in spect-ct images |
CN104582579A (zh) * | 2012-10-23 | 2015-04-29 | 株式会社日立医疗器械 | 图像处理装置及椎管评价方法 |
CN105496563A (zh) * | 2015-12-04 | 2016-04-20 | 上海联影医疗科技有限公司 | 标定医学图像定位线的方法 |
CN107292928A (zh) * | 2017-06-16 | 2017-10-24 | 沈阳东软医疗系统有限公司 | 一种血管定位的方法及装置 |
CN109509186A (zh) * | 2018-11-09 | 2019-03-22 | 北京邮电大学 | 基于大脑ct图像的缺血性脑卒中病灶检测方法及装置 |
CN110752029A (zh) * | 2019-10-21 | 2020-02-04 | 北京推想科技有限公司 | 一种病灶的定位方法及装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8374892B2 (en) * | 2010-01-25 | 2013-02-12 | Amcad Biomed Corporation | Method for retrieving a tumor contour of an image processing system |
WO2015158372A1 (en) * | 2014-04-15 | 2015-10-22 | Elekta Ab (Publ) | Method and system for calibration |
KR102233966B1 (ko) * | 2014-05-12 | 2021-03-31 | 삼성전자주식회사 | 의료 영상 정합 방법 및 그 장치 |
CN106600591B (zh) * | 2016-12-13 | 2019-12-03 | 上海联影医疗科技有限公司 | 一种医学图像方位显示方法及装置 |
CN107808377B (zh) * | 2017-10-31 | 2019-02-12 | 北京青燕祥云科技有限公司 | 一种肺叶中病灶的定位装置 |
2019
- 2019-10-21: CN application CN201911002514.XA, granted as CN110752029B (status: Active)

2020
- 2020-10-13: WO application PCT/CN2020/120627, published as WO2021078040A1 (status: Application Filing)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283361A (zh) * | 2021-06-02 | 2021-08-20 | 广东电网有限责任公司广州供电局 | 一种绝缘层破损识别模型训练方法、识别方法和装置 |
CN113283361B (zh) * | 2021-06-02 | 2022-08-12 | 广东电网有限责任公司广州供电局 | 一种绝缘层破损识别模型训练方法、识别方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
CN110752029B (zh) | 2020-08-28 |
CN110752029A (zh) | 2020-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021078040A1 (zh) | 一种病灶的定位方法及装置 | |
CN111046717B (zh) | 眼底图像黄斑中心定位方法、装置、电子设备及存储介质 | |
US8150132B2 (en) | Image analysis apparatus, image analysis method, and computer-readable recording medium storing image analysis program | |
US10332267B2 (en) | Registration of fluoroscopic images of the chest and corresponding 3D image data based on the ribs and spine | |
CN110021025B (zh) | 感兴趣区域的匹配和显示方法、装置、设备及存储介质 | |
JP6564018B2 (ja) | 放射線画像の肺野セグメンテーション技術及び骨減弱技術 | |
WO2021184600A1 (zh) | 一种图像分割方法及装置、设备及计算机可读存储介质 | |
US9299148B2 (en) | Method and system for automatically determining a localizer in a scout image | |
WO2019196099A1 (zh) | 医学图像内目标对象的边界定位方法、存储介质及终端 | |
US20120271162A1 (en) | Constrained Registration for Motion Compensation in Atrial Fibrillation Ablation Procedures | |
US9514384B2 (en) | Image display apparatus, image display method and storage medium storing image display program | |
JP6001783B2 (ja) | 神経繊維構造の定位 | |
CN110969698A (zh) | 颞骨空间坐标系的构建方法、空间定位方法及电子设备 | |
US20210271914A1 (en) | Image processing apparatus, image processing method, and program | |
CN108846830A (zh) | 对ct中腰椎自动定位的方法、装置以及存储介质 | |
JP2015136566A (ja) | 画像処理装置、およびプログラム | |
CN114092475B (zh) | 病灶长径确定方法、图像标注方法、装置及计算机设备 | |
CN113613562A (zh) | 对x射线成像系统进行定位 | |
CN111311655A (zh) | 多模态图像配准方法、装置、电子设备、存储介质 | |
CN113643176A (zh) | 一种肋骨显示方法和装置 | |
WO2011163414A2 (en) | Mechanism for advanced structure generation and editing | |
CN105825519A (zh) | 用于处理医学影像的方法和装置 | |
Jeon et al. | Maximum a posteriori estimation method for aorta localization and coronary seed identification | |
Fatima et al. | Vertebrae localization and spine segmentation on radiographic images for feature‐based curvature classification for scoliosis | |
JP2001118058A (ja) | 画像処理装置及び放射線治療計画システム |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20878422; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20878422; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/09/2022) |