WO2020001236A1 - Method and device for extracting medical image annotations - Google Patents

Method and device for extracting medical image annotations

Info

Publication number
WO2020001236A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
fitting
region
target
edge
Prior art date
Application number
PCT/CN2019/089765
Other languages
English (en)
French (fr)
Inventor
李莹莹
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to US16/623,527 priority Critical patent/US11379989B2/en
Publication of WO2020001236A1 publication Critical patent/WO2020001236A1/zh
Priority to US17/805,236 priority patent/US11783488B2/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G06T2207/30096 Tumor; Lesion

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a method and device for extracting medical image annotations.
  • Medical imaging plays an extremely important role in the medical field. For example, doctors can perform pathological analysis, disease diagnosis, and so on through medical images. However, at present the medical images taken in most hospitals have their lesion positions marked by doctors adding annotations on the images. Medical images obtained in this way can only be analyzed and diagnosed by professional medical personnel; people without medical experience cannot perform pathological analysis or disease diagnosis from a single medical image alone.
  • an embodiment of the present disclosure provides a method for extracting medical image annotations, including: performing edge detection on a medical image by using an edge detection algorithm to obtain each piece of edge information in the medical image; determining at least one target region defined by each piece of edge information; performing fitting processing on the determined target regions to obtain fitted regions; and extracting the annotations in the medical image by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions.
  • the method before performing edge detection on a medical image by using an edge detection algorithm, the method further includes: filtering the medical image to obtain a filtered medical image.
  • the edge detection is performed on the filtered medical image.
  • the determining at least one target area defined by each edge information includes: performing an expansion operation on at least one area defined by the edge information, and determining the enlarged area as the target area.
  • the medical image is a fundus image
  • the edge detection algorithm uses double thresholds for detection.
  • a first set threshold value of the two thresholds is 80, and a second set threshold value is 150.
  • the performing a fitting process on the target region includes: classifying the target regions, the classification corresponding to different lesion types in the medical image; and performing a fitting process on the target regions belonging to the same class to obtain a fitted region of that class.
  • the classification of the target region further includes: acquiring color characteristics corresponding to the edge information of each target region; and classifying the target region based on the color characteristics.
  • the lesion in the medical image is marked with an ellipse.
  • the edge information includes edge coordinate information.
  • the fitting processing on the target area includes: performing fitting processing on the edge coordinate information of the target area by using an ellipse fitting method to obtain the fitted area.
  • the preset condition is based on at least one of the following: a predetermined value for the long-to-short radius ratio of the fitted region; a predetermined value for the area-to-perimeter ratio of the fitted region; and the fitted region not being a nested fitted region.
  • an embodiment of the present disclosure also provides a device for extracting annotations in a medical image.
  • the device includes: an edge information acquisition circuit, a target region determination circuit, a fitting circuit, and a selection circuit.
  • the edge information acquisition circuit is configured to perform edge detection on a medical image by using an edge detection algorithm to obtain each edge information in the medical image.
  • the target area determination circuit is configured to determine at least one target area defined by each edge information.
  • the fitting circuit is used for performing a fitting process on the determined target area to obtain a fitted area.
  • the selection circuit is configured to select at least one target fitting region that meets the preset condition from each of the fitting regions according to a preset condition set based on the characteristics of the fitting region, to extract the annotation in the medical image.
  • the device further includes: a filtering circuit, configured to filter the medical image to obtain a filtered medical image.
  • the edge information acquisition circuit is configured to perform edge detection on the filtered medical image.
  • the target area determination circuit is further configured to perform an expansion operation on at least one area defined by the edge information, and determine the enlarged area as a target area.
  • the medical image is a fundus image.
  • the edge information acquisition circuit is configured to perform edge detection using a double threshold.
  • a first set threshold value of the dual thresholds is 80 and a second set threshold value is 150.
  • the fitting circuit further includes a classification sub-circuit for classifying the target region based on the edge information.
  • the classification corresponds to different types of lesions in a medical image.
  • the fitting circuit is further configured to perform a fitting process on target regions belonging to the same class to obtain a fitted region of the class.
  • the classification sub-circuit is further configured to obtain a color feature corresponding to the edge information of each target region, and classify the target region based on the color feature.
  • the lesion in the medical image is marked with an ellipse.
  • the edge information includes edge coordinate information.
  • the fitting circuit is configured to perform fitting processing on the edge coordinate information of the target area by using an ellipse fitting method to obtain the fitted area.
  • an embodiment of the present disclosure also provides a storage medium having computer instructions stored thereon. When the computer instructions are executed by the processor, the steps of the method described above are performed.
  • the present disclosure also provides a computer device including a memory, a processor, and computer instructions stored on the memory.
  • the processor is configured to execute computer instructions to perform the steps of the method described above.
  • FIG. 1 shows a flowchart of steps in a method for extracting annotations in a medical image according to an embodiment of the present disclosure
  • FIG. 2a is a schematic diagram of a fundus medical image provided by an embodiment of the present disclosure
  • FIG. 2b is a schematic diagram of a defined edge of each target object in a fundus medical image according to an embodiment of the present disclosure
  • FIG. 2c is a schematic diagram of a fundus medical image after performing a dilation operation according to an embodiment of the present disclosure
  • FIG. 2d shows a schematic diagram of extracting a target region from a fundus medical image according to an embodiment of the present disclosure
  • FIG. 2e illustrates a schematic diagram of different masks in a fundus medical image according to an embodiment of the present disclosure
  • FIG. 2f illustrates a schematic diagram of different masks in a fundus medical image according to an embodiment of the present disclosure
  • FIG. 2g shows a schematic diagram of a fundus medical image after fitting processing provided by an embodiment of the present disclosure
  • FIG. 2h is a schematic diagram of a fundus medical image after fitting processing according to an embodiment of the present disclosure
  • FIG. 2i shows a schematic diagram of a fundus medical image after fitting processing provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of a device for extracting annotations in a medical image according to an embodiment of the present disclosure.
  • the present disclosure provides a method and a device for extracting an annotation in a medical image.
  • the extracted annotated region in the medical image can be applied to subsequent machine learning, thereby achieving the purpose of detecting / analyzing a lesion.
  • FIG. 1 shows a flowchart of steps in a method for extracting annotations in a medical image according to an embodiment of the present disclosure.
  • an edge detection algorithm is used to perform edge detection on a medical image to obtain each edge information in the medical image.
  • Medical images can be obtained by using medical equipment that interacts with the human body by means of a certain medium (such as X-rays, electromagnetic fields, or ultrasound) and reflects the structure and density of human tissues and organs; the images discussed here are obtained after a professional doctor adds annotations to the lesion positions in the image.
  • Fig. 2a shows a schematic diagram of a fundus medical image by way of example. As shown in FIG. 2a, the position circled by the ellipse in the fundus medical image is the area marked by the doctor. These labeled regions are also referred to as target regions, which are regions that need to be extracted according to an embodiment of the present disclosure.
  • the edge of an image refers to the part where the brightness of a local area of the image changes significantly.
  • the gray profile of this area can generally be regarded as a step, that is, within a small buffer region the gray value changes sharply from one value to another value that differs greatly from it.
  • the edge detection algorithm is used to perform edge detection on the medical image, and it is possible to obtain each edge information, such as edge coordinates, of the part where the brightness of the local area in the medical image changes significantly.
  • At least one target area can be defined according to each edge information. For example, in the fundus medical image shown in FIG. 2a, the portion circled / labeled by the ellipse is the target area to be extracted. The edge information of the ellipse used for labeling can be obtained.
  • the part defined by the edge of the ellipse is the target area.
  • the graphics used for labeling are not limited to ellipses, but may include any other suitable graphic used to define a region, including regular shapes (such as circles, squares, or diamonds) or irregular shapes (such as irregular polygons or irregular curves).
  • a Canny edge detection algorithm may be used to perform edge detection on the medical image to determine the edge information in the medical image.
  • In the Canny edge detection algorithm, a dual-threshold method (for example, using a first set threshold and a second set threshold) can be used to find the edge points in the medical image, and each piece of edge information in the medical image, such as the coordinate information of the boundary points, can then be obtained according to the first set threshold and the second set threshold.
  • In an image, a part with a higher brightness gradient is usually more likely to be an edge.
  • However, since there is no exact value that defines how large a brightness gradient must be to count as an edge, a hysteresis threshold is used in the Canny algorithm.
  • the hysteresis threshold requires two thresholds, a high threshold and a low threshold. It is assumed that the important edges in the image are continuous curves, so that the blurred part of a given curve can be tracked while noisy pixels that do not form a curve are not taken as edges. One can therefore start with the larger threshold, which identifies the edges that are most certainly real. Using the direction information derived earlier, these edges are then tracked through the image starting from the identified real edges. During tracking, the smaller threshold is used, so that the blurred part of the curve can be followed until returning to the starting point. Once this process is complete, a binary image is obtained in which each point indicates whether it is an edge point.
  • When the Canny edge detection algorithm is used on a fundus medical image, setting the thresholds too small may introduce too much optic disc information and eyeball edge information, while setting the thresholds too large may lose the information marked by the doctor in the fundus medical image.
  • In an embodiment of the present disclosure, of the first set threshold and the second set threshold, the low threshold may be set to 80 and the high threshold may be set to 150.
  • the gradient of the gray intensity change in the smoothed image can be obtained using a Sobel operator, and the kernel size can be set to 3, thereby extracting annotations in the medical image, such as elliptical annotations.
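  • As a minimal sketch of this step (assuming an OpenCV/NumPy Python environment and that `img` holds the Gaussian-smoothed BGR fundus image; the variable names are illustrative, not taken from the patent), the dual-threshold Canny detection described above could look as follows, with the resulting `canny` edge map feeding the dilation code shown further below:

      import cv2

      # Convert to gray scale, then run Canny with the low/high thresholds named in the text.
      # apertureSize=3 is the Sobel kernel size used to compute the intensity gradient.
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      canny = cv2.Canny(gray, 80, 150, apertureSize=3)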
  • FIG. 2b shows a schematic diagram of a defined edge of each target region in a detected fundus medical image. As shown in FIG. 2b, the circled part of the ellipse is the ellipse marked area marked by the doctor, that is, the position where various fundus diseases may have occurred.
  • the method may further include: filtering the medical image to obtain a filtered medical image.
  • a Gaussian filtering method may be used to process the medical image.
  • the raw data of the medical image can be convolved with a Gaussian mask to achieve denoising processing on the medical image.
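  • The denoising call itself appears in the original description for a BGR image `bgr` (a 3×3 Gaussian kernel) and amounts to:

      import cv2

      # Convolve the raw medical image with a 3x3 Gaussian mask to suppress noise
      # before edge detection.
      img = cv2.GaussianBlur(bgr, (3, 3), 0)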
  • Step 102 Determine at least one target area defined by each edge information.
  • At least one target area defined by each edge information may be determined. For example, referring to FIG. 2b, after obtaining edge information on the fundus medical image in FIG. 2a, according to each edge information, an elliptical area as shown in FIG. 2b can be obtained, that is, a target area defined by each edge information.
  • step 102 may include: performing an expansion operation on at least one area defined by each of the edge information to expand each target area.
  • an expansion (dilation) operation may be performed on at least one target area defined by each piece of edge information, thereby enlarging the edges of the target area so as to remove pits on the edge of, or inside, the target area.
  • the expansion operation may include the following execution process.
  • First, the image of Fig. 2b is convolved with a kernel B of arbitrary shape.
  • the selected kernel B is usually a square or a circle (a square kernel of size 3×3 is adopted in the embodiment of the present disclosure).
  • Kernel B has a definable anchor point, which is usually defined as the kernel center point.
  • During the dilation operation, the kernel B is slid across FIG. 2b, the maximum pixel value of the area covered by the kernel B is extracted, and this maximum pixel value is assigned to the pixel at the anchor point position.
  • this maximization operation will cause the bright areas in the image to "expand", so that a target area with highlighted and extended edges can be obtained.
  • For example, FIG. 2c shows a schematic diagram of a fundus medical image after performing a dilation operation according to an embodiment of the present disclosure. As shown in FIG. 2b and FIG. 2c, after the expansion operation is performed on the target areas, their edges are highlighted and expanded.
  • the code that performs the expansion operation can look like this:
  • kernel1 = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
  • dilated = cv2.dilate(canny, kernel1)
  • FIG. 2d illustrates a schematic diagram of extracting a target region from a fundus medical image according to an embodiment of the present disclosure. After obtaining FIG. 2c, the circled area in FIG. 2c can be extracted, and an image including only the target area can be obtained.
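  • One possible way to obtain such an image (a sketch, not taken from the patent text: it assumes the dilated edge map `dilated` from the code above, the original BGR image `img`, and the OpenCV 4.x return signature of findContours) is to extract the outer contours and keep only the pixels they enclose:

      import cv2
      import numpy as np

      # Outlines of the dilated annotation edges (OpenCV 4.x returns (contours, hierarchy)).
      contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

      # Fill the outlines into a mask and keep only the enclosed pixels, as in Fig. 2d.
      mask = np.zeros_like(dilated)
      cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
      target_only = cv2.bitwise_and(img, img, mask=mask)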
  • Step 103 Perform fitting processing on the determined target area to obtain a fitted area.
  • the coordinates of the center point of the fitted area and the coordinate information of each edge of the fitted area can be obtained, so as to extract the coordinate information of the target area in the medical image for subsequent machine learning.
  • the target area may be classified, the classification corresponding to different lesion types in the medical image. Perform a fitting process on target regions that belong to the same category to obtain a fitted region of that category
  • In an optional embodiment of the present disclosure, before step 103, the following steps may further be included.
  • Step C1: obtain, from the medical image, the color feature corresponding to the edge information of each target area.
  • Step C2: classify the target areas based on the color features. For example, target regions with the same color feature are assigned to the same class.
  • doctors respectively mark lesions of different properties with different colors.
  • Taking FIG. 2a as an example, since there are two different kinds of lesions, the annotations in the fundus medical image can have two colors. Therefore, the desired color features can be selected according to actual needs, and the color feature corresponding to each target region can then be obtained from the image to be detected according to those colors.
  • For each color, 4 to 8 representative pixel values can be extracted, and for each pixel in FIG. 2d the Euclidean distance to each representative pixel value is calculated.
  • If the Euclidean distance is less than or equal to the set threshold, the pixel value is set to (0, 0, 0); if it is greater than the threshold, the pixel value is set to (255, 255, 255). In this way, masks corresponding to the ellipse annotations of the different lesions are obtained.
  • Referring to FIG. 2e and FIG. 2f, there are shown schematic diagrams of different masks in a fundus medical image provided by an embodiment of the present disclosure. After color extraction is performed on FIG. 2d, the different masks shown in FIG. 2e and FIG. 2f can be obtained. After the color feature of each target object is extracted, target objects with the same color feature can be assigned to the same class. For example, the annotations in the fundus medical image of FIG. 2a have two colors, white and black.
  • There are three white areas, which can be denoted a, b, and c, and two black areas, which can be denoted e and d. According to the color features, a, b, and c can be assigned to one class, and e and d to another class.
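  • A sketch of this color-mask step under the rule stated above (the representative pixel values, the distance threshold of 60, and the function name are illustrative assumptions; `target_only` is the extracted-region image from the earlier sketch):

      import numpy as np

      def color_mask(image_bgr, representative_colors, dist_threshold=60.0):
          # Pixels within dist_threshold of any representative colour become (0, 0, 0),
          # all other pixels become (255, 255, 255), giving one mask per annotation colour.
          h, w, _ = image_bgr.shape
          pixels = image_bgr.reshape(-1, 3).astype(np.float32)
          reps = np.asarray(representative_colors, dtype=np.float32)
          dists = np.linalg.norm(pixels[:, None, :] - reps[None, :, :], axis=2)
          near = dists.min(axis=1) <= dist_threshold
          mask = np.where(near[:, None], 0, 255).astype(np.uint8)
          return np.repeat(mask, 3, axis=1).reshape(h, w, 3)

      # e.g. one call per annotation colour, with 4-8 representative values each:
      white_mask = color_mask(target_only, [(255, 255, 255), (250, 250, 250)])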
  • step 103 may include performing an ellipse fitting method on the edge coordinate information of each target region to obtain a fitted region.
  • an ellipse fitting method may be adopted.
  • the edge information of each target region may include edge coordinate information.
  • the edge coordinate information of each target area constitutes the boundary pixel point set of the target area.
  • the fitting region obtained by fitting each boundary pixel in the boundary pixel set by using the ellipse fitting method may include the coordinates of the center point of the fitting region.
  • FIG. 2g, FIG. 2h, and FIG. 2i show schematic diagrams of a fundus medical image after fitting processing provided by an embodiment of the present disclosure. After fitting processing is performed on the regions defined by the ellipses in FIG. 2e, the fitted regions shown in FIG. 2g can be obtained; after fitting processing is performed on the regions defined by the ellipses in FIG. 2f, the fitted regions shown in FIG. 2h can be obtained.
  • After all the target objects in the fundus medical image have been fitted, the annotations in the fundus medical image can be extracted according to the obtained fitted regions to obtain the annotation extraction image shown in FIG. 2i.
  • the center of the fitted circle is the center of the fitted region, and the radius of the fitted circle is the radius of the fitted region, so that the coordinates of the center point of the fitted region, as well as the coordinate information of each boundary point, can be obtained.
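  • The fit itself can be sketched with OpenCV's least-squares ellipse fitting applied to each boundary pixel set (an illustration only; `contours` is assumed to hold the boundary point sets of the target regions of one class, and each contour needs at least five points):

      import cv2

      fitted = []
      for contour in contours:
          if len(contour) < 5:      # cv2.fitEllipse needs at least 5 boundary points
              continue
          (cx, cy), (d1, d2), angle = cv2.fitEllipse(contour)
          major, minor = max(d1, d2), min(d1, d2)   # full axis lengths of the fitted ellipse
          fitted.append({"center": (cx, cy), "axes": (major, minor), "angle": angle})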
  • the medical image from which the coordinate information is extracted can be used for subsequent machine learning to achieve the purpose of detecting a lesion, such as a fundus lesion.
  • the annotation in the medical image is extracted by selecting at least one target fitting region that meets the preset condition from each of the fitting regions according to a preset condition set based on the characteristics of the fitting region.
  • In some embodiments, at least one target fitted region that meets the preset condition may be selected from the fitted regions according to the long-to-short radius ratio or the area-to-perimeter ratio of each fitted region, and according to whether nested fitted regions exist among the fitted regions.
  • a nested fit region is a fit region in the image that is completely within the range of another fit region.
  • a threshold method may be adopted to remove the noise points according to the characteristics of the medical image.
  • the preset condition may include a predetermined value for the long-to-short radius ratio or the area-to-perimeter ratio of the fitted region. For example, a fitted region whose long-to-short radius ratio is greater than the predetermined value of 3, or whose area-to-perimeter ratio is greater than the predetermined value of 800, is mostly generated by noise points at the edge of the eyeball rather than by lesion information; such a region can therefore be considered not to meet the preset condition and is eliminated.
  • the preset condition may be that the fit region is not a nested fit region. In this way, the target fitting region is selected by retaining the outermost fitting region and excluding the fitting region nested in the outermost fitting region.
  • In an optional embodiment of the present disclosure, step 104 may include: obtaining, from the fitted regions, at least one first fitted region that satisfies the predetermined value according to the long-to-short radius ratio or the area-to-perimeter ratio of each fitted region.
  • the predetermined ratio may be set in advance according to actual requirements.
  • the ratio can be set in any suitable way.
  • the magnitude of the ratio can be set experimentally or can be set according to empirical values.
  • When the predetermined condition is set based on a predetermined value of the long-to-short radius ratio, the long-to-short radius ratio of each fitted region may be compared with the predetermined value to eliminate the fitted regions that do not satisfy it. For example, if the predetermined condition is to be less than the predetermined value and the predetermined value is 3, a fitted region whose long-to-short radius ratio is greater than 3 does not satisfy the condition and is eliminated, so that the remaining fitted regions are the first fitted regions.
  • When the predetermined condition is set based on a predetermined value of the area-to-perimeter ratio, the area-to-perimeter ratio of each fitted region may be compared with the predetermined value to eliminate the fitted regions that do not satisfy it. For example, if the predetermined condition is to be less than the predetermined value and the predetermined value is 800, a fitted region whose area-to-perimeter ratio is greater than 800 does not satisfy the condition and is eliminated, so that the remaining fitted regions are the first fitted regions.
  • step 104 may include: determining whether a nested fitting region exists in each of the first fitting regions.
  • If a nested fitting region exists, the inner first fitting region of the nested pair is removed to obtain the target fitting regions; if no nested fitting region exists, all first fitting regions are used as target fitting regions.
  • For example, if there are first fitting regions A and B and region B lies entirely within region A, then first fitting region B belongs to the inner layer and is thus removed.
  • the remaining first fitting region A is used as the target fitting region.
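  • Putting the selection rules together (a sketch under the example thresholds quoted above, i.e. long-to-short radius ratio below 3 and area-to-perimeter ratio below 800, with a simple bounding-box test standing in for the nesting check; the helper names and the `fitted` list of ellipse parameters come from the earlier sketch, not from the patent):

      import math

      def meets_ratio_conditions(e, max_axis_ratio=3.0, max_area_perimeter=800.0):
          a, b = e["axes"][0] / 2.0, e["axes"][1] / 2.0      # semi-axes, a >= b
          if b == 0:
              return False
          area = math.pi * a * b
          # Ramanujan approximation of the ellipse perimeter.
          perimeter = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
          return (a / b) < max_axis_ratio and (area / perimeter) < max_area_perimeter

      def bbox(e):
          (cx, cy), r = e["center"], max(e["axes"]) / 2.0    # coarse axis-aligned bound
          return cx - r, cy - r, cx + r, cy + r

      def is_nested(inner, outer):
          ix0, iy0, ix1, iy1 = bbox(inner)
          ox0, oy0, ox1, oy1 = bbox(outer)
          return ox0 <= ix0 and oy0 <= iy0 and ix1 <= ox1 and iy1 <= oy1

      first_fits = [e for e in fitted if meets_ratio_conditions(e)]
      target_fits = [e for e in first_fits
                     if not any(e is not other and is_nested(e, other) for other in first_fits)]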
  • the method for extracting annotations in medical images can extract specific annotation areas (such as the location of a lesion marked by a doctor, etc.) from medical images by using edge detection, curve fitting, and annotation color extraction.
  • the extracted labeled area of the lesion can be applied to subsequent machine learning to achieve the purpose of detecting the lesion by artificial intelligence.
  • FIG. 3 shows a schematic structural diagram of an extraction apparatus for extracting annotations in a medical image according to an embodiment of the present disclosure.
  • the extraction device may include an edge information acquisition circuit 201, a target region determination circuit 202, a fitting circuit 203, and a selection circuit 204.
  • the edge information acquisition circuit 201 is configured to perform edge detection on a medical image by using an edge detection algorithm to obtain each edge information in the medical image.
  • the target area determination circuit 202 is configured to determine at least one target area defined by each edge information.
  • the fitting circuit 203 is configured to perform a fitting process on the target area to obtain a fitted area.
  • the selecting circuit 204 is configured to select at least one target fitting region that meets the preset condition from each of the fitting regions according to a preset condition set based on the characteristics of the fitting region to extract a label in the medical image.
  • the preset condition may be based on at least one of the following: a predetermined value for the long-to-short radius ratio of the fitted region, a predetermined value for the area-to-perimeter ratio, and the fitted region not being a nested fitted region.
  • the extraction device further includes: a filtering circuit 205, configured to filter the medical image to obtain a filtered medical image.
  • the edge information acquisition circuit 201 is configured to perform edge detection on the filtered medical image.
  • the target area determination circuit 202 is further configured to perform an expansion operation on at least one area defined by the edge information, and determine the enlarged area as a target area.
  • the medical image is a fundus image.
  • the edge information acquisition circuit is configured to use dual thresholds for edge detection, and a first set threshold value of the two thresholds is 80 and a second set threshold value is 150.
  • the fitting circuit 203 further includes: a classification sub-circuit 2031, configured to classify the target region based on the edge information.
  • the classification corresponds to different types of lesions in a medical image.
  • the fitting circuit 203 is further configured to perform a fitting process on target regions belonging to the same class to obtain a fitted region of the class.
  • the classification sub-circuit 2031 is further configured to obtain color characteristics corresponding to the edge information of each target region according to the medical image, and classify the target region based on the color characteristics. For example, the target regions with the same color characteristics are divided into the same class.
  • the edge information includes edge coordinate information.
  • the fitting circuit 203 is further configured to perform an ellipse fitting method on the edge coordinate information of a target region (for example, each target region belonging to the same class) to obtain the fitted region.
  • the selection circuit 204 includes: a first fitted region obtaining sub-circuit 2041, configured to obtain, from the fitted regions, at least one first fitted region that satisfies the predetermined value according to the long-to-short radius ratio or the area-to-perimeter ratio of each fitted region; a nested fitted region judging sub-circuit 2042, configured to judge whether a nested fitted region exists among the first fitted regions; and a target fitted region obtaining sub-circuit, configured to, when a nested fitted region exists among the first fitted regions, remove the inner first fitted region of the nested pair to obtain the target fitted regions.
  • the apparatus for extracting an annotation in a medical image provided by the embodiment of the present disclosure can extract specific annotated regions (such as an annotated lesion area, etc.) in a medical image by using edge detection, curve fitting, and annotation color extraction.
  • the lesion annotations extracted from the medical images can be applied to subsequent machine learning, so that the purpose of detecting lesions by artificial intelligence can be achieved in the medical field.
  • An embodiment of the present disclosure also discloses a storage medium having computer instructions stored thereon, wherein, when the computer instructions are executed by a processor, one or more steps of the method for extracting annotations in a medical image according to any one of the foregoing embodiments are performed.
  • An embodiment of the present disclosure also discloses a computer device including a memory, a processor, and computer instructions stored on the memory, the processor being configured to execute the computer instructions to perform the steps of the method for extracting annotations in a medical image according to any one of the above embodiments.
  • the processor may be a logic operation device with data processing capability and/or program execution capability, such as a central processing unit (CPU), a field programmable gate array (FPGA), a microcontroller unit (MCU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a graphics processing unit (GPU).
  • Computer instructions include one or more processor operations defined by an instruction set architecture corresponding to a processor. These computer instructions may be logically contained and represented by one or more computer programs.
  • the extraction device can also be connected with various input devices (such as a user interface, a keyboard, etc.), various output devices (such as a speaker, etc.), and a display device to realize the interaction between the computer product and other products or users.
  • In the embodiments of the present disclosure, connections and couplings may be made through a network, such as a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
  • the network may include a local area network, the Internet, a telecommunications network, the Internet of Things based on the Internet and / or a telecommunications network, and / or any combination of the above networks, and the like.
  • Wired networks can use, for example, twisted pair, coaxial cable, or fiber optic transmission to communicate.
  • Wireless networks can use, for example, 3G / 4G / 5G mobile communication networks, Bluetooth, Zigbee, or Wi-Fi.
  • circuits, sub-circuits, and block diagrams shown in the drawings described above are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in the form of software, or implemented in one or more hardware modules or integrated circuits, or implemented in different networks and / or processor devices and / or microcontroller devices.
  • the example embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, a mechanical hard disk, a solid-state drive, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a method and device for extracting annotations in a medical image. The method includes: performing edge detection on a medical image by using an edge detection algorithm to obtain each piece of edge information in the medical image; determining at least one target region defined by each piece of edge information; performing fitting processing on the target regions to obtain fitted regions; and extracting the annotations in the medical image by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions. (Representative drawing: FIG. 1)

Description

Method and device for extracting medical image annotations
Related Application
This application claims priority to Chinese Patent Application No. 201810687804.1, filed on June 28, 2018, the entire disclosure of which is incorporated herein by reference as part of this application.
Technical Field
The present disclosure relates to the field of image processing, and in particular to a method and device for extracting medical image annotations.
Background
Medical imaging plays an extremely important role in the medical field. For example, doctors can perform pathological analysis, disease diagnosis, and so on through medical images. However, at present, the medical images taken in most hospitals have their lesion positions marked by doctors adding annotations on the images. Medical images obtained in this way can only be analyzed and diagnosed by professional medical personnel. People without medical experience cannot perform pathological analysis or disease diagnosis from a single medical image alone.
Summary
In a first aspect, an embodiment of the present disclosure provides a method for extracting medical image annotations, including: performing edge detection on a medical image by using an edge detection algorithm to obtain each piece of edge information in the medical image; determining at least one target region defined by each piece of edge information; performing fitting processing on the determined target regions to obtain fitted regions; and extracting the annotations in the medical image by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions.
Optionally, before the edge detection is performed on the medical image by using the edge detection algorithm, the method further includes: filtering the medical image to obtain a filtered medical image. The edge detection is performed on the filtered medical image.
Optionally, determining at least one target region defined by each piece of edge information includes: performing a dilation operation on at least one region defined by the edge information, and determining the enlarged region as the target region.
Optionally, the medical image is a fundus image, and the edge detection algorithm performs detection using dual thresholds.
Optionally, a first set threshold of the dual thresholds is 80, and a second set threshold is 150.
Optionally, performing fitting processing on the target regions includes: classifying the target regions, the classification corresponding to different lesion types in the medical image; and performing fitting processing on the target regions belonging to the same class to obtain a fitted region of that class.
Optionally, lesions of different natures in the medical image are marked with different colors. Classifying the target regions further includes: obtaining the color feature corresponding to the edge information of each target region; and classifying the target regions based on the color features.
Optionally, the lesions in the medical image are marked with ellipses. The edge information includes edge coordinate information. Performing fitting processing on the target regions includes: performing fitting processing on the edge coordinate information of the target regions by using an ellipse fitting method to obtain the fitted regions.
Optionally, the preset condition is based on at least one of the following: a predetermined value for the long-to-short radius ratio of the fitted region; a predetermined value for the area-to-perimeter ratio of the fitted region; and the fitted region not being a nested fitted region.
In a second aspect, an embodiment of the present disclosure further provides a device for extracting annotations in a medical image. The device includes: an edge information acquisition circuit, a target region determination circuit, a fitting circuit, and a selection circuit. The edge information acquisition circuit is configured to perform edge detection on a medical image by using an edge detection algorithm to obtain each piece of edge information in the medical image. The target region determination circuit is configured to determine at least one target region defined by each piece of edge information. The fitting circuit is configured to perform fitting processing on the determined target regions to obtain fitted regions. The selection circuit is configured to extract the annotations in the medical image by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions.
Optionally, the device further includes: a filtering circuit configured to filter the medical image to obtain a filtered medical image. The edge information acquisition circuit is configured to perform edge detection on the filtered medical image.
Optionally, the target region determination circuit is further configured to perform a dilation operation on at least one region defined by the edge information, and to determine the enlarged region as the target region.
Optionally, the medical image is a fundus image. The edge information acquisition circuit is configured to perform edge detection using dual thresholds.
Optionally, a first set threshold of the dual thresholds is 80, and a second set threshold is 150.
Optionally, the fitting circuit further includes: a classification sub-circuit configured to classify the target regions based on the edge information. The classification corresponds to different lesion types in the medical image. The fitting circuit is further configured to perform fitting processing on the target regions belonging to the same class to obtain a fitted region of that class.
Optionally, lesions of different types in the medical image are marked with different colors. The classification sub-circuit is further configured to obtain the color feature corresponding to the edge information of each target region, and to classify the target regions based on the color features.
Optionally, the lesions in the medical image are marked with ellipses. The edge information includes edge coordinate information. The fitting circuit is configured to perform fitting processing on the edge coordinate information of the target regions by using an ellipse fitting method to obtain the fitted regions.
In a third aspect, an embodiment of the present disclosure further provides a storage medium having computer instructions stored thereon. When the computer instructions are executed by a processor, the steps of the method described above are performed.
In a fourth aspect, the present disclosure further provides a computer device including a memory, a processor, and computer instructions stored on the memory. The processor is configured to execute the computer instructions to perform the steps of the method described above.
Brief Description of the Drawings
FIG. 1 is a flowchart of the steps of a method for extracting annotations in a medical image according to an embodiment of the present disclosure;
FIG. 2a is a schematic diagram of a fundus medical image according to an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of the defining edges of target objects in a fundus medical image according to an embodiment of the present disclosure;
FIG. 2c is a schematic diagram of a fundus medical image after a dilation operation according to an embodiment of the present disclosure;
FIG. 2d is a schematic diagram of extracting target regions from a fundus medical image according to an embodiment of the present disclosure;
FIG. 2e is a schematic diagram of different masks in a fundus medical image according to an embodiment of the present disclosure;
FIG. 2f is a schematic diagram of different masks in a fundus medical image according to an embodiment of the present disclosure;
FIG. 2g is a schematic diagram of a fundus medical image after fitting processing according to an embodiment of the present disclosure;
FIG. 2h is a schematic diagram of a fundus medical image after fitting processing according to an embodiment of the present disclosure;
FIG. 2i is a schematic diagram of a fundus medical image after fitting processing according to an embodiment of the present disclosure; and
FIG. 3 is a schematic structural diagram of a device for extracting annotations in a medical image according to an embodiment of the present disclosure.
Detailed Description
In order to make the above objects, features, and advantages of the present disclosure more comprehensible, the present disclosure is further described in detail below with reference to the accompanying drawings and specific embodiments.
The present disclosure provides a method and device for extracting annotations in a medical image. The extracted annotated regions in the medical image can be applied to subsequent machine learning, so that the purpose of detecting/analyzing lesions can be achieved.
FIG. 1 is a flowchart of the steps of a method for extracting annotations in a medical image according to an embodiment of the present disclosure.
In step 101, an edge detection algorithm is used to perform edge detection on a medical image to obtain each piece of edge information in the medical image.
A medical image can be an image that reflects the structure and density of human tissues and organs, obtained by using medical equipment that interacts with the human body by means of a certain medium (such as X-rays, electromagnetic fields, or ultrasound), to which a professional doctor has added annotations at the lesion positions (for example, lesions) in the image.
FIG. 2a exemplarily shows a schematic diagram of a fundus medical image. As shown in FIG. 2a, the positions circled by ellipses in the fundus medical image are the regions marked by the doctor. These marked regions are also referred to as target regions, which are the regions that need to be extracted according to an embodiment of the present disclosure.
The edge of an image refers to the part of a local region of the image where the brightness changes significantly. The gray profile of such a region can generally be regarded as a step, that is, within a very small buffer region the gray value changes sharply from one value to another, quite different, value. Performing edge detection on the medical image with an edge detection algorithm makes it possible to obtain each piece of edge information, such as edge coordinates, of the parts of the medical image where the brightness of a local region changes significantly. At least one target region can be defined according to each piece of edge information. For example, in the fundus medical image shown in FIG. 2a, the parts circled/marked by ellipses are the target regions to be extracted. The edge information of the ellipses used for marking can be obtained, and the part defined by the edge of an ellipse is a target region.
It can be understood that the graphics used for marking are not limited to ellipses, and may include any other suitable graphic used to define a region, including regular shapes (such as circles, squares, or diamonds) or irregular shapes (such as irregular polygons or irregular curves).
In some embodiments, a Canny edge detection algorithm may be used to perform edge detection on the medical image to determine the edge information in the medical image. In the Canny edge detection algorithm, a dual-threshold method (for example, using a first set threshold and a second set threshold) can be used to find the edge points in the medical image, and each piece of edge information in the medical image, such as the coordinate information of the boundary points, can then be obtained according to the first set threshold and the second set threshold.
In an image, the parts with a higher brightness gradient are usually more likely to be edges. However, since there is no exact value that defines how large a brightness gradient must be to count as an edge, a hysteresis threshold is used in the Canny algorithm. The hysteresis threshold requires two thresholds, a high threshold and a low threshold. It is assumed that the important edges in the image are continuous curves, so that the blurred part of a given curve can be tracked while noisy pixels that do not form a curve are not taken as edges. One can therefore start with the larger threshold, which identifies the edges that are most certainly real; using the direction information derived earlier, the entire edges are then tracked through the image starting from these real edges. During tracking, the smaller threshold is used, so that the blurred part of a curve can be followed until returning to the starting point. Once this process is complete, a binary image is obtained in which each point indicates whether it is an edge point.
In an embodiment, when the Canny edge detection algorithm is used to perform edge detection on a fundus medical image, setting the thresholds too small may introduce too much optic disc information and eyeball edge information, while setting the thresholds too large may lose the information marked by the doctor in the fundus medical image.
In an embodiment of the present disclosure, of the first set threshold and the second set threshold, the low threshold may be set to 80 and the high threshold to 150. Exemplarily, the gradient of the gray intensity change in the smoothed image can be obtained using a Sobel operator with a kernel size of 3, so that the annotations in the medical image, such as elliptical annotations, can be extracted. FIG. 2b shows a schematic diagram of the detected defining edges of the target regions in the fundus medical image. As shown in FIG. 2b, the parts circled by ellipses are the elliptical annotation regions marked by the doctor, that is, the positions where various fundus diseases may have occurred.
It can be understood that the specific values of the first set threshold and the second set threshold can be obtained through experiments. The specific experimental procedure can be carried out using experimental methods commonly used by those skilled in the art.
Optionally, before step 101, the method may further include: filtering the medical image to obtain a filtered medical image.
Since the original image usually contains considerable noise, the raw data of the medical image needs to be filtered. Optionally, a Gaussian filtering method may be used to process the medical image. For example, the raw data of the medical image may be convolved with a Gaussian mask to denoise the medical image. For example, the Gaussian mask code may be: img = cv2.GaussianBlur(bgr, (3, 3), 0).
By performing edge detection on the filtered medical image, the problem of inaccurate detection of the elliptical annotation information in the fundus image can be avoided.
In step 102, at least one target region defined by each piece of edge information is determined.
After each piece of edge information in the medical image is obtained according to the first set threshold and the second set threshold, at least one target region defined by each piece of edge information can be determined. For example, referring to FIG. 2b, after the edge information of the fundus medical image in FIG. 2a is obtained, the elliptical regions shown in FIG. 2b, that is, the target regions defined by each piece of edge information, can be obtained according to the edge information.
In an optional embodiment of the present disclosure, step 102 may include: performing a dilation operation on at least one region defined by each piece of edge information to enlarge each target region.
In the embodiment of the present disclosure, a dilation operation may be performed on at least one target region defined by each piece of edge information, thereby enlarging the edges of the target region so as to remove pits on the edge of, or inside, the target region.
For example, the dilation operation may include the following procedure. First, FIG. 2b is convolved with a kernel B of arbitrary shape. The selected kernel B is usually a square or a circle (a square kernel of size 3×3 is adopted in the embodiment of the present disclosure). The kernel B has a definable anchor point, which is usually defined as the center point of the kernel. During the dilation operation, the kernel B is slid across FIG. 2b, the maximum pixel value of the area covered by the kernel B is extracted, and this maximum pixel value is assigned to the pixel at the anchor point position. Clearly, this maximization operation causes the bright areas in the image to "expand", so that target regions with highlighted and extended edges can be obtained. For example, FIG. 2c shows a schematic diagram of a fundus medical image after the dilation operation according to an embodiment of the present disclosure. As shown in FIG. 2b and FIG. 2c, after the dilation operation is performed on the target regions, their edges are highlighted and expanded.
The code for performing the dilation operation can be as follows:
kernel1 = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
dilated = cv2.dilate(canny, kernel1)
FIG. 2d shows a schematic diagram of extracting target regions from a fundus medical image according to an embodiment of the present disclosure. After FIG. 2c is obtained, the circled regions in FIG. 2c can be extracted to obtain an image containing only the target regions.
Step 103: perform fitting processing on the determined target regions to obtain fitted regions. The coordinates of the center point of each fitted region and the coordinate information of each edge of the fitted region can be obtained, so that the coordinate information of the target regions in the medical image is extracted for subsequent machine learning. In some embodiments, the target regions may be classified, the classification corresponding to different lesion types in the medical image, and fitting processing is performed on the target regions belonging to the same class to obtain a fitted region of that class.
In an optional embodiment of the present disclosure, before step 103, the following steps may further be included.
Step C1: obtain, from the medical image, the color feature corresponding to the edge information of each target region;
Step C2: classify the target regions based on the color features. For example, target regions with the same color feature are assigned to the same class.
In the embodiment of the present disclosure, in the medical image the doctor marks lesions of different natures with different colors. Taking FIG. 2a as an example, since there are two different kinds of lesions, the annotations in the fundus medical image can have two colors. The desired color features can therefore be selected according to actual needs, and the color feature corresponding to each target region can then be obtained from the image to be detected according to those colors. For each color, 4 to 8 representative pixel values can be extracted, and for each pixel in FIG. 2d the Euclidean distance to each representative pixel value is calculated. If the Euclidean distance is less than or equal to a set threshold, the pixel value is set to (0, 0, 0); if it is greater than the threshold, the pixel value is set to (255, 255, 255). In this way, the masks corresponding to the elliptical annotations of the different lesions in the medical image are obtained, yielding the images shown in FIG. 2e and FIG. 2f. For example, referring to FIG. 2e and FIG. 2f, schematic diagrams of different masks in a fundus medical image according to an embodiment of the present disclosure are shown. After color extraction is performed on FIG. 2d, the different masks shown in FIG. 2e and FIG. 2f can be obtained. After the color feature of each target object is extracted, target objects with the same color feature can be assigned to the same class. For example, the annotations in the fundus medical image of FIG. 2a have two colors, white and black. There are three white regions, which can be denoted a, b, and c, and two black regions, which can be denoted e and d. According to the color features, a, b, and c can then be assigned to one class, and e and d to another class.
It can be understood that the above example is given only for a better understanding of the technical solution of the embodiment of the present disclosure, and the embodiment of the present disclosure is not limited thereto. In practical applications, those skilled in the art may also classify the target regions in other ways (for example, by the width of the boundary line represented by the boundary information).
In another optional embodiment of the present disclosure, step 103 may include performing fitting processing on the edge coordinate information of each target region by using an ellipse fitting method to obtain the fitted regions.
In the embodiment of the present disclosure, since the annotations added by the doctor to the lesions in the medical image are elliptical annotations, the ellipse fitting method may be adopted.
The edge information of each target region may include edge coordinate information. The edge coordinate information of each target region constitutes the boundary pixel point set of that target region. The fitted region obtained by fitting each boundary pixel in the boundary pixel point set using the ellipse fitting method may include the coordinates of the center point of the fitted region.
The core idea of the ellipse fitting method is: for a given set of sample points on a plane, find an ellipse that is as close as possible to these sample points. That is, a set of data in the medical image is fitted with an ellipse equation as the model, such that a certain ellipse equation satisfies the data as far as possible, and the parameters of that ellipse equation are solved. The center of the finally determined best ellipse is the target center to be determined. For example, FIG. 2g, FIG. 2h, and FIG. 2i show schematic diagrams of a fundus medical image after fitting processing according to an embodiment of the present disclosure. After fitting processing is performed on the regions defined by the ellipses in FIG. 2e, the fitted regions shown in FIG. 2g can be obtained; after fitting processing is performed on the regions defined by the ellipses in FIG. 2f, the fitted regions shown in FIG. 2h can be obtained. After all the target objects in the fundus medical image have been fitted, the annotations in the fundus medical image can be extracted according to the obtained fitted regions to obtain the annotation extraction image shown in FIG. 2i. The center of the fitted circle is the center of the fitted region, and the radius of the fitted circle is the radius of the fitted region, so that the coordinates of the center point of the fitted region, that is, the coordinate information of each boundary point, can be obtained. The medical image with the extracted coordinate information can then be used for subsequent machine learning to achieve the purpose of detecting lesions, such as fundus lesions.
In step 104, the annotations in the medical image are extracted by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions. In some embodiments, at least one target fitted region that meets the preset condition may be selected from the fitted regions according to the long-to-short radius ratio or the area-to-perimeter ratio of each fitted region and the nested fitted regions existing among the fitted regions. A nested fitted region is a fitted region in the image that lies completely within the range of another fitted region.
In the embodiment of the present disclosure, some noise points can be removed with a threshold method according to the characteristics of the medical image. The preset condition may include a predetermined value for the long-to-short radius ratio or the area-to-perimeter ratio of the fitted region. For example, a fitted region whose long-to-short radius ratio is greater than the predetermined value of 3, or whose area-to-perimeter ratio is greater than the predetermined value of 800, is mostly generated by noise points at the edge of the eyeball rather than by lesion information; it can therefore be considered not to meet the preset condition and needs to be eliminated. In another example, the preset condition may be that the fitted region is not a nested fitted region. In this way, the target fitted regions are selected by retaining the outermost fitted regions and eliminating the fitted regions nested within them.
In an optional embodiment of the present disclosure, step 104 may include: obtaining, from the fitted regions, at least one first fitted region that satisfies the predetermined value according to the long-to-short radius ratio or the area-to-perimeter ratio of each fitted region.
The predetermined value may be set in advance according to actual requirements. It may be set in any suitable way; its magnitude may be set experimentally or according to empirical values.
When the predetermined condition is set based on a predetermined value of the long-to-short radius ratio, the long-to-short radius ratio of each fitted region may be compared with the predetermined value to eliminate the fitted regions that do not satisfy it. For example, if the predetermined condition is to be less than the predetermined value and the predetermined value is 3, a fitted region whose long-to-short radius ratio is greater than 3 does not satisfy the condition and is eliminated, so that the remaining fitted regions are the first fitted regions.
When the predetermined condition is set based on a predetermined value of the area-to-perimeter ratio, the area-to-perimeter ratio of each fitted region may be compared with the predetermined value to eliminate the fitted regions that do not satisfy it. For example, if the predetermined condition is to be less than the predetermined value and the predetermined value is 800, a fitted region whose area-to-perimeter ratio is greater than 800 does not satisfy the condition and is eliminated, so that the remaining fitted regions are the first fitted regions.
It can be understood that the above examples are given only for a better understanding of the technical solution of the embodiment of the present disclosure, and the present disclosure is not limited thereto. In practical applications, those skilled in the art may select any suitable value of the long-to-short radius ratio and/or the area-to-perimeter ratio of each fitted region, as appropriate, to determine the first fitted regions.
Additionally, step 104 may include: judging whether a nested fitted region exists among the first fitted regions.
After the first fitted regions satisfying the predetermined value are obtained, it can be judged whether a nested fitted region exists among them. If not, all the first fitted regions are used as target fitted regions. If a nested fitted region exists, the inner first fitted region of the nested pair is removed to obtain the target fitted regions.
For example, if there are first fitted regions A and B, and first fitted region B lies within first fitted region A, that is, is completely contained by first fitted region A, then first fitted region B belongs to the inner layer and is therefore removed. The remaining first fitted region A is used as the target fitted region.
The method for extracting annotations in a medical image provided by the embodiments of the present disclosure can extract specific annotated regions (such as the lesion positions marked by a doctor) from a medical image by using edge detection and curve fitting combined with annotation color extraction. The extracted lesion annotation regions can be applied to subsequent machine learning to achieve the purpose of detecting lesions by artificial intelligence.
FIG. 3 shows a schematic structural diagram of an extraction device for extracting annotations in a medical image according to an embodiment of the present disclosure. The extraction device may include: an edge information acquisition circuit 201, a target region determination circuit 202, a fitting circuit 203, and a selection circuit 204.
The edge information acquisition circuit 201 is configured to perform edge detection on a medical image by using an edge detection algorithm to obtain each piece of edge information in the medical image. The target region determination circuit 202 is configured to determine at least one target region defined by each piece of edge information. The fitting circuit 203 is configured to perform fitting processing on the target regions to obtain fitted regions. The selection circuit 204 is configured to extract the annotations in the medical image by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions. The preset condition may be based on at least one of the following: a predetermined value for the long-to-short radius ratio of the fitted region, a predetermined value for the area-to-perimeter ratio, and the fitted region not being a nested fitted region.
Optionally, the extraction device further includes: a filtering circuit 205 configured to filter the medical image to obtain a filtered medical image. The edge information acquisition circuit 201 is configured to perform edge detection on the filtered medical image.
Optionally, the target region determination circuit 202 is further configured to perform a dilation operation on at least one region defined by the edge information, and to determine the enlarged region as the target region.
Optionally, the medical image is a fundus image. The edge information acquisition circuit is configured to perform edge detection using dual thresholds, a first set threshold of the dual thresholds being 80 and a second set threshold being 150.
Optionally, the fitting circuit 203 further includes: a classification sub-circuit 2031 configured to classify the target regions based on the edge information. The classification corresponds to different lesion types in the medical image. The fitting circuit 203 is further configured to perform fitting processing on the target regions belonging to the same class to obtain a fitted region of that class.
The classification sub-circuit 2031 is further configured to obtain, from the medical image, the color feature corresponding to the edge information of each target region, and to classify the target regions based on the color features. For example, target regions with the same color feature are assigned to the same class.
Optionally, the edge information includes edge coordinate information. The fitting circuit 203 is further configured to perform fitting processing on the edge coordinate information of the target regions (for example, the target regions belonging to the same class) by using an ellipse fitting method to obtain the fitted regions.
Optionally, the selection circuit 204 includes: a first fitted region obtaining sub-circuit 2041, configured to obtain, from the fitted regions, at least one first fitted region that satisfies the predetermined value according to the long-to-short radius ratio or the area-to-perimeter ratio of each fitted region; a nested fitted region judging sub-circuit 2042, configured to judge whether a nested fitted region exists among the first fitted regions; and a target fitted region obtaining sub-circuit, configured to, when a nested fitted region exists among the first fitted regions, remove the inner first fitted region of the nested pair to obtain the target fitted regions.
The device for extracting annotations in a medical image provided by the embodiments of the present disclosure can extract specific annotated regions in a medical image (such as the annotated lesion regions) by using edge detection and curve fitting combined with annotation color extraction, and the lesion annotations extracted from the medical image can be applied to subsequent machine learning, so that in the medical field the purpose of detecting lesions by artificial intelligence can be achieved.
An embodiment of the present disclosure further discloses a storage medium having computer instructions stored thereon, wherein, when the computer instructions are executed by a processor, one or more steps of the method for extracting annotations in a medical image according to any one of the above embodiments are performed.
An embodiment of the present disclosure further discloses a computer device including a memory, a processor, and computer instructions stored on the memory, the processor being configured to execute the computer instructions to perform the steps of the method for extracting annotations in a medical image according to any one of the above embodiments.
The processor may be a logic operation device with data processing capability and/or program execution capability, such as a central processing unit (CPU), a field programmable gate array (FPGA), a microcontroller unit (MCU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a graphics processing unit (GPU).
The computer instructions include one or more processor operations defined by the instruction set architecture corresponding to the processor. These computer instructions may be logically contained in and represented by one or more computer programs.
It is easy to understand that the extraction device may also be connected with various input devices (such as a user interface or a keyboard), various output devices (such as a speaker), and a display device, so as to realize interaction between the computer product and other products or users.
In the embodiments of the present disclosure, connections and couplings may be made through a network, for example a wireless network, a wired network, and/or any combination of a wireless network and a wired network. The network may include a local area network, the Internet, a telecommunications network, an Internet of Things based on the Internet and/or a telecommunications network, and/or any combination of the above networks. A wired network may communicate, for example, via twisted pair, coaxial cable, or optical fiber transmission, and a wireless network may use, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or Wi-Fi.
It should be noted that the circuits and sub-circuits described above and the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The example embodiments described here may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, a mechanical hard disk, a solid-state drive, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
For the sake of simple description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present disclosure is not limited by the described order of actions, because according to the present disclosure some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present disclosure.
The embodiments in this specification are described in a progressive manner. Each embodiment focuses on the differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
Finally, it should also be noted that, herein, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The method for extracting annotations in a medical image and the device for extracting annotations in a medical image provided by the present disclosure have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present disclosure, and the description of the above embodiments is only intended to help understand the method of the present disclosure and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementation and scope of application according to the idea of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (20)

  1. A method for extracting annotations in a medical image, comprising:
    performing edge detection on a medical image by using an edge detection algorithm to obtain each piece of edge information in the medical image;
    determining at least one target region defined by each piece of edge information;
    performing fitting processing on the determined target regions to obtain fitted regions; and
    extracting the annotations in the medical image by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions.
  2. The method according to claim 1, wherein before the edge detection is performed on the medical image by using the edge detection algorithm, the method further comprises: filtering the medical image to obtain a filtered medical image; and
    the edge detection is performed on the filtered medical image.
  3. The method according to claim 1 or 2, wherein determining at least one target region defined by each piece of edge information comprises:
    performing a dilation operation on at least one region defined by the edge information, and determining the enlarged region as the target region.
  4. The method according to any one of claims 1-3, wherein the medical image is a fundus image, and the edge detection algorithm performs detection using dual thresholds.
  5. The method according to claim 4, wherein a first set threshold of the dual thresholds is 80, and a second set threshold is 150.
  6. The method according to any one of claims 1-5, wherein performing fitting processing on the target regions comprises:
    classifying the target regions, the classification corresponding to different lesion types in the medical image; and
    performing fitting processing on the target regions belonging to the same class to obtain a fitted region of that class.
  7. The method according to claim 6, wherein lesions of different natures in the medical image are marked with different colors, and classifying the target regions further comprises:
    obtaining the color feature corresponding to the edge information of each target region; and
    classifying the target regions based on the color features.
  8. The method according to any one of claims 1-7, wherein lesions in the medical image are marked with ellipses, the edge information comprises edge coordinate information, and performing fitting processing on the determined target regions comprises:
    performing fitting processing on the edge coordinate information of the target regions by using an ellipse fitting method to obtain the fitted regions.
  9. The method according to any one of claims 1-8, wherein the preset condition is based on at least one of the following:
    a predetermined value for the long-to-short radius ratio of the fitted region;
    a predetermined value for the area-to-perimeter ratio of the fitted region; and
    the fitted region not being a nested fitted region.
  10. A device for extracting medical image annotations, comprising:
    an edge information acquisition circuit configured to perform edge detection on a medical image by using an edge detection algorithm to obtain each piece of edge information in the medical image;
    a target region determination circuit configured to determine at least one target region defined by each piece of edge information;
    a fitting circuit configured to perform fitting processing on the determined target regions to obtain fitted regions; and
    an extraction circuit configured to extract the annotations in the medical image by selecting, according to a preset condition set based on characteristics of the fitted regions, at least one target fitted region that meets the preset condition from the fitted regions.
  11. The device according to claim 10, further comprising:
    a filtering circuit configured to filter the medical image to obtain a filtered medical image;
    wherein the edge information acquisition circuit is configured to perform edge detection on the filtered medical image.
  12. The device according to claim 10 or 11, wherein the target region determination circuit is further configured to perform a dilation operation on at least one region defined by the edge information, and to determine the enlarged region as the target region.
  13. The device according to any one of claims 10-12, wherein the medical image is a fundus image, and the edge information acquisition circuit is configured to perform edge detection using dual thresholds.
  14. The device according to claim 13, wherein a first set threshold of the dual thresholds is 80, and a second set threshold is 150.
  15. The device according to any one of claims 10-14, wherein the fitting circuit further comprises:
    a classification sub-circuit configured to classify the target regions based on the edge information, the classification corresponding to different lesion types in the medical image; and
    the fitting circuit is further configured to perform fitting processing on the target regions belonging to the same class to obtain a fitted region of that class.
  16. The device according to claim 15, wherein lesions of different types in the medical image are marked with different colors, and the classification sub-circuit is further configured to obtain the color feature corresponding to the edge information of each target region and to classify the target regions based on the color features.
  17. The device according to any one of claims 10-16, wherein lesions in the medical image are marked with ellipses, the edge information comprises edge coordinate information, and the fitting circuit is configured to perform fitting processing on the edge coordinate information of the target regions by using an ellipse fitting method to obtain the fitted regions.
  18. The device according to any one of claims 10-17, wherein the preset condition is based on at least one of the following:
    a predetermined value for the long-to-short radius ratio of the fitted region;
    a predetermined value for the area-to-perimeter ratio of the fitted region; and
    the fitted region not being a nested fitted region.
  19. A storage medium having computer instructions stored thereon, wherein, when the computer instructions are executed by a processor, one or more steps of the method for extracting medical image annotations according to any one of claims 1 to 9 are performed.
  20. A computer device comprising a memory, a processor, and computer instructions stored on the memory, the processor being configured to execute the computer instructions to perform the steps of the method according to any one of claims 1 to 9.
PCT/CN2019/089765 2018-06-28 2019-06-03 Method and device for extracting medical image annotations WO2020001236A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/623,527 US11379989B2 (en) 2018-06-28 2019-06-03 Method and device of extracting label in medical image
US17/805,236 US11783488B2 (en) 2018-06-28 2022-06-03 Method and device of extracting label in medical image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810687804.1 2018-06-28
CN201810687804.1A CN108921836A (zh) Method and device for extracting fundus image annotations

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/623,527 A-371-Of-International US11379989B2 (en) 2018-06-28 2019-06-03 Method and device of extracting label in medical image
US17/805,236 Continuation US11783488B2 (en) 2018-06-28 2022-06-03 Method and device of extracting label in medical image

Publications (1)

Publication Number Publication Date
WO2020001236A1 true WO2020001236A1 (zh) 2020-01-02

Family

ID=64423239

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089765 WO2020001236A1 (zh) Method and device for extracting medical image annotations

Country Status (3)

Country Link
US (2) US11379989B2 (zh)
CN (1) CN108921836A (zh)
WO (1) WO2020001236A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921836A (zh) * 2018-06-28 2018-11-30 京东方科技集团股份有限公司 一种提取眼底图像标注的方法及装置
CN110880169B (zh) * 2019-10-16 2024-09-06 平安科技(深圳)有限公司 病灶区域标注方法、装置、计算机系统及可读存储介质
CN110992384B (zh) * 2019-11-15 2023-04-11 五邑大学 半自动化图像数据标注方法、电子装置及存储介质
CN111291667A (zh) * 2020-01-22 2020-06-16 上海交通大学 细胞视野图的异常检测方法及存储介质
CN112734775B (zh) * 2021-01-19 2023-07-07 腾讯科技(深圳)有限公司 图像标注、图像语义分割、模型训练方法及装置
CN114332049B (zh) * 2021-12-31 2023-03-03 广东利元亨智能装备股份有限公司 边缘检测方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120230564A1 (en) * 2009-11-16 2012-09-13 Jiang Liu Obtaining data for automatic glaucoma screening, and screening and diagnostic techniques and systems using the data
CN103310215A (zh) * 2013-07-03 2013-09-18 天津工业大学 Method for detecting and identifying ring-coded marker points
CN106204544A (zh) * 2016-06-29 2016-12-07 南京中观软件技术有限公司 Method and system for automatically extracting marker point positions and contours in an image
CN107578051A (zh) * 2017-09-14 2018-01-12 西华大学 Method for detecting and identifying ring-coded marker points
CN108921836A (zh) * 2018-06-28 2018-11-30 京东方科技集团股份有限公司 Method and device for extracting fundus image annotations

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6726734B2 (ja) * 2015-03-26 2020-07-22 アイコー,エルエルシー System for image analysis
CN106650596A (zh) * 2016-10-10 2017-05-10 北京新皓然软件技术有限责任公司 Fundus image analysis method, device, and system
JP7250793B2 (ja) * 2017-12-07 2023-04-03 ベンタナ メディカル システムズ, インコーポレイテッド Deep learning systems and methods for joint cell and region classification in biological images

Also Published As

Publication number Publication date
US20220309676A1 (en) 2022-09-29
US11379989B2 (en) 2022-07-05
CN108921836A (zh) 2018-11-30
US20210407097A1 (en) 2021-12-30
US11783488B2 (en) 2023-10-10

Similar Documents

Publication Publication Date Title
WO2020001236A1 (zh) Method and device for extracting medical image annotations
CN108510482B (zh) 一种基于阴道镜图像的宫颈癌检测装置
Zhao et al. Retinal vessels segmentation based on level set and region growing
Zhang et al. Detection of microaneurysms using multi-scale correlation coefficients
Sheng et al. Retinal vessel segmentation using minimum spanning superpixel tree detector
WO2020151307A1 (zh) 病灶自动识别方法、装置及计算机可读存储介质
Farnell et al. Enhancement of blood vessels in digital fundus photographs via the application of multiscale line operators
Sopharak et al. Simple hybrid method for fine microaneurysm detection from non-dilated diabetic retinopathy retinal images
Zhu et al. Detection of the optic nerve head in fundus images of the retina using the hough transform for circles
CN107644420B (zh) 基于中心线提取的血管图像分割方法、核磁共振成像系统
WO2020259453A1 (zh) 3d图像的分类方法、装置、设备及存储介质
WO2021114636A1 (zh) 基于多模态数据的病灶分类方法、装置、设备及存储介质
Díaz-Pernil et al. Fully automatized parallel segmentation of the optic disc in retinal fundus images
WO2017017687A1 (en) Automatic detection of cutaneous lesions
Salih et al. Fast optic disc segmentation using FFT-based template-matching and region-growing techniques
CN110473176B (zh) 图像处理方法及装置、眼底图像处理方法、电子设备
Tania et al. Computational complexity of image processing algorithms for an intelligent mobile enabled tongue diagnosis scheme
CN105975955B (zh) 一种图像中文本区域的检测方法
Figueiredo et al. Unsupervised segmentation of colonic polyps in narrow-band imaging data based on manifold representation of images and Wasserstein distance
Nagaraj et al. Carotid wall segmentation in longitudinal ultrasound images using structured random forest
Elbalaoui et al. Exudates detection in fundus images using mean-shift segmentation and adaptive thresholding
Zhou et al. Automated detection of red lesions using superpixel multichannel multifeature
CN114757953B (zh) 医学超声图像识别方法、设备及存储介质
Yang et al. Detection of microaneurysms and hemorrhages based on improved Hessian matrix
Rehman et al. Dermoscopy cancer detection and classification using geometric feature based on resource constraints device (Jetson Nano)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19825993

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19825993

Country of ref document: EP

Kind code of ref document: A1