CN107146231B - Retinal image bleeding area segmentation method and device and computing equipment - Google Patents


Info

Publication number: CN107146231B
Application number: CN201710308401.7A
Authority: CN (China)
Prior art keywords: image, area, threshold, region, dark
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN107146231A
Inventor: 季鑫
Current assignee: Zhuhai Quanyi Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Individual
Events: application filed by Individual; priority to CN201710308401.7A; publication of CN107146231A; application granted; publication of CN107146231B

Classifications

    All under G (Physics) > G06 (Computing; calculating or counting) > G06T (Image data processing or generation, in general):
    • G06T7/11 Region-based segmentation
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/70 Denoising; Smoothing
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing, region merging or connected component labelling
    • G06T2207/20224 Image combination; Image subtraction
    • G06T2207/30041 Biomedical image processing; Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a retinal image bleeding area segmentation method, executed in a computing device, comprising the following steps: obtaining a retinal image to be segmented, and performing contrast enhancement on the image to obtain an enhanced image; filtering the enhanced image to extract a background image of the retinal image; taking the difference between the color value of each pixel in the enhanced image and the color value of the corresponding pixel in the background image to obtain a difference image; obtaining, from the RGB color values of the pixels in the difference image, a dark-area image containing the dark areas of the retinal image, the dark areas comprising blood vessel areas, bleeding areas and dark noise areas; determining the blood vessel region from the enhanced image and removing it from the dark-area image to obtain a vessel-removed image; and determining the dark noise region from the enhanced image and removing it from the vessel-removed image, resulting in the hemorrhage region of the retinal image. The invention also discloses a corresponding retinal image bleeding area segmentation device.

Description

Retinal image bleeding area segmentation method and device and computing equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a method for segmenting a bleeding area of a retina image.
Background
Diabetic retinopathy (abbreviated in Chinese usage as the "sugar net" disease) is an ophthalmic disease widespread among diabetic patients; it can impair the patient's vision and, in severe cases, cause blindness. Regular screening and early detection of retinopathy can reduce visual impairment to the greatest possible extent. Retinal hemorrhage is intraretinal bleeding caused by the rupture of microaneurysms in the retina and is one of the early visible signs of diabetic retinopathy. Accurate detection of bleeding points in the retinal image is therefore of great significance for automatic screening of diabetic retinopathy and for effectively assessing and inhibiting the development of the disease.
However, the lesion edges of retinal bleeding points are unclear, their contrast with the background is poor, their gray level is very close to that of blood vessels, their shapes are irregular and their sizes vary, and the imaging quality of retinal images is inconsistent. Automatic detection of retinal hemorrhage areas is therefore very difficult, and existing approaches suffer from high false-detection rates, high miss rates, complex operation and low processing efficiency.
Therefore, a new method for accurately and rapidly segmenting the hemorrhage region of the retinal image is needed.
Disclosure of Invention
To this end, the present invention provides a retinal image hemorrhage region segmentation method, apparatus and computing device to solve or at least alleviate the above existing problems.
According to an aspect of the present invention, there is provided a retinal image bleeding area segmentation method, executed in a computing device, the method including: obtaining a retina image to be segmented, and carrying out contrast enhancement on the image to obtain an enhanced image of the retina image; filtering the enhanced image to extract a background image of the retina image; taking a difference value between the RGB color value of each pixel in the enhanced image and the RGB color value of the corresponding pixel in the background image to obtain a difference image; obtaining a dark area image according to the RGB color values of each pixel in the difference image, wherein the dark area image is marked with a dark area in the retina image, and the dark area comprises a blood vessel area, a bleeding area and a dark noise area; determining a blood vessel region from the enhanced image, and removing the region from the dark region image to obtain a blood vessel removed image; and determining a dark noise area from the enhanced image, and removing the area from the de-vascularized image to obtain a bleeding area of the retinal image.
Alternatively, in the retinal image hemorrhage region segmentation method according to the present invention, the step of performing contrast enhancement on the retinal image includes: normalizing the RGB three-channel color values of each pixel in the retinal image to a number between 0 and 1; and, for each of the RGB color channels, determining the color value of each pixel in the enhanced image according to the following formula: I1(x,y) = α·I0(x,y) - β·I(x,y;s) + γ, wherein I1(x,y) denotes the color value of the pixel with coordinates (x,y) in the enhanced image, I0(x,y) denotes the color value of the pixel with coordinates (x,y) in the retinal image, and I(x,y;s) denotes the local mean at the pixel with coordinates (x,y) in the retinal image, the local mean being obtained by Gaussian filtering with both window size and variance equal to s.
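As an illustration only, the enhancement formula above can be sketched in pure Python on one normalized color channel (a list of rows of values in [0, 1]). The default parameter values follow the embodiment values stated later in the text (α = β = 4, γ = 0.5, s between 10 and 20); a simple box-window mean is substituted for the Gaussian local mean, so the result only approximates Gaussian filtering.

```python
def enhance(channel, alpha=4.0, beta=4.0, gamma=0.5, s=11):
    """Contrast enhancement I1 = alpha*I0 - beta*mean + gamma, clipped to [0, 1].

    `channel` is one normalized color channel as a list of rows.
    A box-window local mean approximates the patent's Gaussian local mean.
    """
    h, w = len(channel), len(channel[0])
    r = s // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Box-window local mean, with the window clamped at the borders.
            vals = [channel[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            v = alpha * channel[y][x] - beta * m + gamma
            out[y][x] = min(1.0, max(0.0, v))  # clip back into [0, 1]
    return out
```

On a uniform channel the local mean equals the pixel value, so with α = β the output reduces to γ plus the input; isolated bright or dark pixels are pushed toward the clip limits, which is the intended contrast stretch.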
Optionally, in the retinal image hemorrhage region segmentation method according to the present invention, the step of performing filtering processing on the enhanced image to extract a background image of the retinal image includes: generating a plurality of filters having different window sizes; filtering three color channels of RGB of each pixel in the enhanced image by adopting a plurality of filters respectively to obtain a plurality of filtering results of each channel; and averaging a plurality of filtering results of each channel to obtain a color value of the channel, thereby obtaining a background image.
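The multi-window filtering-and-averaging step above can be sketched as follows for one channel. Plain box (mean) filters stand in for the filters of the patent, and the window sizes are illustrative defaults, not values taken from the patent.

```python
def background(channel, windows=(31, 51, 71)):
    """Estimate the background of one channel: filter with several window
    sizes and average the filtering results pixel-wise, as described above.
    Box (mean) filters are used here purely for illustration."""
    h, w = len(channel), len(channel[0])

    def box_filter(img, s):
        r = s // 2
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                vals = [img[j][i]
                        for j in range(max(0, y - r), min(h, y + r + 1))
                        for i in range(max(0, x - r), min(w, x + r + 1))]
                out[y][x] = sum(vals) / len(vals)
        return out

    results = [box_filter(channel, s) for s in windows]
    # The pixel-wise mean of the filtering results gives the background value.
    return [[sum(res[y][x] for res in results) / len(results)
             for x in range(w)] for y in range(h)]
```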
Alternatively, in the retinal image hemorrhage region segmentation method according to the present invention, the filtering is wiener filtering, and the calculation formula is:

F(u,v) = (1/H(u,v)) · (|H(u,v)|² / (|H(u,v)|² + K)) · G(u,v)

wherein F(u,v) is the frequency-domain transform of the image extracted by the wiener filtering, G(u,v) is the frequency-domain transform of the image currently being processed by the wiener filtering, H(u,v) is a degradation function, and K is a fixed constant.
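The wiener formula itself can be applied element-wise, as sketched below; the arrays G and H are assumed to be already in the frequency domain (nested lists of complex numbers), and the forward and inverse transforms are omitted from this sketch.

```python
def wiener(G, H, K):
    """Element-wise constant-K wiener filter:
    F(u,v) = (1/H) * |H|^2 / (|H|^2 + K) * G."""
    out = []
    for Grow, Hrow in zip(G, H):
        row = []
        for g, h in zip(Grow, Hrow):
            h2 = abs(h) ** 2  # |H(u,v)|^2
            row.append((1.0 / h) * (h2 / (h2 + K)) * g)
        out.append(row)
    return out
```

With K = 0 the formula reduces to plain inverse filtering G/H; a positive K damps frequencies where the degradation function is weak.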
Optionally, in the retinal image hemorrhage region segmentation method according to the present invention, the step of obtaining the dark region image according to the RGB color values of the pixels in the difference image includes: acquiring RGB three-channel color values of each pixel in the difference image, and determining a color threshold of each channel according to the acquired color values of each channel; and marking the color values of all channels of the pixel as 0 or 1 by comparing the color values of all channels of RGB of each pixel in the difference image with the color threshold of the corresponding channel, so that the difference image is converted into a dark area image, wherein the dark area image is a binary image.
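One possible reading of this thresholding step is sketched below. The passage does not state how each channel's threshold is derived from its color values, so "channel mean minus one standard deviation" is an assumed placeholder rule; a pixel is marked 1 (dark area) when every channel falls below its threshold.

```python
def dark_area(diff_rgb, k=1.0):
    """Binarize a difference image (rows of (R, G, B) tuples) into a
    dark-area map. Per-channel threshold = mean - k*std is an assumption,
    not the patent's rule."""
    h, w = len(diff_rgb), len(diff_rgb[0])
    n = h * w
    thresholds = []
    for c in range(3):
        vals = [diff_rgb[y][x][c] for y in range(h) for x in range(w)]
        mean = sum(vals) / n
        std = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
        thresholds.append(mean - k * std)
    # Dark pixels sit well below the background, so all three channels of
    # the difference image are below threshold there.
    return [[1 if all(diff_rgb[y][x][c] < thresholds[c] for c in range(3))
             else 0 for x in range(w)] for y in range(h)]
```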
Alternatively, in the retinal image hemorrhage region segmentation method according to the present invention, the step of determining a blood vessel region from the enhanced image and removing the region from the dark region image includes: filtering the enhanced image multiple times with multiple window sizes at each of several different variances, obtaining multiple filtering results for each variance, and averaging them to obtain the filtering mean for that variance; combining the filtering means for the respective variances and performing threshold segmentation on the combined image to obtain an intermediate image, wherein the intermediate image contains both pseudo blood vessel regions and blood vessel regions and is a binary image; determining the pseudo blood vessel regions in the intermediate image by performing connected component analysis on the image; removing the pseudo blood vessel regions from the intermediate image to obtain a distribution map of the blood vessel region, recorded as a blood vessel distribution map; and taking the difference between the RGB color value of each pixel of the dark area image and the RGB color value of the corresponding pixel in the blood vessel distribution map, thereby removing the blood vessel region from the dark area image.
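For binary maps, the final subtraction step above (removing the blood vessel distribution map from the dark area image) reduces to a pixel-wise difference clamped at zero, as this minimal sketch shows:

```python
def remove_vessels(dark, vessels):
    """Pixel-wise difference of two binary maps, clamped at 0: a dark-area
    pixel survives only where the vessel map is 0."""
    return [[max(0, d - v) for d, v in zip(drow, vrow)]
            for drow, vrow in zip(dark, vessels)]
```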
Alternatively, in the retinal image hemorrhage region segmentation method according to the present invention, the step of determining the pseudo blood vessel region in the intermediate image by performing connected component analysis on the image includes: determining each connected domain in the intermediate image; calculating an attribute value of each connected domain, wherein the attribute value comprises the area and the perimeter of the connected domain, a minimum rectangular frame containing the connected domain, and at least one of eccentricity, major axis length and minor axis length of an ellipse with the same standard second-order central moment as the region; and judging whether the attribute values of the connected domains meet a first preset condition, and if so, marking the connected domains as pseudo-blood vessel regions.
Alternatively, in the retinal image hemorrhage region segmentation method according to the present invention, the first predetermined condition includes any one of: the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is larger than a second threshold, and the ratio of the length of the long axis to the length of the short axis of the connected domain is smaller than a third threshold; or the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is smaller than a fourth threshold, and the ratio of the perimeter is larger than a fifth threshold; or the area of the connected domain is smaller than a sixth threshold, the eccentricity is smaller than a seventh threshold, and the ratio of the length of the long axis to the length of the short axis is smaller than an eighth threshold.
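Connected domain analysis of the kind described above can be sketched as follows. Only the first branch of the condition (area range plus bounding-box ratio) is implemented; the eccentricity, perimeter and axis-length attributes are omitted from this sketch, and the small default thresholds exist only so the toy example below is meaningful.

```python
from collections import deque

def connected_components(binary):
    """4-connected component labelling on a binary image (nested lists);
    returns one list of (y, x) pixel coordinates per component."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                q, comp = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def is_pseudo_vessel(comp, area_range=(200, 5000), bbox_ratio=0.35):
    """First branch of the first predetermined condition: area within
    `area_range` and (bounding-box area / component area) > `bbox_ratio`."""
    area = len(comp)
    ys = [y for y, _ in comp]
    xs = [x for _, x in comp]
    bbox_area = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
    lo, hi = area_range
    return lo <= area <= hi and bbox_area / area > bbox_ratio
```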
Alternatively, in the retinal image hemorrhage region segmentation method according to the present invention, the step of determining the dark noise region from the enhanced image includes: converting the enhanced image from the RGB color space to the HSV color space; judging whether the HSV value of each pixel in the enhanced image meets a second preset condition or not; if so, the pixel is marked as dark noise.
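Using the standard library's colorsys module, the color-space test above can be sketched like this. The second predetermined condition is not given numerically in this passage, so it is passed in as a predicate; the threshold used in the example below is illustrative only.

```python
import colorsys

def hsv_dark_noise_mask(rgb, cond):
    """Convert each RGB pixel (values in [0, 1]) to HSV and mark it 1
    when the supplied predicate cond(h, s, v) holds."""
    return [[1 if cond(*colorsys.rgb_to_hsv(r, g, b)) else 0
             for (r, g, b) in row] for row in rgb]
```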
Optionally, in the retinal image hemorrhage region segmentation method according to the present invention, the step of determining the dark noise region from the enhanced image further includes: calculating the gradient amplitude of each pixel in the enhanced image in the G channel and the mean value of the gradient amplitude in each connected domain, wherein the connected domains are suitable for being determined from the dark domain image; and if the mean value of the gradient amplitudes in a connected domain is smaller than a preset threshold value, marking the connected domain as a dark noise area.
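A sketch of the gradient test above, using finite differences for the G-channel gradient magnitude; the predetermined threshold value 0.05 is an assumed placeholder, not taken from the patent. Bleeding regions have textured interiors and edges, so their mean gradient magnitude is high, while dark noise is locally flat.

```python
def is_dark_noise(green, comp, threshold=0.05):
    """Mean G-channel gradient magnitude over the connected domain's
    pixels, compared against a predetermined threshold. Central or
    one-sided differences (clamped at borders) approximate the gradient."""
    h, w = len(green), len(green[0])
    total = 0.0
    for y, x in comp:
        gx = green[y][min(w - 1, x + 1)] - green[y][max(0, x - 1)]
        gy = green[min(h - 1, y + 1)][x] - green[max(0, y - 1)][x]
        total += (gx * gx + gy * gy) ** 0.5
    return total / len(comp) < threshold
```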
Alternatively, in the retinal image hemorrhage region segmentation method according to the present invention, α = β = 4, γ = 0.5, the window size and variance of the Gaussian filtering are an arbitrary integer between 10 and 20, the first threshold range is [200, 5000], the second threshold is 0.35, the third threshold is 2.5, the fourth threshold is 0.25, the fifth threshold is 0.95, the sixth threshold is 600, the seventh threshold is 0.97, and the eighth threshold is 2.
According to an aspect of the present invention, there is provided a retinal image hemorrhage region segmentation apparatus, residing in a computing device, the apparatus comprising: the image preprocessing unit is suitable for acquiring a retina image to be segmented, performing contrast enhancement on the image to obtain an enhanced image of the retina image, and performing filtering processing on the enhanced image to extract a background image of the retina image; the difference image generating unit is suitable for obtaining the difference value of the RGB color value of each pixel in the enhanced image and the RGB color value of the corresponding pixel in the background image to obtain a difference image; the dark area determining unit is suitable for obtaining a dark area image according to the RGB color values of all pixels in the difference image, the dark area image is marked with a dark area in the retina image, and the dark area comprises a blood vessel area, a bleeding area and a dark noise area; a blood vessel removing unit, which is suitable for determining a blood vessel region from the enhanced image and removing the region from the dark region image to obtain a blood vessel removed image; and a dark noise removal unit adapted to determine a dark noise region from the enhanced image and remove the region from the de-angioed image, resulting in a hemorrhage region of the retinal image.
According to an aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions that include the retinal image hemorrhage region segmentation apparatus as described above; wherein the processor is configured to execute the retinal image hemorrhage region segmentation method as described above according to the retinal image hemorrhage region segmentation apparatus stored in the memory.
According to an aspect of the present invention, there is provided a computer-readable storage medium storing program instructions, the program instructions including the retinal image hemorrhage region segmentation apparatus as described above; when the retinal image hemorrhage region segmentation apparatus stored in the computer-readable storage medium is read by a computing device, the computing device may perform the retinal image hemorrhage region segmentation method as described above.
According to the technical scheme of the invention, the original retina image is subjected to contrast enhancement, the problems of uneven illumination and the like of the original retina image are solved, the contrast between the bleeding area and the background image is enhanced, and the subsequent bleeding area is more accurately segmented. And then, carrying out wiener filtering on the enhanced image to extract a background image of the original retina image, and obtaining a difference image by taking a difference value between the enhanced image and the background image. And then, acquiring a dark area threshold value of the difference image, and converting the difference image into a dark area image containing a dark area in the retina image according to the threshold value, wherein the dark area comprises a blood vessel area, a bleeding area and a dark noise area.
And then, determining a pseudo-blood vessel region by sequentially carrying out Gaussian filtering, threshold segmentation and connected domain analysis on the enhanced image, removing the pseudo-blood vessel region to obtain a distribution map of the blood vessel region, namely a blood vessel distribution map, and further taking a difference value between the dark region image and the blood vessel distribution map to remove the blood vessel region to obtain a blood vessel removed image. Finally, a dark noise area is determined by adopting a color method and a gradient method respectively, and the dark noise area is removed from the blood vessel removing image, so that a final bleeding area is obtained. According to the method, the related interference areas are processed layer by layer for multiple times, so that the segmentation of the bleeding areas is more accurate, the misjudgment of the bleeding areas is avoided, the complexity of post-image processing is reduced to a great extent, and the calculation speed is increased.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1A shows a schematic diagram of a hemorrhage region segmentation system 100a according to one embodiment of the present invention;
FIG. 1B shows a schematic diagram of a hemorrhage region segmentation system 100B according to one embodiment of the present invention;
FIG. 2 shows a block diagram of a computing device 200, according to one embodiment of the invention;
fig. 3 is a block diagram showing a retinal image hemorrhage region segmentation apparatus 300 according to an embodiment of the present invention;
FIG. 4 illustrates a flow diagram of a retinal image hemorrhage region segmentation method 400 according to one embodiment of the invention;
fig. 5A to 5H are diagrams showing effects of an embodiment of segmentation of a retinal image hemorrhage region according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1A shows a schematic diagram of a hemorrhage region segmentation system 100a according to one embodiment of the present invention. The system 100a shown in fig. 1A includes a retinal image capture device 110 and a computing device 200. It should be noted that system 100a of fig. 1A is merely exemplary, and in particular implementations, any number of retinal image capture devices 110 and computing devices 200 may be included in system 100a, and the present invention does not limit the number of retinal image capture devices 110 and computing devices 200 included in system 100 a.
The retinal image capture device 110 may be, for example, any type of fundus camera suitable for capturing retinal images; the computing device 200 may be a device such as a PC, laptop, cell phone or tablet, adapted to perform image processing tasks. In the system 100a, the retinal image capture device 110 and the computing device 200 are spatially close and can communicate at close range in a wired or wireless manner; for example, the retinal image capture device 110 may establish a wired connection with the computing device 200 through a USB, RJ-45 or BNC interface, or establish a wireless connection with the computing device 200 through Bluetooth, WiFi, ZigBee, IEEE 802.11x, etc. The present invention does not limit the connection manner between the retinal image capture device 110 and the computing device 200. The retinal image hemorrhage region segmentation apparatus 300 resides in the computing device 200; the apparatus 300 may be installed in the computing device 200 as stand-alone software, may reside in a browser of the computing device 200 as a web application, or may simply be a piece of code located in a memory of the computing device 200, and the present invention does not limit the form in which the apparatus 300 exists in the computing device 200. When the retinal image capture device 110 captures a retinal image, it sends the image to the computing device 200. The computing device 200 receives the retinal image and processes it with the apparatus 300 to segment the hemorrhage region in the retinal image.
Fig. 1B shows a schematic diagram of a hemorrhage region segmentation system 100B according to one embodiment of the present invention. The system 100B shown in FIG. 1B includes a retinal image capture device 110, a local client 120, and a computing device 200. It should be noted that system 100B of fig. 1B is merely exemplary, and in particular implementations, any number of retinal image capture devices 110, local clients 120, and computing devices 200 may be included in system 100B, and the present invention is not limited by the number of retinal image capture devices 110, local clients 120, and computing devices 200 included in system 100B.
The retinal image capture device 110 may be, for example, any type of fundus camera suitable for capturing retinal images; local client 120 may be a device such as a PC, laptop, cell phone, tablet, etc., adapted to receive retinal images captured by retinal image capture device 110 and send them to computing device 200 via the internet; the computing device 200 may be implemented as a server, which may be, for example, a WEB server, an application server, or the like, adapted to provide retinal image hemorrhage region segmentation services. In system 100b, retinal image capture device 110 is spatially closer to local client 120, which may perform near field communication in a wired or wireless manner; local client 120 is located a relatively large distance from computing device 200, and both may communicate remotely via the internet in a wired or wireless manner. When the retinal image is captured by the retinal image capture device 110, the retinal image is sent to the local client 120. Subsequently, the local client 120 sends the received retinal image to the computing device 200, and the computing device 200 receives the retinal image, processes the received retinal image by the apparatus 300, segments a bleeding area in the retinal image, and returns the segmentation result to the local client 120. It should be noted that although the retinal image capture device 110 and the local client 120 are shown separately as two devices in the system 100b, one skilled in the art will recognize that in other embodiments, the retinal image capture device 110 and the local client 120 may be integrated into one device that simultaneously performs all of the functions described above as being performed by the device 110 and the local client 120.
FIG. 2 shows a block diagram of a computing device 200, according to one embodiment of the invention. In a basic configuration 202, computing device 200 typically includes a system memory 206 and one or more central processors 204. A memory bus 208 may be used for communication between the central processor 204 and the system memory 206. The central processor 204 is the computational core and control core of the computing device 200, and its primary function is to interpret computer instructions and process data in various software.
The central processor 204 may be any type of processor, including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Depending on the desired configuration, the central processor 204 may include one or more levels of cache, such as a level one cache 210 and a level two cache 212, a processor core 214, and registers 216. The example processor core 214 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. In some implementations, the application 222 can be arranged to operate with program data 224 on an operating system. The application 222 is embodied in system memory as a plurality of pieces of program instructions, for example, the application 222 may be an executable program (. exe file) or a piece of JS code in a web page. The central processor 204 may execute these program instructions to implement the functions indicated by the application 222. In the present invention, the application 222 includes a retinal image bleeding region segmentation apparatus 300. The retinal image hemorrhage region segmentation apparatus 300 is an instruction set composed of multiple lines of code, and can instruct the central processor 204 to perform operations related to image processing, so as to realize the hemorrhage region segmentation of the retinal image.
Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via the bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more A/V ports 252. Example peripheral interfaces 244 can include a serial interface controller 254 and a parallel interface controller 256, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 258. An example communication device 246 may include a network controller 260, which may be arranged to facilitate communications with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or direct-wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer readable storage media as used herein may include both storage media and communication media. According to one embodiment, a computer readable storage medium has program instructions stored therein, which include the retinal image hemorrhage region segmentation apparatus 300. When the apparatus 300 stored in the computer-readable storage medium is read by the computing device 200, the central processor 204 of the computing device 200 may execute a corresponding retinal image hemorrhage region segmentation method to achieve segmentation of the hemorrhage region in the retinal image.
Fig. 3 shows a block diagram of a retinal image hemorrhage region segmentation apparatus 300 according to an embodiment of the present invention. As shown in fig. 3, the apparatus 300 includes an image preprocessing unit 320, a difference image generating unit 340, a dark region determining unit 360, a blood vessel removing unit 380, and a dark noise removing unit 390.
The image preprocessing unit 320 is adapted to obtain a retina image to be segmented, perform contrast enhancement on the image to obtain an enhanced image of the retina image, and perform filtering processing on the enhanced image to obtain a background image of the retina image.
Here, the retinal image to be segmented is the original retinal image collected by the retinal image collecting device 110, as shown in fig. 5A: the left side is a retinal image with a circular field angle, forming a complete circle; the right side is a retinal image with an irregular field angle, missing a portion in each of the upper and lower semicircles. To facilitate subsequent per-pixel processing, the image preprocessing unit 320 may first crop the retinal image and adjust the cropped image to a predetermined size using image interpolation or another existing method. Fig. 5B shows the left and right images of fig. 5A after cropping and resizing; the left and right images in the subsequent figs. 5C-5H are effect diagrams obtained by processing the two images in fig. 5B, respectively. Fig. 5C is the enhanced image obtained by contrast enhancement of fig. 5B, and fig. 5D is the background image obtained by filtering fig. 5C. It should be understood that, in practice, the various processes (such as contrast enhancement) can be performed directly on fig. 5A without cropping and resizing (i.e., omitting fig. 5B), which does not affect the final determination of the bleeding area.
According to one embodiment, the image pre-processing unit 320 may perform contrast enhancement on the original retinal image according to the following method: normalizing the RGB three-channel color values of each pixel in the retina image into a number between 0 and 1; for each color channel of RGB, determining the color value of each pixel in the enhanced image according to the following formula:
I1(x,y) = α·I0(x,y) − β·I(x,y;σ) + γ
wherein I1(x,y) denotes the color value of the pixel with coordinates (x,y) in the enhanced image, I0(x,y) denotes the color value of the pixel with coordinates (x,y) in the retinal image, and I(x,y;σ) denotes the local mean at (x,y), obtained by Gaussian filtering with a given window size and variance σ. According to one embodiment, α = β = 4, γ = 0.5, and σ is an arbitrary integer between 10 and 20; of course, these values are merely illustrative, other values may be set as needed in actual operation, and the invention is not limited thereto. The calculated I1(x,y) may be less than 0 or greater than 1; for convenience of processing, values less than 0 may be set to 0 and values greater than 1 may be set to 1.
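As an illustrative sketch of this enhancement step, assuming the formula reads I1(x,y) = α·I0(x,y) − β·I(x,y;σ) + γ with the local mean computed by Gaussian filtering (the function name and the use of SciPy's `gaussian_filter` are choices of this sketch, not mandated by the text):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_contrast(img, alpha=4.0, beta=4.0, gamma=0.5, sigma=10):
    """Contrast-enhance a retinal image: I1 = alpha*I0 - beta*I(.;sigma) + gamma.

    img: float array in [0, 1] with shape (H, W, 3); sigma plays the role of
    the Gaussian variance parameter (an integer between 10 and 20 in the text).
    """
    enhanced = np.empty_like(img)
    for c in range(3):  # process the R, G and B channels independently
        local_mean = gaussian_filter(img[..., c], sigma=sigma)
        enhanced[..., c] = alpha * img[..., c] - beta * local_mean + gamma
    # clip out-of-range values to [0, 1] as described in the text
    return np.clip(enhanced, 0.0, 1.0)
```

With α = β = 4 and γ = 0.5 this subtracts a smoothed background estimate from each channel and re-centers the result around mid-gray.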
According to another embodiment, the image preprocessing unit 320 may perform the filtering process on the enhanced image according to the following method: generating a plurality of filters (e.g., wiener filtered filters) having different window sizes; filtering the RGB three color channels of each pixel in the enhanced image by adopting the plurality of filters respectively to obtain a plurality of filtering results of each channel; and averaging a plurality of filtering results of each channel to obtain a color value of each pixel point, wherein the pixel points form the background image.
There are various methods for extracting the image background, such as wiener filtering and median filtering. However, the calculation speed of wiener filtering is not affected by the size of the filter window, whereas median filtering becomes slower and less efficient as its window grows. The filtering method in the invention therefore selects wiener filtering, with the calculation formula:

F̂(u,v) = (1/H(u,v)) · |H(u,v)|² / (|H(u,v)|² + K) · G(u,v)

where F̂(u,v) is the frequency-domain transform of the background image extracted by wiener filtering, G(u,v) is the frequency-domain transform of the enhanced image, H(u,v) is the degradation function, and K is a fixed constant. H(u,v) and K may be set to any value as required; for H(u,v), u and v need to be sampled to generate a discrete filter in the calculation, and the size of the filter window may be set to any value as required. After F̂(u,v) is derived from the above formula, an inverse Fourier transform of F̂(u,v) yields the background image in the spatial domain.
Three filter window sizes, 50×50, 100×100 and 500×500, may be selected and used to filter each of the RGB channels of each pixel in the enhanced image. That is, each of the three channels is filtered with the three filters in turn using the above wiener filtering formula, and the average of the three filtering results is taken as the color value of that channel. The value of each channel of each pixel is obtained in this way, yielding the final background image.
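A rough sketch of this multi-window background extraction, using SciPy's `scipy.signal.wiener` (an adaptive local-statistics Wiener filter similar to MATLAB's wiener2) as a stand-in for the frequency-domain formulation above; the function name is a choice of this sketch:

```python
import numpy as np
from scipy.signal import wiener

def extract_background(enhanced, windows=(50, 100, 500)):
    """Estimate the background by Wiener-filtering each RGB channel with
    several window sizes and averaging the per-window results."""
    background = np.zeros_like(enhanced, dtype=float)
    for c in range(3):
        channel = enhanced[..., c].astype(float)
        # one filtering result per window size, then the per-channel average
        results = [wiener(channel, mysize=w) for w in windows]
        background[..., c] = np.mean(results, axis=0)
    return background
```

Note that this spatial-domain stand-in does slow down with window size; the patent's frequency-domain formulation is what makes the computation window-size independent.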
After the enhanced image and the background image are generated, the difference image generating unit 340 obtains a difference image by taking a difference between the RGB color value of each pixel in the enhanced image and the RGB color value of the corresponding pixel in the background image. Specifically, the RGB color value of each pixel in the enhanced image may be subtracted from the RGB color value of the corresponding pixel in the background image.
Subsequently, the dark area determining unit 360 obtains, from the RGB color values of the pixels in the difference image, a dark area image in which the dark areas of the original retinal image are marked; the dark areas include blood vessel areas, bleeding areas, and dark noise areas.
Specifically, the dark area determining unit 360 is adapted to obtain color values of RGB three channels of each pixel in the difference image, and determine a color threshold of each channel according to the obtained color value of the channel; and marking each channel color value of each pixel as 0 or 1 by comparing each channel color value of each pixel in the difference image with the color threshold of the corresponding channel, so that the difference image is converted into a dark area image, wherein the dark area image is a binary image.
Further, the average color value of each channel over all pixels may be used as the color threshold of that channel (which may be regarded as a bright-area threshold). If the RGB values of a pixel are all greater than the color thresholds of the corresponding channels, its RGB values are all set to 1; otherwise they are all set to 0, yielding a binary (black-and-white) image in which the dark area is the set of pixels whose RGB values are all 1. Of course, the opposite assignment may be adopted: setting the former pixels to 0 and the latter to 1, in which case the dark area is the set of pixels whose RGB values are all 0. Fig. 5E shows the dark area image, i.e., the binary image obtained with the first assignment, in which the white areas are the dark areas (RGB values all 1), corresponding to the darker areas in the original retinal image.
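The thresholding just described can be sketched as follows (per-channel mean as the threshold, first assignment; the function name is hypothetical):

```python
import numpy as np

def dark_area_image(diff):
    """Convert the difference image into a binary dark-area image: the mean
    color value of each channel is the threshold, and a pixel whose R, G and
    B values all exceed their thresholds is set to 1 (dark area), else 0."""
    thresholds = diff.reshape(-1, 3).mean(axis=0)  # one threshold per channel
    mask = np.all(diff > thresholds, axis=-1)      # all three channels above
    return mask.astype(np.uint8)
```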
Subsequently, the blood vessel removing unit 380 determines a blood vessel region from the enhanced image and removes the region from the dark region image, resulting in a blood vessel removed image.
In particular, the vessel removal unit 380 may determine the vessel region from the enhanced image according to the following method: filter the enhanced image multiple times using several window sizes under different variances to obtain multiple filtering results for each variance, and average these results to obtain a filtering mean for that variance. Then, the filtering means under each variance are merged, and threshold segmentation is performed on the merged image to obtain an intermediate image, which is a binary image containing both a pseudo blood vessel region and a blood vessel region. Finally, connected-domain analysis is performed on the intermediate image to determine its pseudo blood vessel region, which is removed from the intermediate image to obtain the distribution map of the blood vessel region, denoted the blood vessel distribution map.
Specifically, in filtering the enhanced image multiple times with multiple window sizes under different variances, the blood vessel removing unit 380 may adopt Gaussian filtering, mainly considering that the local direction and curvature of blood vessels in the retinal image change little and that the gray-scale profile of a vessel cross-section approximates a Gaussian curve. According to one embodiment, two variances may be used, with 19 window sizes selected for each, for example:
1) with the first variance set to 5, select 19 window sizes from 2×2 to 20×20 and perform Gaussian filtering on the enhanced image to obtain 19 filtering results;
2) with the second variance set to 1.8, select 19 window sizes from 2×2 to 20×20 and perform Gaussian filtering on the enhanced image to obtain 19 filtering results.
Then, the averages of the 19 filtering results under each of the two variances are taken, and the two filtering means are merged to give the merged image. Various algorithms can perform threshold segmentation on the merged image, such as Otsu's method (maximum between-class variance), the maximum entropy method, or an iterative method. The method used to generate the dark area image may also be applied: first obtain the color value of each channel in the merged image and determine a color threshold per channel, then convert the merged image into an intermediate image by comparing each channel's color value with that channel's threshold. The intermediate image is a binary image in which both blood vessel regions and pseudo blood vessel regions are marked.
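A sketch of the multi-scale filtering and Otsu segmentation, under two stated assumptions: the two example variances are read as 5 and 1.8, and since `scipy.ndimage.gaussian_filter` controls kernel extent via `truncate` rather than an explicit window size, `truncate` is varied here as a stand-in for the 19 window sizes. Otsu's method is implemented directly:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vessel_candidate_mask(gray, sigmas=(5.0, 1.8), n_windows=19):
    """Average several Gaussian filterings per variance, merge the
    per-variance means, and binarize the merged image with Otsu's method."""
    means = []
    for sigma in sigmas:
        results = [gaussian_filter(gray, sigma=sigma, truncate=t)
                   for t in np.linspace(0.5, 4.0, n_windows)]
        means.append(np.mean(results, axis=0))
    merged = np.mean(means, axis=0)

    # Otsu: choose the threshold maximizing between-class variance
    hist, edges = np.histogram(merged, bins=64)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist).astype(float)      # class-0 pixel counts
    w1 = w0[-1] - w0                        # class-1 pixel counts
    m0 = np.cumsum(hist * centers)          # class-0 cumulative intensity
    mu0 = np.divide(m0, w0, out=np.zeros_like(m0), where=w0 > 0)
    mu1 = np.divide(m0[-1] - m0, w1, out=np.zeros_like(m0), where=w1 > 0)
    between = w0 * w1 * (mu0 - mu1) ** 2
    threshold = centers[np.argmax(between)]
    return (merged > threshold).astype(np.uint8)
```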
Further, the blood vessel removal unit 380 may determine the pseudo blood vessel region in the intermediate image according to the following method. First, each connected domain in the intermediate image is determined using any existing connected-domain algorithm, and attribute values of each connected domain are calculated, the attribute values comprising at least one of: the area and perimeter of the connected domain, the smallest rectangular frame containing it, and the eccentricity, major-axis length and minor-axis length of the ellipse having the same normalized second-order central moments as the region. Finally, it is judged whether the attribute values of each connected domain satisfy a first predetermined condition; if so, that connected domain is marked as a pseudo blood vessel region. The first predetermined condition may be any one of the following:
1) the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is larger than a second threshold, and the ratio of the length of the long axis to the length of the short axis of the connected domain is smaller than a third threshold;
2) the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is smaller than a fourth threshold, and the ratio of the perimeter is larger than a fifth threshold; or
3) The area of the connected domain is smaller than a sixth threshold, the eccentricity is smaller than a seventh threshold, and the ratio of the length of the long axis to the length of the short axis is smaller than an eighth threshold.
Here, the first threshold value range may be [200, 5000], the second threshold value may be 0.35, the third threshold value may be 2.5, the fourth threshold value may be 0.25, the fifth threshold value may be 0.95, the sixth threshold value may be 600, the seventh threshold value may be 0.97, and the eighth threshold value may be 2. Of course, other values may be set as needed, and the present invention is not limited to these specific values.
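The three alternative clauses can be collected into a predicate like the following; attribute values such as area, perimeter, bounding box and ellipse axes can be obtained, e.g., from `skimage.measure.regionprops`. The ratio directions follow the text literally, and the perimeter ratio is assumed (the text leaves it unspecified) to be the component perimeter over the bounding-box perimeter; the function name and parameter names are hypothetical:

```python
def is_pseudo_vessel(area, bbox_area, perimeter, bbox_perimeter,
                     major_axis, minor_axis, eccentricity,
                     area_range=(200, 5000), t2=0.35, t3=2.5,
                     t4=0.25, t5=0.95, t6=600, t7=0.97, t8=2.0):
    """Return True if a connected component meets any clause of the first
    predetermined condition (thresholds default to the example values)."""
    in_range = area_range[0] <= area <= area_range[1]
    axis_ratio = major_axis / minor_axis if minor_axis else float("inf")
    # clause 1: area in range, bbox/area ratio large, near-isotropic axes
    cond1 = in_range and bbox_area / area > t2 and axis_ratio < t3
    # clause 2: area in range, bbox/area ratio small, perimeter ratio large
    cond2 = in_range and bbox_area / area < t4 and perimeter / bbox_perimeter > t5
    # clause 3: small, round, near-isotropic component
    cond3 = area < t6 and eccentricity < t7 and axis_ratio < t8
    return cond1 or cond2 or cond3
```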
After the pseudo-blood vessel region is determined, the blood vessel removing unit 380 removes the region from the intermediate image to obtain a distribution map of the blood vessel region, which is denoted as a blood vessel distribution map. Considering that the intermediate image is a black-and-white binary image, and the RGB values of the pixels in the dark region are all set to 1 in the process of generating the dark region image, a method of setting the RGB values of the pixels in the pseudo-blood vessel region in the intermediate image to 0 may be adopted to remove the pseudo-blood vessel region, and the obtained blood vessel distribution map is shown in fig. 5F, where the white region is the blood vessel region. In addition, it is also possible to record the coordinate values of each pixel in the pseudo blood vessel region, and set the RGB values of the pixels at the corresponding coordinates to 0 in the intermediate image. Of course, it should be understood that if the pixel color values of the dark regions are all set to 0 in the foregoing dark region image generation process, the pixel color values of the pseudo-blood vessel regions may be all set to 1 here.
After the blood vessel region is determined, the blood vessel removing unit 380 may take a difference between the RGB color value of each pixel in the dark region image and the RGB color value of the corresponding pixel in the blood vessel distribution map, so as to remove the blood vessel region from the dark region image, and obtain a blood vessel removed image as shown in fig. 5G. Here, the difference value obtaining algorithm may be to subtract the RGB color value of each pixel in the dark region image from the RGB color value of the corresponding pixel in the blood vessel distribution map, and certainly, the difference value may also be obtained after performing some weight calculations.
The white areas in the blood-vessel-removed image correspond to the bleeding areas and dark noise areas in the original retinal image; removing the dark noise areas yields the bleeding areas. Accordingly, the dark noise removing unit 390 determines a dark noise area from the enhanced image and removes that area from the blood-vessel-removed image, thereby obtaining the bleeding area of the retinal image.
According to one embodiment, the dark noise removing unit 390 may determine the dark noise area using a color-based method. Specifically, the enhanced image is converted from the RGB color space to the HSV color space, and for each pixel it is judged whether its HSV values satisfy a second predetermined condition; if so, the pixel is marked as dark noise. The second predetermined condition may be any one of the following: H is outside a first interval, S is outside a second interval, or V is outside a third interval, where the first interval is [0.45, 1], the second interval is [0.15, 0.75], and the third interval is [0.45, 0.75]. When the H, S and V values all fall within their corresponding intervals, the pixel may be regarded as a bleeding point. Note that the HSV values used here are normalized, i.e., first scaled to numbers between 0 and 1, before checking the second predetermined condition. It should also be understood that these values are merely exemplary; other values may be set as required in actual operation, and the invention is not limited thereto.
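A sketch of the second predetermined condition on normalized HSV values (function name hypothetical; the intervals default to the example values above):

```python
def is_dark_noise_hsv(h, s, v,
                      h_range=(0.45, 1.0), s_range=(0.15, 0.75),
                      v_range=(0.45, 0.75)):
    """A pixel is dark noise if its H, S or V value falls outside the
    corresponding interval; a pixel whose three normalized values all lie
    inside the intervals is treated as a bleeding point."""
    inside = (h_range[0] <= h <= h_range[1] and
              s_range[0] <= s <= s_range[1] and
              v_range[0] <= v <= v_range[1])
    return not inside
```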
After the dark noise is determined by the color-based method, it can be removed from the blood-vessel-removed image. As with removal of the pseudo blood vessel region, the pixel color values of the dark noise may be set to 0 or 1. Alternatively, the coordinates of the pixels marked as dark noise may be recorded and the RGB values of the pixels at those coordinates set accordingly in the blood-vessel-removed image, which is not described again here.
According to another embodiment, the dark noise removing unit 390 may also determine the dark noise area according to a gradient method. Specifically, the characteristic that the color contrast of the bleeding point and the background area in the G channel is high can be utilized to calculate the gradient amplitude of each pixel in the enhanced image in the G channel and the mean value of the gradient amplitudes in each connected domain, wherein the connected domain can be determined from the dark area image. If the mean value of the gradient amplitudes in a connected domain is smaller than a preset threshold value, marking the connected domain as a dark noise area; otherwise, the connected domain is considered as a bleeding area. Wherein the predetermined threshold is a mean of gradient magnitudes of all other pixels excluding all connected regions in the enhanced image. After the dark noise area is determined according to the gradient method, the marked dark noise area can be removed from the blood vessel removed image by referring to the removal method of the pseudo blood vessel area, which is not described herein again.
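The gradient-based test can be sketched as follows, using `numpy.gradient` for the G-channel gradient magnitude and `scipy.ndimage.label` for the connected domains (names and structure are choices of this sketch):

```python
import numpy as np
from scipy.ndimage import label

def gradient_dark_noise(g_channel, candidate_mask):
    """Flag candidate connected domains whose mean gradient magnitude in the
    G channel is below the mean gradient magnitude of all remaining pixels
    (the predetermined threshold described in the text)."""
    gy, gx = np.gradient(g_channel.astype(float))  # row and column derivatives
    magnitude = np.hypot(gx, gy)
    labels, n = label(candidate_mask)              # connected domains of the mask
    background_mean = magnitude[labels == 0].mean()
    noise_mask = np.zeros(candidate_mask.shape, dtype=bool)
    for i in range(1, n + 1):
        region = labels == i
        if magnitude[region].mean() < background_mean:
            noise_mask[region] = True              # low contrast -> dark noise
    return noise_mask
```

Regions with high contrast against the background keep a gradient mean above the threshold and are retained as bleeding areas.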
In practice, either the color-based or the gradient-based method may be selected to determine the dark noise region. However, to achieve a better removal effect, the dark noise regions determined in the two ways may be combined. When the combined approach is adopted, the two methods can be performed in either order; the invention does not limit their sequence. The dark noise regions marked by both methods are then removed from the blood-vessel-removed image, yielding the final hemorrhage region. In this combined approach, dark noise is determined in two steps, the second building on the first: a region already determined to be dark noise in the first step is not examined again in the second. This allows the dark noise region to be determined accurately without omission while avoiding unnecessary calculation, thereby increasing the computation speed. The bleeding area obtained by combining the two methods is shown as the black spot areas in fig. 5H.
Fig. 4 shows a flowchart of a retinal image hemorrhage region segmentation method 400 according to one embodiment of the invention. The method 400 is suitable for being performed in the retinal image hemorrhage region segmentation apparatus 300 shown in fig. 3. As shown in fig. 4, the method 400 begins at step S420.
In step S420, a retinal image to be segmented is acquired, and contrast enhancement is performed on the image to obtain an enhanced image of the retinal image; and in step S440, the enhanced image is subjected to a filtering process to extract a background image of the retina image. The specific processes of the two steps can refer to the foregoing description of the image preprocessing unit 320, and are not described herein again.
Subsequently, in step S460, the RGB color value of each pixel in the enhanced image and the RGB color value of the corresponding pixel in the background image are subtracted to obtain a difference image. The specific process of this step may refer to the foregoing description of the difference image generating unit 340, and is not described herein again.
In step S470, a dark area image is obtained according to the RGB color values of the pixels in the difference image, and the dark area image marks a dark area in the retina image, where the dark area includes a blood vessel area, a bleeding area, and a dark noise area. The specific process of this step may refer to the foregoing description of the dark area determining unit 360, and is not described herein again.
Subsequently, in step S480, a blood vessel region is determined from the enhanced image and removed from the dark region image, resulting in a blood vessel-removed image. The detailed process of this step can refer to the foregoing description of the blood vessel removing unit 380, and will not be described herein again.
Subsequently, in step S490, a dark noise area is determined from the enhanced image and removed from the blood-vessel-removed image to obtain the bleeding area of the retinal image. The specific process of this step can refer to the foregoing description of the dark noise removing unit 390 and is not repeated here.
The following is an embodiment of the present invention for segmentation of a retinal image hemorrhage region:
1) acquiring an original retinal image, as shown in fig. 5A;
2) cutting and size-adjusting the original retina image to obtain a graph 5B;
3) contrast enhancement is performed on fig. 5B, resulting in the enhanced image (fig. 5C);
4) wiener filtering is performed on fig. 5C to extract the background image (fig. 5D);
5) the RGB values of the corresponding pixels of the background image (fig. 5D) are subtracted from the RGB pixel values of the enhanced image (fig. 5C) to obtain a difference image, which is then binarized into the dark region image (fig. 5E);
6) Gaussian filtering is applied to fig. 5C multiple times, the filtering results are merged, the merged image is converted into a binary intermediate image, connected-domain analysis is performed on this binary image to determine the pseudo blood vessel region, and that region is removed from the intermediate image to obtain the blood vessel distribution map (fig. 5F);
7) subtracting the blood vessel distribution map (fig. 5F) from the dark region image (fig. 5E) to obtain a blood vessel removed image, as shown in fig. 5G;
8) the dark noise area in fig. 5C is determined according to the color and gradient thereof, respectively, and is removed from fig. 5G, resulting in the final bleeding area as shown in fig. 5H.
According to the technical scheme of the invention, an enhanced image is obtained by contrast-enhancing the retinal image, and a background image is obtained by filtering the enhanced image; a difference image is derived from the enhanced and background images; and a dark region comprising blood vessel, bleeding and dark noise regions is separated from the difference image. The blood vessel region and the dark noise region are then located from the enhanced image, and finally both are removed from the dark region image to obtain the final bleeding region. The method can quickly and comprehensively locate lesion candidate areas and interference areas and remove the interference areas from the candidates, thereby locating the bleeding area accurately and preventing misjudgment. In addition, the invention improves the analysis and detection speed over traditional fundus-image processing, greatly reduces the manpower and material resources required for data processing, and can be widely applied to large-scale automated fundus-image applications.

Furthermore, standardized cropping of the fundus images can reduce the differences between different fundus images, and the enhancement processing can reduce the differences caused by uneven illumination within the same fundus image. These refinements improve the processing precision of the fundus images in various details, thereby further improving the accuracy of the bleeding-area determination.
A8, the method as in a7, wherein the first predetermined condition comprises any one of: the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is larger than a second threshold, and the ratio of the length of the long axis to the length of the short axis of the connected domain is smaller than a third threshold; the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is smaller than a fourth threshold, and the ratio of the perimeter is larger than a fifth threshold; or the area of the connected domain is smaller than a sixth threshold, the eccentricity is smaller than a seventh threshold, and the ratio of the length of the long axis to the length of the short axis is smaller than an eighth threshold.
A9, the method of A1, wherein the step of determining the dark noise region from the enhanced image comprises: converting the enhanced image from an RGB color space to an HSV color space; and judging whether the HSV value of each pixel meets a second preset condition, and if so, marking the pixel as dark noise.
A10, the method as claimed in a1 or a9, wherein the step of determining the dark noise region from the enhanced image further comprises: calculating the gradient amplitude of each pixel in the enhanced image in a G channel and the mean value of the gradient amplitude in each connected domain, wherein the connected domains are suitable for being determined from the dark area image; and if the mean value of the gradient amplitudes in a connected domain is smaller than a preset threshold value, marking the connected domain as a dark noise area.
A11, the method as in any one of A1-A10, wherein α = β = 4, γ = 0.5, σ is an integer between 10 and 20, the first threshold range is [200, 5000], the second threshold is 0.35, the third threshold is 2.5, the fourth threshold is 0.25, the fifth threshold is 0.95, the sixth threshold is 600, the seventh threshold is 0.97, and the eighth threshold is 2.
B13, the apparatus as in B12, wherein the image pre-processing unit is adapted to contrast-enhance the retinal image according to the following method: normalizing the RGB three-channel color values of each pixel in the retinal image to numbers between 0 and 1; for each RGB color channel, determining the color value of each pixel in the enhanced image according to the formula I1(x,y) = α·I0(x,y) − β·I(x,y;σ) + γ, wherein I1(x,y) denotes the color value of the pixel with coordinates (x,y) in the enhanced image, I0(x,y) denotes the color value of the pixel with coordinates (x,y) in the retinal image, and I(x,y;σ) denotes the local mean of the pixel with coordinates (x,y) in the retinal image, the local mean being derived by Gaussian filtering with a given window size and variance σ.
B14, the apparatus as defined in B12, wherein the image pre-processing unit is adapted to filter the enhanced image according to: generating a plurality of filters having different window sizes; filtering the RGB three color channels of each pixel in the enhanced image by adopting the plurality of filters respectively to obtain a plurality of filtering results of each channel; and averaging a plurality of filtering results of each channel to obtain a color value of the channel, thereby obtaining the background image.
B15, the device of B12 or B14, wherein the filtering is wiener filtering, of the formula:

F̂(u,v) = (1/H(u,v)) · |H(u,v)|² / (|H(u,v)|² + K) · G(u,v)

where F̂(u,v) is the frequency-domain transform of the image extracted by wiener filtering, G(u,v) is the frequency-domain transform of the image currently being wiener filtered, H(u,v) is a degradation function, and K is a fixed constant.
B16, the apparatus as defined in B12, wherein the dark region determining unit is adapted to obtain a dark region image according to the following method: acquiring RGB three-channel color values of each pixel in the difference image, and determining a color threshold of each channel according to the acquired color values of the channels; and marking the color values of the channels of each pixel as 0 or 1 by comparing the color value of each channel of each pixel in the difference image with the color threshold of the corresponding channel, so that the difference image is converted into a dark area image, and the dark area image is a binary image.
B17, the apparatus as in B12, wherein the vessel removal unit is adapted to determine the vessel region from the enhanced image and remove it from the dark region image according to the following method: filtering the enhanced image multiple times with multiple window sizes under different variances to obtain multiple filtering results under each variance, and averaging these results to obtain a filtering mean under that variance; combining the filtering means under each variance, and performing threshold segmentation on the combined image to obtain an intermediate image, the intermediate image being a binary image comprising a pseudo blood vessel region and a blood vessel region; determining the pseudo blood vessel region in the intermediate image by connected-domain analysis of the image; removing the pseudo blood vessel region from the intermediate image to obtain a distribution map of the blood vessel region, denoted the blood vessel distribution map; and removing the blood vessel region from the dark region image by taking the difference between the RGB color values of each pixel of the dark region image and those of the corresponding pixel in the blood vessel distribution map.
B18, the apparatus as in B17, wherein the vessel removal unit is further adapted to determine pseudo-vessel regions according to the following method: determining each connected domain in the intermediate image; calculating attribute values of each connected domain, the attribute values comprising at least one of: the area and perimeter of the connected domain, the smallest rectangular frame containing it, and the eccentricity, major-axis length and minor-axis length of the ellipse having the same normalized second-order central moments as the region; and judging whether the attribute values of each connected domain satisfy a first predetermined condition, and if so, marking that connected domain as a pseudo blood vessel region.
B19, the apparatus as in B18, wherein the first predetermined condition comprises any one of: the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is larger than a second threshold, and the ratio of the length of the long axis to the length of the short axis of the connected domain is smaller than a third threshold; the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is smaller than a fourth threshold, and the ratio of the perimeter is larger than a fifth threshold; or the area of the connected domain is smaller than a sixth threshold, the eccentricity is smaller than a seventh threshold, and the ratio of the length of the long axis to the length of the short axis is smaller than an eighth threshold.
B20, the apparatus as claimed in B12, wherein the dark noise removal unit is adapted to determine the dark noise region from the enhanced image according to the following method: converting the enhanced image from an RGB color space to an HSV color space; and judging whether the HSV value of each pixel meets a second preset condition, and if so, marking the pixel as dark noise.
B21, the apparatus as claimed in B12 or B20, wherein the dark noise removal unit is further adapted to determine the dark noise region from the enhanced image according to the following method: calculating the gradient amplitude of each pixel in the enhanced image in a G channel and the mean value of the gradient amplitude in each connected domain, wherein the connected domains are suitable for being determined from the dark area image; and if the mean value of the gradient amplitudes in a connected domain is smaller than a preset threshold value, marking the connected domain as a dark noise area.
B22, the device according to any one of B12-B21, wherein α = β = 4, γ = 0.5, σ is an arbitrary integer between 10 and 20, the first threshold range is [200, 5000], the second threshold is 0.35, the third threshold is 2.5, the fourth threshold is 0.25, the fifth threshold is 0.95, the sixth threshold is 600, the seventh threshold is 0.97, and the eighth threshold is 2.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to execute the retinal image hemorrhage region segmentation method according to the present invention based on instructions in the program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this manner of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. It should also be noted that the language used in this specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the present disclosure is illustrative rather than restrictive, the scope of the invention being defined by the appended claims.

Claims (18)

1. A retinal image hemorrhage region segmentation method, executed in a computing device, the method comprising:
obtaining a retina image to be segmented, and carrying out contrast enhancement on the image to obtain an enhanced image of the retina image;
filtering the enhanced image to extract a background image of the retina image;
taking a difference value between the RGB color value of each pixel in the enhanced image and the RGB color value of the corresponding pixel in the background image to obtain a difference image;
obtaining a dark area image according to the RGB color values of the pixels in the difference image, wherein the dark area image is marked with a dark area in the retina image, and the dark area comprises a blood vessel area, a bleeding area and a dark noise area;
determining the blood vessel region from the enhanced image, and removing the blood vessel region from the dark region image to obtain a blood vessel removed image, comprising:
filtering the enhanced image multiple times with a plurality of window sizes at each of a plurality of variances to obtain multiple filtering results under each variance, and averaging the multiple filtering results under a variance to obtain a filtering mean value under that variance;
combining the filtering mean values under each variance, and performing threshold segmentation on the combined image to obtain an intermediate image, wherein the intermediate image comprises a pseudo blood vessel region and a blood vessel region and is a binary image;
determining a pseudo-vessel region in the intermediate image by performing connected component analysis on the image, comprising: determining each connected domain in the intermediate image; calculating an attribute value of each connected domain, wherein the attribute value comprises the area and the perimeter of the connected domain, a minimum rectangular frame containing the connected domain, and at least one of eccentricity, major axis length and minor axis length of an ellipse with the same standard second-order central moment as the region; judging whether the attribute values of the connected domains meet a first preset condition, if so, marking the connected domains as pseudo-blood vessel regions;
removing the pseudo blood vessel region from the intermediate image to obtain a distribution map of the blood vessel region, and recording the distribution map as a blood vessel distribution map;
removing the blood vessel region from the dark region image by taking the difference value between the RGB color value of each pixel of the dark region image and the RGB color value of the corresponding pixel in the blood vessel distribution map; and
determining the dark noise area from the enhanced image, and removing the dark noise area from the blood vessel removing image to obtain a bleeding area of the retina image;
wherein the step of determining the dark noise region from the enhanced image comprises: calculating the gradient amplitude of each pixel in the enhanced image in a G channel and the mean value of the gradient amplitude in each connected domain, wherein the connected domains are suitable for being determined from the dark area image; and if the mean value of the gradient amplitudes in a connected domain is smaller than a preset threshold value, marking the connected domain as a dark noise area.
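By way of example, and not limitation, the gradient-amplitude test used above to mark dark noise may be sketched in Python with NumPy. The function name, the demo arrays, and the decision threshold below are hypothetical illustrations; the claim fixes neither the preset threshold value nor the gradient operator.

```python
import numpy as np

def mean_gradient_magnitude(g_channel, region_mask):
    """Mean gradient amplitude of the G channel inside one connected domain.

    Per claim 1, a connected domain whose mean falls below a preset
    threshold (left to the caller here) would be marked as dark noise.
    """
    gy, gx = np.gradient(g_channel.astype(float))  # finite differences
    magnitude = np.hypot(gx, gy)
    return magnitude[region_mask].mean()

# A toy G channel with a vertical intensity edge at column 5.
g = np.zeros((10, 10))
g[:, 5:] = 1.0
flat = np.zeros((10, 10), dtype=bool); flat[:, :3] = True   # textureless region
edge = np.zeros((10, 10), dtype=bool); edge[:, 4:6] = True  # region on the edge
print(mean_gradient_magnitude(g, flat))  # low mean -> dark noise candidate
print(mean_gradient_magnitude(g, edge))  # high mean -> not dark noise
```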
2. The method of claim 1, wherein the step of contrast enhancing the retinal image comprises:
normalizing the RGB three-channel color values of each pixel in the retina image into a number between 0 and 1;
for each color channel of RGB, determining the color value of each pixel in the enhanced image according to the following formula:
I1(x, y) = α·I0(x, y) − β·I(x, y; σ) + γ
wherein I1(x, y) denotes the color value of the pixel with coordinates (x, y) in the enhanced image, I0(x, y) denotes the color value of the pixel with coordinates (x, y) in the retinal image, and I(x, y; σ) denotes the local mean at the pixel with coordinates (x, y) in the retinal image, the local mean being obtained by Gaussian filtering in which both the window size and the variance equal σ.
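By way of example, and not limitation, the enhancement formula of claim 2 may be sketched as follows. The parameter defaults (α = β = 4, γ = 0.5, σ in the 10-20 range) follow claim 8; a separable Gaussian blur stands in for the Gaussian filtering, and the `size`/`sigma` parameter names are assumptions for the symbol elided in the claim text.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 1-D Gaussian kernel with the given window size and sigma."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def enhance_channel(ch, alpha=4.0, beta=4.0, gamma=0.5, size=15, sigma=10.0):
    """I1 = alpha*I0 - beta*localmean(I0) + gamma on one normalized channel.

    The local mean is a separable Gaussian blur; size/sigma defaults are
    illustrative choices consistent with the ranges suggested in claim 8.
    """
    k = gaussian_kernel(size, sigma)
    # separable blur: convolve each row, then each column
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, ch)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return alpha * ch - beta * blur + gamma

channel = np.random.default_rng(0).random((32, 32))  # one channel, normalized to [0, 1]
enhanced = enhance_channel(channel)
```

On a constant channel the blur equals the channel away from the borders, so with α = β the output reduces to the constant plus γ minus β times the constant, i.e. the enhancement only amplifies deviations from the local mean.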
3. The method of claim 1, wherein the step of performing a filtering process on the enhanced image to extract a background image of the retinal image comprises:
generating a plurality of filters having different window sizes;
filtering the RGB three color channels of each pixel in the enhanced image by adopting the plurality of filters respectively to obtain a plurality of filtering results of each channel; and
and averaging a plurality of filtering results of each channel to obtain a color value of the channel, thereby obtaining the background image.
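By way of example, and not limitation, the multi-window background extraction of claim 3 may be sketched as below for one channel. A box (mean) filter stands in for the wiener filtering named in claim 4, and the window sizes are placeholders; the claims do not fix them.

```python
import numpy as np

def box_filter(ch, size):
    """Mean filter over a square window, with edge padding."""
    pad = size // 2
    padded = np.pad(ch, pad, mode="edge")
    out = np.zeros(ch.shape, dtype=float)
    for dy in range(size):          # sum all shifted views of the window
        for dx in range(size):
            out += padded[dy:dy + ch.shape[0], dx:dx + ch.shape[1]]
    return out / (size * size)

def extract_background(ch, sizes=(3, 5, 9)):
    """Filter one channel with several window sizes and average the results."""
    return np.mean([box_filter(ch, s) for s in sizes], axis=0)

channel = np.random.default_rng(1).random((16, 16))
background = extract_background(channel)  # repeated per RGB channel in the claim
```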
4. The method of claim 1, wherein the filtering is wiener filtering, which is calculated by:
F̂(u, v) = [1/H(u, v)] · [|H(u, v)|² / (|H(u, v)|² + K)] · G(u, v)
wherein F̂(u, v) is the frequency-domain transform of the image extracted by the wiener filtering, G(u, v) is the frequency-domain transform of the image currently being processed, H(u, v) is a degradation function, and K is a fixed constant.
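By way of example, and not limitation, the wiener filtering of claim 4 may be sketched in the frequency domain as follows. The code uses the algebraically equivalent form conj(H)/(|H|² + K); the demo degradation kernel and the value of K are illustrative choices, not taken from the patent.

```python
import numpy as np

def wiener_filter(g, h, K=0.01):
    """Frequency-domain Wiener filter.

    g: observed (currently processed) image; h: degradation kernel, same
    shape as g with its center at index [0, 0]; K: fixed constant.
    F_hat = conj(H) / (|H|^2 + K) * G, i.e. (1/H) * |H|^2 / (|H|^2 + K) * G.
    """
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

g = np.random.default_rng(2).random((8, 8))
delta = np.zeros((8, 8)); delta[0, 0] = 1.0  # identity degradation (no blur)
restored = wiener_filter(g, delta, K=0.0)    # recovers g exactly when K = 0
```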
5. The method of claim 1, wherein the step of obtaining the dark region image according to the RGB color values of the pixels in the difference image comprises:
acquiring RGB three-channel color values of each pixel in the difference image, and determining a color threshold of each channel according to the acquired color values of the channels; and
and marking the color values of the channels of each pixel as 0 or 1 by comparing the color value of each channel of each pixel in the difference image with the color threshold of the corresponding channel, so that the difference image is converted into a dark area image, and the dark area image is a binary image.
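By way of example, and not limitation, the binarization of claim 5 may be sketched as follows. How each channel's threshold is derived from its color values is not fixed by the claim, so a low percentile is used here as an assumption, as is the rule that a pixel is dark only when all three channels fall at or below their thresholds.

```python
import numpy as np

def dark_region_mask(diff_img, pct=3):
    """Binarize a difference image into a dark-area mask (claim 5 sketch).

    diff_img: H x W x 3 difference image. For each RGB channel a threshold
    is taken as the pct-th percentile of that channel's values (an assumed
    derivation rule); a pixel is marked 1 (dark) only if every channel is
    at or below its threshold (an assumed combination rule).
    """
    masks = []
    for c in range(3):
        ch = diff_img[..., c]
        masks.append(ch <= np.percentile(ch, pct))
    return np.logical_and.reduce(masks)

diff = np.ones((10, 10, 3))
diff[:2, :2, :] = 0.0          # a dark 2x2 corner
mask = dark_region_mask(diff)  # binary dark-area image
```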
6. The method of any one of claims 1-5, wherein the first predetermined condition comprises any one of:
the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is larger than a second threshold, and the ratio of the length of the long axis to the length of the short axis of the connected domain is smaller than a third threshold;
the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is smaller than a fourth threshold, and the ratio of the perimeter is larger than a fifth threshold; or
The area of the connected domain is smaller than a sixth threshold, the eccentricity is smaller than a seventh threshold, and the ratio of the length of the long axis to the length of the short axis is smaller than an eighth threshold.
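By way of example, and not limitation, the third alternative of the first predetermined condition, the only one whose operands are unambiguous in translation, may be sketched as follows, with the threshold defaults of claim 8 (sixth threshold 600, seventh threshold 0.97, eighth threshold 2). The attribute values would in practice come from connected-component analysis of the intermediate image.

```python
def is_pseudo_vessel_small(area, eccentricity, major_axis, minor_axis,
                           sixth=600.0, seventh=0.97, eighth=2.0):
    """Third alternative of the first predetermined condition:
    a small, nearly circular, non-elongated connected domain is marked
    as a pseudo blood vessel region. Threshold defaults follow claim 8."""
    return (area < sixth
            and eccentricity < seventh
            and major_axis / minor_axis < eighth)

# small round blob -> marked pseudo vessel
print(is_pseudo_vessel_small(100, 0.20, 12.0, 11.0))  # True
# long thin structure -> kept as vessel
print(is_pseudo_vessel_small(100, 0.99, 40.0, 4.0))   # False
```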
7. The method of claim 1, wherein determining the dark noise region from the enhanced image comprises:
converting the enhanced image from an RGB color space to an HSV color space; and
and judging whether the HSV value of each pixel meets a second preset condition, and if so, marking the pixel as dark noise.
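By way of example, and not limitation, the HSV test of claim 7 may be sketched per pixel with the standard-library colorsys conversion. The second predetermined condition itself is not spelled out in the claims, so a low-value, low-saturation rule and its thresholds are assumed stand-ins.

```python
import colorsys

def is_dark_noise(r, g, b, v_max=0.2, s_max=0.25):
    """Mark a pixel as dark noise from its HSV values (claim 7 sketch).

    r, g, b are normalized to [0, 1]. The 'dim and desaturated' rule and
    the v_max/s_max thresholds are assumptions, not taken from the claims.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return v < v_max and s < s_max

print(is_dark_noise(0.05, 0.05, 0.06))  # near-black pixel -> True
print(is_dark_noise(0.60, 0.10, 0.10))  # bright saturated red -> False
```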
8. The method of claim 6, wherein,
α = β = 4, γ = 0.5, and σ is an arbitrary integer between 10 and 20;
the first threshold range is [200, 5000], the second threshold is 0.35, the third threshold is 2.5, the fourth threshold is 0.25, the fifth threshold is 0.95, the sixth threshold is 600, the seventh threshold is 0.97, and the eighth threshold is 2.
9. An apparatus for retinal image hemorrhage region segmentation, executed in a computing device, the apparatus comprising:
the image preprocessing unit is suitable for acquiring a retina image to be segmented, performing contrast enhancement on the image to obtain an enhanced image of the retina image, and performing filtering processing on the enhanced image to extract a background image of the retina image;
the difference image generating unit is suitable for obtaining the difference value of the RGB color value of each pixel in the enhanced image and the RGB color value of the corresponding pixel in the background image to obtain a difference image;
the dark area determining unit is suitable for obtaining a dark area image according to the RGB color values of all pixels in the difference image, the dark area image is marked with a dark area in the retina image, and the dark area comprises a blood vessel area, a bleeding area and a dark noise area;
a vessel removing unit adapted to determine the vessel region from the enhanced image and remove the region from the dark region image to obtain a vessel-removed image, and specifically adapted to:
filtering the enhanced image multiple times with a plurality of window sizes at each of a plurality of variances to obtain multiple filtering results under each variance, and averaging the multiple filtering results under a variance to obtain a filtering mean value under that variance;
combining the filtering mean values under each variance, and performing threshold segmentation on the combined image to obtain an intermediate image, wherein the intermediate image comprises a pseudo blood vessel region and a blood vessel region and is a binary image;
determining a pseudo-vessel region in the intermediate image by performing connected component analysis on the image, comprising: determining each connected domain in the intermediate image; calculating an attribute value of each connected domain, wherein the attribute value comprises the area and the perimeter of the connected domain, a minimum rectangular frame containing the connected domain, and at least one of eccentricity, major axis length and minor axis length of an ellipse with the same standard second-order central moment as the region; judging whether the attribute values of the connected domains meet a first preset condition, if so, marking the connected domains as pseudo-blood vessel regions;
removing the pseudo blood vessel region from the intermediate image to obtain a distribution map of the blood vessel region, and recording the distribution map as a blood vessel distribution map;
removing the blood vessel region from the dark region image by taking the difference value between the RGB color value of each pixel of the dark region image and the RGB color value of the corresponding pixel in the blood vessel distribution map; and
a dark noise removal unit adapted to determine the dark noise region from the enhanced image and remove the region from the de-angioed image, resulting in a hemorrhage region of the retinal image;
wherein the dark noise removal unit is adapted to determine the dark noise region from the enhanced image according to the following steps: calculating the gradient amplitude of each pixel in the enhanced image in a G channel and the mean value of the gradient amplitude in each connected domain, wherein the connected domains are suitable for being determined from the dark area image; and if the mean value of the gradient amplitudes in a connected domain is smaller than a preset threshold value, marking the connected domain as a dark noise area.
10. The apparatus of claim 9, wherein the image pre-processing unit is adapted to contrast-enhance the retinal image according to the following method:
normalizing the RGB three-channel color values of each pixel in the retina image into a number between 0 and 1;
for each color channel of RGB, determining the color value of each pixel in the enhanced image according to the following formula:
I1(x, y) = α·I0(x, y) − β·I(x, y; σ) + γ
wherein I1(x, y) denotes the color value of the pixel with coordinates (x, y) in the enhanced image, I0(x, y) denotes the color value of the pixel with coordinates (x, y) in the retinal image, and I(x, y; σ) denotes the local mean at the pixel with coordinates (x, y) in the retinal image, the local mean being obtained by Gaussian filtering in which both the window size and the variance equal σ.
11. The apparatus of claim 9, wherein the image pre-processing unit is adapted to filter the enhanced image according to:
generating a plurality of filters having different window sizes;
filtering the RGB three color channels of each pixel in the enhanced image by adopting the plurality of filters respectively to obtain a plurality of filtering results of each channel; and
and averaging a plurality of filtering results of each channel to obtain a color value of the channel, thereby obtaining the background image.
12. The apparatus of claim 9, wherein the filtering is wiener filtering, which is calculated by:
F̂(u, v) = [1/H(u, v)] · [|H(u, v)|² / (|H(u, v)|² + K)] · G(u, v)
wherein F̂(u, v) is the frequency-domain transform of the image extracted by the wiener filtering, G(u, v) is the frequency-domain transform of the image currently being processed, H(u, v) is a degradation function, and K is a fixed constant.
13. The apparatus of claim 9, wherein the dark region determining unit is adapted to obtain the dark region image according to the following method:
acquiring RGB three-channel color values of each pixel in the difference image, and determining a color threshold of each channel according to the acquired color values of the channels; and
and marking the color values of the channels of each pixel as 0 or 1 by comparing the color value of each channel of each pixel in the difference image with the color threshold of the corresponding channel, so that the difference image is converted into a dark area image, and the dark area image is a binary image.
14. The apparatus of any one of claims 9-13, wherein the first predetermined condition comprises any one of:
the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is larger than a second threshold, and the ratio of the length of the long axis to the length of the short axis of the connected domain is smaller than a third threshold;
the area of the connected domain meets a first threshold range, the ratio of the area of the minimum rectangular frame to the area of the connected domain is smaller than a fourth threshold, and the ratio of the perimeter is larger than a fifth threshold; or
The area of the connected domain is smaller than a sixth threshold, the eccentricity is smaller than a seventh threshold, and the ratio of the length of the long axis to the length of the short axis is smaller than an eighth threshold.
15. The apparatus of claim 9, wherein the dark noise removal unit is adapted to determine the dark noise region from the enhanced image according to the following method:
converting the enhanced image from an RGB color space to an HSV color space; and
and judging whether the HSV value of each pixel meets a second preset condition, and if so, marking the pixel as dark noise.
16. The apparatus of claim 14, wherein,
α = β = 4, γ = 0.5, and σ is an arbitrary integer between 10 and 20;
the first threshold range is [200, 5000], the second threshold is 0.35, the third threshold is 2.5, the fourth threshold is 0.25, the fifth threshold is 0.95, the sixth threshold is 600, the seventh threshold is 0.97, and the eighth threshold is 2.
17. A computing device, comprising:
at least one processor; and
a memory storing program instructions;
wherein the processor is configured to perform the method of any one of claims 1-8 according to program instructions stored in the memory.
18. A computer readable storage medium having program instructions stored thereon that are readable by a computing device to cause the computing device to perform the method of any of claims 1-8.
CN201710308401.7A 2017-05-04 2017-05-04 Retinal image bleeding area segmentation method and device and computing equipment Active CN107146231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710308401.7A CN107146231B (en) 2017-05-04 2017-05-04 Retinal image bleeding area segmentation method and device and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710308401.7A CN107146231B (en) 2017-05-04 2017-05-04 Retinal image bleeding area segmentation method and device and computing equipment

Publications (2)

Publication Number Publication Date
CN107146231A CN107146231A (en) 2017-09-08
CN107146231B true CN107146231B (en) 2020-08-07

Family

ID=59774023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710308401.7A Active CN107146231B (en) 2017-05-04 2017-05-04 Retinal image bleeding area segmentation method and device and computing equipment

Country Status (1)

Country Link
CN (1) CN107146231B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280816B (en) * 2017-12-19 2020-09-18 维沃移动通信有限公司 Gaussian filtering method and mobile terminal
CN108596895B (en) * 2018-04-26 2020-07-28 上海鹰瞳医疗科技有限公司 Fundus image detection method, device and system based on machine learning
CN108577803B (en) * 2018-04-26 2020-09-01 上海鹰瞳医疗科技有限公司 Fundus image detection method, device and system based on machine learning
CN109993731A (en) * 2019-03-22 2019-07-09 依未科技(北京)有限公司 A kind of eyeground pathological changes analysis method and device
CN113822897A (en) * 2021-11-22 2021-12-21 武汉楚精灵医疗科技有限公司 Blood vessel segmentation method, terminal and computer-readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101520888B (en) * 2008-02-27 2012-06-27 中国科学院自动化研究所 Method for enhancing blood vessels in retinal images based on the directional field
CN104102899A (en) * 2014-05-23 2014-10-15 首都医科大学附属北京同仁医院 Retinal vessel recognition method and retinal vessel recognition device


Non-Patent Citations (3)

Title
"基于视频的舌下微循环血流灌注自动评价方法";吕菲,赵兴群;《中国医疗器械杂志》;20141231;第38卷(第4期);第251-254页 *
"彩色眼底图像糖网渗出物的自动检测";吕卫 等;《光电工程》;20161231;第43卷(第12期);第183-192、199页 *
吕卫 等."彩色眼底图像糖网渗出物的自动检测".《光电工程》.2016,第43卷(第12期),第183-192、199页. *


Similar Documents

Publication Publication Date Title
CN107146231B (en) Retinal image bleeding area segmentation method and device and computing equipment
CN107038704B (en) Retina image exudation area segmentation method and device and computing equipment
CN110766736B (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN107123124B (en) Retina image analysis method and device and computing equipment
JP6255486B2 (en) Method and system for information recognition
WO2013168618A1 (en) Image processing device and image processing method
CN110176010B (en) Image detection method, device, equipment and storage medium
CN107292835B (en) Method and device for automatically vectorizing retinal blood vessels of fundus image
CN109509186B (en) Cerebral CT image-based ischemic stroke lesion detection method and device
CN109859217B (en) Segmentation method and computing device for pore region in face image
CN112150371B (en) Image noise reduction method, device, equipment and storage medium
CN111091571A (en) Nucleus segmentation method and device, electronic equipment and computer-readable storage medium
CN110880177A (en) Image identification method and device
CN111275659B (en) Weld image processing method and device, terminal equipment and storage medium
JP6819445B2 (en) Information processing equipment, control methods, and programs
CN113034525A (en) Image edge detection method, device and equipment
CN111915541B (en) Image enhancement processing method, device, equipment and medium based on artificial intelligence
CN107133932B (en) Retina image preprocessing method and device and computing equipment
Mudassar et al. Extraction of blood vessels in retinal images using four different techniques
Antal et al. A multi-level ensemble-based system for detecting microaneurysms in fundus images
CN109716355B (en) Particle boundary identification
CN113296095A (en) Target hyperbolic edge extraction method for pulse ground penetrating radar
CN109658394B (en) Fundus image preprocessing method and system and microangioma detection method and system
Ren et al. Automatic optic disc localization and segmentation in retinal images by a line operator and level sets
CN107038705B (en) Retinal image bleeding area segmentation method and device and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220511

Address after: 519000 unit Z, room 615, 6th floor, main building, No. 10, Keji 1st Road, Gangwan Avenue, Tangjiawan Town, Xiangzhou District, Zhuhai City, Guangdong Province (centralized office area)

Patentee after: Zhuhai Quanyi Technology Co.,Ltd.

Address before: 272500 No. 032, juntun Township commercial street, Wenshang County, Jining City, Shandong Province

Patentee before: Ji Xin