CN112102174A - Fundus image processing method, device, equipment and storage medium - Google Patents

Fundus image processing method, device, equipment and storage medium

Info

Publication number
CN112102174A
Authority
CN
China
Prior art keywords
foreground
fundus image
radius
original
rotated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011021576.8A
Other languages
Chinese (zh)
Other versions
CN112102174B (en)
Inventor
孙钦佩
杨叶辉
许言午
王磊
黄艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011021576.8A priority Critical patent/CN112102174B/en
Priority claimed from CN202011021576.8A external-priority patent/CN112102174B/en
Publication of CN112102174A publication Critical patent/CN112102174A/en
Application granted granted Critical
Publication of CN112102174B publication Critical patent/CN112102174B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/60: Rotation of a whole image or part thereof
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30041: Eye; Retina; Ophthalmic

Abstract

Embodiments of the present application disclose a fundus image processing method, apparatus, device and storage medium, relating to the field of artificial intelligence, in particular to computer vision and intelligent healthcare, and applicable to medical image analysis scenarios. One embodiment of the method comprises: rotating an original fundus image to obtain a rotated fundus image; calculating foreground radii of the original fundus image and the rotated fundus image respectively; determining a foreground mask based on the calculated foreground radii; and extracting a foreground region from the original fundus image based on the foreground mask. This embodiment improves the success rate of extracting the foreground region.

Description

Fundus image processing method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to the field of artificial intelligence, and particularly relates to a method, a device, equipment and a storage medium for processing fundus images.
Background
With the continuous development of informatization, information systems are increasingly used in daily work and have become indispensable tools. Meanwhile, as medical informatization systems develop further, informatization greatly improves the efficiency of medical image data analysis and promotes scientific research. Preprocessing of medical image data safeguards data quality. A fundus image in a medical system includes a foreground region and a background region. The foreground region is the colored circular area of the fundus; its data can be used to analyze fundus lesions and constitutes the effective region. The background region is the black area surrounding the fundus, on which white text is printed, including the patient's name, sex, age, date of birth, examination number, and so on. This information contributes little to image analysis, constitutes sensitive information, and must not be leaked. Therefore, the effective fundus region needs to be extracted by image processing. During data transmission and storage, only this part of the data is circulated, which prevents leakage of sensitive patient information and removes background noise to improve data quality.
At present, foreground extraction from fundus images is performed mainly in two ways. First, circle detection by Hough transform: circles in the image are detected with the Hough transform, the circle with the largest radius is taken as the region where the fundus foreground lies, the image inside the circle is retained, the image outside the circle is removed, and extraction of the effective image region is complete. Second, region segmentation by a deep-learning segmentation algorithm: a convolutional neural network performs pixel-level segmentation of the whole image, distinguishing the foreground region from the background region and segmenting out the effective fundus region.
Disclosure of Invention
The embodiment of the application provides a fundus image processing method, a fundus image processing device, a fundus image processing apparatus and a storage medium.
In a first aspect, an embodiment of the present application provides a fundus image processing method, including: rotating the original fundus image to obtain a rotated fundus image; calculating foreground radiuses of the original fundus image and the rotated fundus image respectively; determining a foreground mask based on the calculated foreground radius; based on the foreground mask, a foreground region is extracted from the original fundus image.
In a second aspect, an embodiment of the present application provides an apparatus for fundus image processing, including: a rotation module configured to rotate the original fundus image to obtain a rotated fundus image; a calculation module configured to calculate foreground radii of the original fundus image and the rotated fundus image, respectively; a determination module configured to determine a foreground mask based on the calculated foreground radius; an extraction module configured to extract a foreground region from the original fundus image based on the foreground mask.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.
In a fourth aspect, embodiments of the present application propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described in any one of the implementations of the first aspect.
According to the fundus image processing method, apparatus, device and storage medium provided by the embodiments of the present application, an original fundus image is rotated to obtain a rotated fundus image; foreground radii of the original fundus image and the rotated fundus image are then calculated respectively; a foreground mask is then determined based on the calculated foreground radii; and finally a foreground region is extracted from the original fundus image based on the foreground mask. Effective fundus information is retained, sensitive information is removed, foreground extraction can be performed on fundus images efficiently and quickly, and the success rate of extracting the foreground region is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings. The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is an exemplary system architecture to which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a fundus image processing method according to the present application;
FIG. 3 is a schematic view of a fundus image;
FIG. 4 is a flow diagram of one embodiment of a fundus image processing method according to the present application;
fig. 5 is a scene diagram in which a fundus image processing method according to an embodiment of the present application can be implemented;
fig. 6 is a schematic configuration diagram of an embodiment of a fundus image processing apparatus according to the present application;
fig. 7 is a block diagram of an electronic apparatus for implementing a fundus image processing method according to an embodiment of the present application.
Detailed Description
The following describes exemplary embodiments of the present application with reference to the accompanying drawings, including various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which an embodiment of a fundus image processing method or a fundus image processing apparatus of the present application may be applied.
As shown in fig. 1, a system architecture 100 may include a storage device 101, a network 102, and a server 103. Network 102 serves as a medium to provide communication links between storage devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The storage device 101 may interact with a server 103 through a network 102. The raw fundus image may be provided in a storage device 101, including but not limited to a database, user terminal, and the like.
The server 103 may provide various services, and for example, the server 103 may perform processing such as analysis on data such as an original fundus image acquired from the storage device 101, and generate a processing result (e.g., a foreground region of the original fundus image).
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
Note that, the fundus image processing method provided in the embodiment of the present application is generally executed by the server 103, and accordingly, a fundus image processing apparatus is generally provided in the server 103.
It should be understood that the number of storage devices, networks, and servers in FIG. 1 is illustrative only. There may be any number of storage devices, networks, and servers, as desired for an implementation. In the case where the original fundus image is stored in the server 103, the system architecture 100 may not be provided with the storage device 101 and the network 102.
With continued reference to fig. 2, a flow 200 of one embodiment of a fundus image processing method according to the present application is shown. The fundus image processing method includes the steps of:
in step 201, the original fundus image is rotated to obtain a rotated fundus image.
In the present embodiment, the execution subject of the fundus image processing method (e.g., the server 103 shown in fig. 1) may acquire an original fundus image and rotate it to obtain a rotated fundus image. The rotated fundus image differs from the original fundus image in orientation but not in content or size. The rotation angle and direction can be set randomly or by default. For example, rotating the original fundus image clockwise by 90 degrees yields a rotated fundus image.
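The 90-degree example can be sketched in a few lines of NumPy (the library choice and function name are illustrative; the patent does not prescribe an implementation):

```python
import numpy as np

def rotate_90_clockwise(image: np.ndarray) -> np.ndarray:
    """Rotate an H x W (x C) image 90 degrees clockwise.
    Content is preserved; height and width swap, no pixels are lost."""
    return np.rot90(image, k=-1, axes=(0, 1))

# Toy 2x3 "image" to show the orientation change.
img = np.array([[1, 2, 3],
                [4, 5, 6]])
rotated = rotate_90_clockwise(img)
# The first row of the result is the first column of img, read bottom-up.
```

For rotation by an arbitrary preset angle, a library routine (e.g. an affine-warp function) would be used instead; the principle is the same.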
In general, a fundus image is an image obtained by photographing the fundus of the eye. The fundus is the posterior tissue within the eyeball, the inner membrane of the eyeball, including the retina, the optic papilla, the macular area, and the central retinal artery and vein. A fundus image in a medical system includes a foreground region and a background region. The foreground area is a colored circular area of the fundus oculi, and data of the foreground area can be used for analyzing fundus oculi lesions and belongs to an effective area. The background area is a black background area of the fundus oculi, on which white characters are provided, including sensitive information of the patient's name, sex, age, date of birth, examination number, etc.
For ease of understanding, fig. 3 shows a fundus image schematic. The central circular area is a foreground area, the surrounding black area is a background area, and white text information such as the Name (Name: aaaaa), the Birth Date (Birth Date: bbbbbbbb), and the check number (PatientID: cccc) of the patient is included in the background area.
In step 202, foreground radii of the original fundus image and the rotated fundus image are calculated, respectively.
In the present embodiment, the above-described executing body may calculate foreground radii of the original fundus image and the rotated fundus image, respectively.
In general, methods of calculating the foreground radius of the fundus image are various. As an example, the execution body described above may detect a circular fundus contour of a fundus image, and take the radius of the detected circular fundus contour as a foreground radius.
Note that the foreground radii of the original fundus image and the rotated fundus image may be calculated by the same method or by different methods, and are not limited herein.
Step 203, determine foreground mask based on the calculated foreground radius.
In this embodiment, the execution subject may determine the foreground mask based on the calculated foreground radius. The foreground mask may be a matrix for extracting the foreground region. The elements of the foreground mask correspond to the pixels of the fundus image one by one, the value of the element corresponding to the pixel point in the foreground region is 1, and the value of the element corresponding to the pixel point in the background region is 0.
In general, the execution body may determine a boundary of a region of the foreground mask where the element is 1 and a region of the foreground mask where the element is 0 based on the calculated foreground radius. For example, the execution subject may select one foreground radius with the largest numerical value from all foreground radii, set the pixel point in a circle with the center point of the circular fundus contour as the center of the circle and the selected foreground radius as the radius to 1, and set the remaining pixel points to 0, so as to manufacture the foreground mask. For another example, the executing entity may calculate a mean value of all foreground radii, set the pixel points in a circle with the center point of the circular fundus contour as a center of the circle and the mean value as a radius to 1, and set the remaining pixel points to 0, so as to manufacture the foreground mask.
In step 204, a foreground region is extracted from the original fundus image based on the foreground mask.
In this embodiment, the execution subject described above may extract a foreground region from an original fundus image based on a foreground mask. Specifically, the execution body may multiply the foreground mask by the original fundus image, the pixel point multiplied by the element having a value of 1 in the foreground mask is kept unchanged, and the pixel point multiplied by the element having a value of 0 in the foreground mask is set to 0. In this way, the background region of the original fundus image is filtered out, and only the foreground region of the original fundus image is retained.
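The multiply-and-keep operation described above can be illustrated as follows (a minimal NumPy sketch; the helper name is mine, not from the patent):

```python
import numpy as np

def apply_foreground_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Multiply the binary foreground mask into the image: pixels under a
    1-element are kept unchanged, pixels under a 0-element become 0."""
    if image.ndim == 3:
        # Broadcast a 2-D mask over the colour channels.
        return image * mask[:, :, None]
    return image * mask

img = np.array([[10, 20],
                [30, 40]])
mask = np.array([[1, 0],
                 [0, 1]])
out = apply_foreground_mask(img, mask)
# Background pixels (mask == 0) are zeroed; foreground pixels survive.
```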
The fundus image processing method provided by the embodiment of the present application first rotates an original fundus image to obtain a rotated fundus image; then calculates foreground radii of the original fundus image and the rotated fundus image respectively; then determines a foreground mask based on the calculated foreground radii; and finally extracts a foreground region from the original fundus image based on the foreground mask. The foreground mask is made from the foreground radii of the original and rotated fundus images and is used to extract the foreground region from the original fundus image. Effective fundus information is retained, sensitive information is removed, foreground extraction can be performed efficiently and quickly, and the success rate of extracting the foreground region is improved.
With further reference to fig. 4, a flow 400 of yet another embodiment of a fundus image processing method according to the present application is shown. The fundus image processing method includes the steps of:
step 401, rotating the original fundus image for a plurality of times continuously at a preset rotation angle to obtain a plurality of rotated fundus images.
In the present embodiment, an execution subject of the fundus image processing method (e.g., the server 103 shown in fig. 1) continuously rotates an original fundus image a plurality of times at a preset rotation angle, resulting in a plurality of rotated fundus images. The rotation times and the rotation direction can be set randomly or by default. For example, the original fundus image is continuously rotated clockwise 3 times at 15 degrees, and 3 rotated fundus images rotated clockwise 15 degrees, 30 degrees, and 45 degrees with respect to the original fundus image are obtained.
Note that the more rotated fundus images are obtained by continuously rotating the original fundus image, the higher the accuracy of the extracted foreground region, but also the larger the amount of computation. Therefore, a suitable rotation angle and number of rotations can be determined by weighing the accuracy requirement against the computational cost.
Step 402, extracting pixel points on the central line of each fundus image to obtain a pixel point set.
In this embodiment, for each fundus image, the execution subject may extract the pixel points on a center line of the fundus image to obtain a pixel point set. A center line of the image is a straight line passing through the center of the image. The number of center lines may be one or more. When there are multiple center lines, the execution subject may extract the pixel points on each center line separately, correspondingly obtaining multiple pixel point sets.
It should be noted that the greater the number of center lines, the higher the accuracy of the extracted foreground region, but also the greater the amount of computation. Thus, a suitable number of center lines can be determined by weighing the accuracy requirement against the computational cost.
In some optional implementation manners of this embodiment, the central lines may include a horizontal central line and a vertical central line, and at this time, the pixel points on the horizontal central line and the vertical central line are respectively extracted to obtain a horizontal pixel point set and a vertical pixel point set. Wherein the horizontal center line is a horizontal line at half the height of the fundus image, and the vertical center line is a vertical line at half the width of the fundus image.
Step 403, calculating the pixel value mean value of the pixel point set.
In this embodiment, the execution subject may calculate a mean value of pixel values of the pixel point set.
At step 404, a foreground pixel value boundary value is determined based on the pixel value mean.
In this embodiment, the execution subject may determine the foreground pixel value boundary value based on the pixel value mean. For example, the pixel value average is subtracted by a preset value to obtain the foreground pixel value boundary value. For another example, the foreground pixel value boundary value is obtained by dividing the pixel value mean by a predetermined value.
In practical applications, repeated experiments show that taking the pixel value mean divided by 10 as the foreground pixel value boundary value separates the pixel points of the foreground region from those of the background region well: pixel values in the foreground region are all greater than the mean divided by 10, while pixel values in the background region are all smaller.
Step 405, counting the number of pixel points in the pixel point set which are larger than the boundary value of the foreground pixel value.
In this embodiment, the execution subject may count the number of pixel points in the pixel point set that are greater than the boundary value of the foreground pixel value. The number of pixel points greater than the foreground pixel value boundary value is the number of pixel points of the foreground region on the central line, that is, the foreground diameter of the fundus image.
In step 406, the foreground radius of the fundus image is determined based on the number.
In the present embodiment, the above-described execution subject may determine the foreground radius of the fundus image based on the number. Specifically, the number is divided by 2, and the foreground radius of the fundus image is obtained.
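Steps 402-406 amount to a few array operations. The following is a minimal NumPy illustration (the function name and the grayscale-input assumption are mine; the mean/10 boundary value follows the figure reported above):

```python
import numpy as np

def foreground_radius(gray: np.ndarray, axis: str = "horizontal") -> int:
    """Estimate the foreground radius from one center line of a 2-D
    grayscale fundus image (steps 402-406)."""
    h, w = gray.shape
    if axis == "horizontal":
        line = gray[h // 2, :]        # pixel set on the horizontal center line
    else:
        line = gray[:, w // 2]        # pixel set on the vertical center line
    boundary = line.mean() / 10       # foreground pixel value boundary value
    diameter = int((line > boundary).sum())   # pixels brighter than the boundary
    return diameter // 2              # radius = diameter / 2

# Synthetic fundus: bright disc of radius 30 centered in a 100x100 black frame.
yy, xx = np.mgrid[:100, :100]
img = (((yy - 50) ** 2 + (xx - 50) ** 2) <= 30 ** 2).astype(np.uint8) * 200
r = foreground_radius(img, "horizontal")
```

On this synthetic image the horizontal center line crosses 61 bright pixels, giving a radius of 30, which matches the disc that was drawn.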
It should be noted that steps 402-406 provide a method for efficiently determining the foreground radius. Steps 402-406 can be performed separately for each center line. If steps 402-406 are performed based on the horizontal center line, the resulting foreground radius is the horizontal foreground radius; if they are performed based on the vertical center line, the resulting foreground radius is the vertical foreground radius.
Step 407, select the foreground radius with the largest value from the calculated foreground radii as the foreground mask radius.
In this embodiment, the execution subject may select a foreground radius with a largest value from the calculated foreground radii as the foreground mask radius. The maximum foreground radius is used as the foreground mask radius, so that the foreground area in the original fundus image can be almost completely extracted, the influence of problems such as overexposure or underexposure of the original fundus image is avoided, and the data loss of the extracted foreground area is reduced as much as possible.
Step 408, a foreground mask is generated based on the foreground mask radius and the original fundus image.
In this embodiment, the execution body described above may generate a foreground mask from the foreground mask radius and the original fundus image. The foreground mask may be a matrix for extracting the foreground region. The elements of the foreground mask correspond to the pixels of the fundus image one by one, the value of the element corresponding to the pixel point in the foreground region is 1, and the value of the element corresponding to the pixel point in the background region is 0.
In general, the execution body may determine a boundary between a region of an element 1 and a region of an element 0 in the foreground mask based on the foreground mask radius. For example, the executing body may set the center of the center line corresponding to the foreground mask radius in the fundus image as the center of the circle, set the pixel points in the circle with the foreground mask radius as the radius to 1, and set the remaining pixel points to 0, so as to manufacture the foreground mask.
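The mask construction can be sketched as follows (a NumPy illustration; it assumes, as a simplification, that the circle center coincides with the image center, and the function name is mine):

```python
import numpy as np

def make_foreground_mask(height: int, width: int, radius: int) -> np.ndarray:
    """Build the binary foreground mask: elements inside the circle
    (centered at the image center, with the foreground mask radius)
    are 1; the remaining elements are 0."""
    yy, xx = np.mgrid[:height, :width]
    cy, cx = height // 2, width // 2
    return (((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2).astype(np.uint8)

mask = make_foreground_mask(7, 7, 2)
# mask[3, 3] (the center element) is 1; the corners are 0.
```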
In step 409, a foreground region is extracted from the original fundus image based on the foreground mask.
In this embodiment, the specific operation of step 409 has been described in detail in step 204 in the embodiment shown in fig. 2, and is not described herein again.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the fundus image processing method in the present embodiment highlights the foreground radius calculation step and the foreground mask generation step. The scheme described in this embodiment determines the foreground radius from the pixel value distribution, so that foreground pixel points are just included and background pixel points are excluded. Using the maximum foreground radius as the foreground mask radius allows the foreground region of the original fundus image to be extracted almost completely, unaffected by problems such as overexposure or underexposure of the original fundus image, minimizing data loss in the extracted foreground region and ensuring data integrity.
For ease of understanding, fig. 5 shows a scene diagram in which the fundus image processing method according to the embodiment of the present application can be implemented.
First, the original fundus image is continuously rotated clockwise 3 times at 15 degrees, and 1 original fundus image rotated 0 degrees and 3 rotated fundus images rotated clockwise 15 degrees, 30 degrees, 45 degrees with respect to the original fundus image are obtained.
Thereafter, for each of the 4 fundus images, the following steps are performed:
1) extract the pixel points on the horizontal center line of the fundus image, recorded as the pixel point set h_array;
2) calculate the mean value of the pixel point set h_array, recorded as h_mean;
3) count the number of pixel points in h_array whose pixel value is greater than h_mean/10, recorded as h_diameter;
4) take the foreground radius of the fundus image in the horizontal direction as h_radius = h_diameter/2;
5) similarly to steps 1) to 4), calculate the foreground radius of the fundus image in the vertical direction, thereby obtaining the foreground radii of the fundus image in both the horizontal and vertical directions.
Each fundus image yields 1 foreground radius in the horizontal direction and 1 in the vertical direction, so the 4 fundus images yield 8 foreground radii in total.
Then, the foreground radius with the largest value is selected from the 8 foreground radii as the mask radius.
And finally, extracting the original fundus image by using a mask to obtain a foreground region.
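Putting the walkthrough together, the following is a hedged end-to-end sketch in NumPy. The nearest-neighbor rotation is a stand-in for whatever rotation routine an implementation would actually use; all function names and the synthetic test image are illustrative, not from the patent:

```python
import numpy as np

def rotate_nn(gray, degrees):
    """Nearest-neighbor rotation about the image center (size preserved,
    out-of-frame samples filled with 0)."""
    h, w = gray.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    t = np.deg2rad(degrees)
    yy, xx = np.mgrid[:h, :w]
    # Inverse mapping: for each output pixel, find its source coordinate.
    sx = np.round(np.cos(t) * (xx - cx) - np.sin(t) * (yy - cy) + cx).astype(int)
    sy = np.round(np.sin(t) * (xx - cx) + np.cos(t) * (yy - cy) + cy).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(gray)
    out[valid] = gray[sy[valid], sx[valid]]
    return out

def center_line_radius(gray, axis):
    """Steps 1)-4): half the count of center-line pixels brighter than mean/10."""
    line = gray[gray.shape[0] // 2, :] if axis == "h" else gray[:, gray.shape[1] // 2]
    return int((line > line.mean() / 10).sum()) // 2

def extract_fundus_foreground(gray, angles=(15, 30, 45)):
    """Rotate, measure horizontal and vertical radii for each of the 4
    images (8 radii), keep the largest, and mask the original image."""
    images = [gray] + [rotate_nn(gray, a) for a in angles]
    radii = [center_line_radius(im, ax) for im in images for ax in ("h", "v")]
    r = max(radii)                    # foreground mask radius
    h, w = gray.shape
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - (h - 1) / 2) ** 2 + (xx - (w - 1) / 2) ** 2 <= r ** 2
    return np.where(mask, gray, 0)

# Synthetic fundus: bright disc (radius 30) centered at (50, 50) in a
# 101x101 black frame, plus one bright "text" pixel in the background.
yy, xx = np.mgrid[:101, :101]
img = (((yy - 50) ** 2 + (xx - 50) ** 2) <= 30 ** 2).astype(np.uint8) * 200
img[5, 5] = 255                       # stand-in for patient-information text
result = extract_fundus_foreground(img)
```

On this synthetic input the disc survives intact while the background "text" pixel is zeroed, which is the behavior the walkthrough describes.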
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a fundus image processing apparatus, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the fundus image processing apparatus 600 of the present embodiment may include: a rotation module 601, a calculation module 602, a determination module 603, and an extraction module 604. The rotation module 601 is configured to rotate an original fundus image to obtain a rotated fundus image; the calculation module 602 is configured to calculate foreground radii of the original fundus image and the rotated fundus image respectively; the determination module 603 is configured to determine a foreground mask based on the calculated foreground radii; and the extraction module 604 is configured to extract a foreground region from the original fundus image based on the foreground mask.
In the present embodiment, in the fundus image processing apparatus 600, the specific processing of the rotation module 601, the calculation module 602, the determination module 603, and the extraction module 604 and the technical effects thereof may refer to the related descriptions of steps 201-204 in the embodiment corresponding to fig. 2, and are not repeated here.
In some optional implementations of this embodiment, the calculation module 602 is further configured to: for each fundus image, extract the pixel points on the center line of the fundus image to obtain a pixel point set; calculate the pixel value mean of the pixel point set; determine a foreground pixel value boundary value based on the pixel value mean; count the number of pixel points in the pixel point set that are greater than the foreground pixel value boundary value; and determine the foreground radius of the fundus image based on the number.
In some optional implementations of this embodiment, the centerline includes a horizontal centerline and a vertical centerline, and the foreground radius includes a foreground horizontal radius and a foreground vertical radius.
In some optional implementations of this embodiment, the determining module 603 is further configured to: selecting the foreground radius with the largest value from the calculated foreground radii as a foreground mask radius; and generating a foreground mask according to the foreground mask radius and the original fundus image.
In some optional implementations of this embodiment, the rotation module 601 is further configured to rotate the original fundus image multiple times in succession by a preset rotation angle to obtain multiple rotated fundus images.
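The repeated rotation can be sketched as follows; a fixed 90-degree step (`np.rot90`) stands in for the unspecified preset rotation angle, which is an illustrative assumption.

```python
import numpy as np

def rotate_multiple(image: np.ndarray, times: int) -> list:
    """Rotate the original fundus image several times in succession,
    collecting every intermediate rotated image.

    The 90-degree step is an illustrative stand-in for the preset
    rotation angle, which the text leaves unspecified.
    """
    rotated = []
    current = image
    for _ in range(times):
        current = np.rot90(current)  # one preset-angle rotation step
        rotated.append(current.copy())
    return rotated
```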
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 7, fig. 7 is a block diagram of an electronic device for implementing the fundus image processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 7, the electronic device includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the fundus image processing method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the fundus image processing method provided by the present application.
The memory 702, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the fundus image processing method in the embodiments of the present application (for example, the rotation module 601, the calculation module 602, the determination module 603, and the extraction module 604 shown in fig. 6). By running the non-transitory software programs, instructions, and modules stored in the memory 702, the processor 701 executes the various functional applications and data processing of the server, that is, implements the fundus image processing method of the above method embodiment.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic apparatus of the fundus image processing method, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include a memory provided remotely from the processor 701, and these remote memories may be connected to the electronic device of the fundus image processing method through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the fundus image processing method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the fundus image processing method; examples of such input devices include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the present application, an original fundus image is first rotated to obtain a rotated fundus image; foreground radii of the original fundus image and the rotated fundus image are then calculated respectively; a foreground mask is then determined based on the calculated foreground radii; and finally a foreground region is extracted from the original fundus image based on the foreground mask. In this way, effective fundus information is retained while sensitive information is removed, foreground extraction can be performed on fundus images efficiently and quickly, and the success rate of foreground region extraction is improved.
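Putting the four steps together, a minimal end-to-end sketch might look like the following; the 90-degree rotation step, the 0.1 threshold factor, and the centered circular mask are all illustrative assumptions rather than details fixed by the text.

```python
import numpy as np

def centerline_radii(gray: np.ndarray, factor: float = 0.1) -> list:
    """Horizontal and vertical foreground radii of one grayscale image
    (the threshold factor and the halving are illustrative assumptions)."""
    h, w = gray.shape
    radii = []
    for line in (gray[h // 2, :], gray[:, w // 2]):
        boundary = line.mean() * factor  # boundary derived from the mean
        radii.append(int((line > boundary).sum()) / 2.0)
    return radii

def extract_fundus_foreground(image: np.ndarray, rotations: int = 1) -> np.ndarray:
    """End-to-end sketch of the claimed pipeline."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    # Step 1: rotate the original image to obtain rotated fundus images.
    images = [gray]
    for _ in range(rotations):
        images.append(np.rot90(images[-1]))
    # Step 2: calculate foreground radii of every image.
    radii = [r for im in images for r in centerline_radii(im)]
    # Step 3: the largest radius becomes the foreground mask radius.
    mask_radius = max(radii)
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= mask_radius ** 2
    # Step 4: extract the foreground region from the original image.
    out = image.copy()
    out[~mask] = 0
    return out
```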
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A method of fundus image processing, comprising:
rotating the original fundus image to obtain a rotated fundus image;
calculating foreground radii of the original fundus image and the rotated fundus image respectively;
determining a foreground mask based on the calculated foreground radius;
based on the foreground mask, a foreground region is extracted from the original fundus image.
2. The method as claimed in claim 1, wherein said separately calculating foreground radii of said original fundus image and said rotated fundus image comprises:
for each fundus image, extracting pixel points on a centerline of the fundus image to obtain a pixel point set;
calculating the pixel value mean value of the pixel point set;
determining a foreground pixel value boundary value based on the pixel value mean;
counting the number of pixel points in the pixel point set, which are greater than the boundary value of the foreground pixel value;
determining a foreground radius of the fundus image based on the number.
3. The method of claim 2, wherein the centerline comprises a horizontal centerline and a vertical centerline, and the foreground radius comprises a foreground horizontal radius and a foreground vertical radius.
4. The method of claim 1, wherein said determining a foreground mask based on the calculated foreground radius comprises:
selecting the foreground radius with the largest value from the calculated foreground radii as a foreground mask radius;
and generating the foreground mask according to the foreground mask radius and the original fundus image.
5. The method according to one of claims 1 to 4, wherein said rotating the original fundus image to obtain a rotated fundus image comprises:
rotating the original fundus image multiple times in succession by a preset rotation angle to obtain multiple rotated fundus images.
6. An apparatus for fundus image processing, comprising:
a rotation module configured to rotate the original fundus image to obtain a rotated fundus image;
a calculation module configured to calculate foreground radii of the original fundus image and the rotated fundus image, respectively;
a determination module configured to determine a foreground mask based on the calculated foreground radius;
an extraction module configured to extract a foreground region from the original fundus image based on the foreground mask.
7. The apparatus of claim 6, wherein the computing module is further configured to:
for each fundus image, extracting pixel points on a centerline of the fundus image to obtain a pixel point set;
calculating the pixel value mean value of the pixel point set;
determining a foreground pixel value boundary value based on the pixel value mean;
counting the number of pixel points in the pixel point set, which are greater than the boundary value of the foreground pixel value;
determining a foreground radius of the fundus image based on the number.
8. The apparatus of claim 7, wherein the centerline comprises a horizontal centerline and a vertical centerline, and the foreground radius comprises a foreground horizontal radius and a foreground vertical radius.
9. The apparatus of claim 6, wherein the determination module is further configured to:
selecting the foreground radius with the largest value from the calculated foreground radii as a foreground mask radius;
and generating the foreground mask according to the foreground mask radius and the original fundus image.
10. The apparatus according to any one of claims 6-9, wherein the rotation module is further configured to:
rotating the original fundus image multiple times in succession by a preset rotation angle to obtain multiple rotated fundus images.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202011021576.8A 2020-09-25 Fundus image processing method, fundus image processing device, fundus image processing apparatus, and fundus image storage medium Active CN112102174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011021576.8A CN112102174B (en) 2020-09-25 Fundus image processing method, fundus image processing device, fundus image processing apparatus, and fundus image storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011021576.8A CN112102174B (en) 2020-09-25 Fundus image processing method, fundus image processing device, fundus image processing apparatus, and fundus image storage medium

Publications (2)

Publication Number Publication Date
CN112102174A true CN112102174A (en) 2020-12-18
CN112102174B CN112102174B (en) 2024-05-14



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216957A (en) * 2008-10-09 2011-10-12 埃西斯创新有限公司 Visual tracking of objects in images, and segmentation of images
WO2018023917A1 (en) * 2016-07-30 2018-02-08 上海联影医疗科技有限公司 Method and system for extracting lower limb blood vessel
CN109544580A (en) * 2018-11-15 2019-03-29 武汉大势智慧科技有限公司 One kind is based on background automatic separation method before rotary taking image
CN110717919A (en) * 2019-10-15 2020-01-21 阿里巴巴(中国)有限公司 Image processing method, device, medium and computing equipment
CN111429380A (en) * 2020-04-08 2020-07-17 北京海益同展信息科技有限公司 Image correction method and device, electronic equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RUDIGER BOCK: "Classifying Glaucoma with Image-Based Features from Fundus Photographs", Springer Link, 31 December 2007 (2007-12-31) *
JIANG Xiangang; XIONG Juan; QIU Li; FAN Deying: "Enhancement filtering algorithm for retinal vessel images based on Hessian features", Journal of East China Jiaotong University, no. 03, 15 June 2013 (2013-06-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012180A (en) * 2021-02-08 2021-06-22 北京百度网讯科技有限公司 Image forming device determining method, device, equipment and storage medium
CN113012180B (en) * 2021-02-08 2023-08-11 北京百度网讯科技有限公司 Image forming apparatus determining method, device, apparatus and storage medium

Similar Documents

Publication Publication Date Title
CN111967538B (en) Feature fusion method, device and equipment applied to small target detection and storage medium
CN111914628B (en) Training method and device of face recognition model
Morales et al. Automatic detection of optic disc based on PCA and mathematical morphology
CN111754481B (en) Fundus image recognition method, fundus image recognition device, fundus image recognition apparatus, and fundus image recognition storage medium
CN112541924B (en) Fundus image generation method, fundus image generation device, fundus image generation apparatus, and fundus image storage medium
WO2019180742A1 (en) System and method for retinal fundus image semantic segmentation
CN112883962B (en) Fundus image recognition method, fundus image recognition apparatus, fundus image recognition device, fundus image recognition program, and fundus image recognition program
CN111914629A (en) Method, apparatus, device and storage medium for generating training data for face recognition
Khandouzi et al. Retinal vessel segmentation, a review of classic and deep methods
CN112184690A (en) Coronary vessel trend prediction method, prediction model training method and device
CN116228867B (en) Pose determination method, pose determination device, electronic equipment and medium
CN112149634A (en) Training method, device and equipment of image generator and storage medium
CN112967355A (en) Image filling method and device, electronic device and medium
CN113553909A (en) Model training method for skin detection and skin detection method
CN106780404A (en) Image enchancing method, device and angiography equipment
CN112116525A (en) Face-changing identification method, device, equipment and computer-readable storage medium
Tavakoli et al. Unsupervised automated retinal vessel segmentation based on Radon line detector and morphological reconstruction
CN111861999A (en) Detection method and device for artery and vein cross compression sign, electronic equipment and readable storage medium
CN111914630A (en) Method, apparatus, device and storage medium for generating training data for face recognition
CN111523467B (en) Face tracking method and device
CN112508811A (en) Image preprocessing method, device, equipment and storage medium
CN112102174B (en) Fundus image processing method, fundus image processing device, fundus image processing apparatus, and fundus image storage medium
CN111951214A (en) Method and device for segmenting readable area in image, electronic equipment and storage medium
Jana et al. A semi-supervised approach for automatic detection and segmentation of optic disc from retinal fundus image
CN112102174A (en) Fundus image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant