CN115546883B - Fundus image processing system - Google Patents

Fundus image processing system

Info

Publication number
CN115546883B
CN115546883B (application CN202211381593.1A)
Authority
CN
China
Prior art keywords
fundus
area
groups
group
pictures
Prior art date
Legal status
Active
Application number
CN202211381593.1A
Other languages
Chinese (zh)
Other versions
CN115546883A (en)
Inventor
Shen Ting (沈婷)
Zheng Qingqing (郑青青)
Hong Chaoyang (洪朝阳)
Wang Liqiang (王立强)
Fang Chao (方超)
Current Assignee
Hangzhou Shangmao Photoelectric Technology Co., Ltd.
Zhejiang Provincial People's Hospital
Original Assignee
Hangzhou Shangmao Photoelectric Technology Co., Ltd.
Zhejiang Provincial People's Hospital
Priority date
Filing date
Publication date
Application filed by Hangzhou Shangmao Photoelectric Technology Co., Ltd. and Zhejiang Provincial People's Hospital
Priority: CN202211381593.1A
Publication of CN115546883A
Application granted
Publication of CN115546883B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Abstract

The invention provides a fundus image processing system, belonging to the technical field of fundus photography. The system comprises: a plurality of groups of illumination light sources, distributed at intervals and turned on and off in sequence in different time periods; a photographing unit for photographing the fundus under the different groups of illumination light sources to obtain a plurality of groups of fundus photos, each group having a first region corresponding to the stray light and reflected light produced by its group of illumination light sources; and an image processing unit for detecting feature points in the groups of fundus photos and matching the groups based on those feature points to obtain homography matrices between them. The image processing unit is further configured to acquire the position of the first region in one group of fundus photos, to acquire, according to the homography matrix, the position of the corresponding second region in another group, and to cover the first region with the pixel values of the second region to obtain a target fundus photo, thereby eliminating the influence of corneal reflection and motion blur on fundus imaging quality.

Description

Fundus image processing system
Technical Field
The invention belongs to the technical field of fundus photography, and particularly relates to a fundus image processing system.
Background
Fundus photography equipment is widely used in screening for ophthalmic diseases, and its imaging quality directly affects accurate diagnosis. Current equipment has two main problems: first, imaging quality is degraded by reflections of the illumination light source; second, it is degraded by eye movement. The illumination light is reflected by the cornea, ultimately forming highly bright regions on the photo that can cover the actual fundus image; the illumination light is also reflected at the lens surfaces, producing ghost images (stray light). Eye movement, in turn, mainly manifests as motion blur, which is essentially the superposition of exposures of the fundus image as the eyeball moves during capture, further degrading imaging quality.
However, at present the influence of stray light on fundus photos is mostly addressed by modifying the fundus camera itself. For example, the illumination source and fixation source may be arranged coaxially and both converted to linearly polarized light by an incident polarizer, with an exit polarizer before the contact lens, so that stray light from the ocular lens and cornea is blocked from the subsequent imaging path. As another example, several polarizers, positioning devices, and the like may be added to the fundus camera, and the distance between the focusing group and a polarizer adjusted to reduce corneal stray light during photographing. All of these solutions add optical elements to the fundus camera to reduce stray light, but they do not eliminate it, so the resulting fundus photos are still affected to some degree.
Therefore, in order to solve the above technical problems, the present invention proposes a new fundus image processing system without adding an additional optical element.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art by providing a fundus image processing system.
The present invention provides a fundus image processing system, comprising:
a plurality of groups of illumination light sources, distributed at intervals and turned on and off in sequence in different time periods;
a photographing unit for photographing the fundus under the different groups of illumination light sources to obtain a plurality of groups of fundus photos, each group of fundus photos having a first region corresponding to the stray light and reflected light produced by its group of illumination light sources;
an image processing unit for detecting feature points in the plurality of groups of fundus photos and matching the groups based on the feature points to obtain homography matrices between the groups of fundus photos;
the image processing unit being further configured to acquire the position of a first region in one group of fundus photos, to acquire, according to the homography matrix, the position of the corresponding second region in another group of fundus photos, and to cover the first region with the pixel values of the second region to obtain a target fundus photo.
Optionally, the image processing unit includes a position acquisition module and a synthesis module; wherein:
the position acquisition module is used for acquiring a first coordinate of the first region in one group of fundus photos and acquiring, according to the homography matrix, the second coordinate corresponding to the first coordinate in the other groups of fundus photos;
and the synthesis module is used for applying bilinear interpolation to the pixel values at the second coordinate, blending the interpolated values with the pixel values at the first coordinate according to weights, and covering the first region with the pixel values of the second region of the other groups of fundus photos.
Optionally, the first region comprises a stray light and reflected light distribution region together with its adjacent region; and
the first coordinates are coordinates of four vertexes of the first area.
Optionally, the image processing unit further includes at least one of a first weight calculation module, a second weight calculation module, and a third weight calculation module; wherein:
the first weight calculation module is used for obtaining the blending weight from the difference between the pixel values of the first region in one group of fundus photos and the pixel values of the second regions in the other groups, with the maximum and minimum values subtracted;
the second weight calculation module is used for obtaining the blending weight by subtracting the imaging brightness of a preset calibration plate from the brightness of each pixel of the fundus photo;
the third weight calculation module is configured to obtain the blending weight from the ratio of the pixel brightness values of the adjacent area in the first region of one group of fundus photos to those of the adjacent area in the second region of the other groups.
Optionally, the third weight calculation module calculates the blending weight with the following relation:
w(x, y) = Σ( I1(x, y) / I2(x1, y1) ) / n
where w(x, y) is the weight coefficient of the pixel at coordinates (x, y);
n is the number of pixels in the adjacent area;
I1(x, y) is the pixel brightness at coordinates (x, y) of the adjacent area in the first region of the fused group of fundus photos;
I2(x1, y1) is the pixel brightness at coordinates (x1, y1) of the corresponding adjacent area in the other, fusing groups of fundus photos.
Optionally, the image processing unit further includes a detection module and a matching module; wherein:
the detection module detects the feature points using the ORB or SIFT algorithm;
and the matching module matches the feature points using a KNN or RANSAC algorithm.
Optionally, the system further includes a plurality of modulation switches corresponding to the plurality of groups of illumination light sources, and the plurality of modulation switches are further connected to the photographing unit;
and the modulation switch is used for modulating the on-off of the illumination light source and synchronously triggering the on-off of the photographing unit.
Optionally, each group of the illumination light sources includes a plurality of sub light sources, and the plurality of sub light sources are distributed annularly at equal intervals.
Optionally, the plurality of sub-light sources of different groups are alternately arranged at intervals.
Optionally, the illumination light source includes a visible light source and an infrared light source.
The invention provides a fundus image processing system in which a plurality of groups of illumination light sources are modulated at different times and different spatial frequencies to disperse the energy of the illumination light. The groups of fundus photos obtained under this modulation are then synthesized by the image processing unit, the reflected light and stray light regions of one group being blended with the corresponding regions of the other groups to obtain a target fundus photo, thereby eliminating the influence of corneal reflection on fundus imaging quality.
Drawings
Fig. 1 is a schematic configuration diagram of a fundus image processing system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the spatial distribution of multiple groups of illumination sources according to an embodiment of the present invention;
FIG. 3 is a schematic view of a first set of fundus photographs in accordance with one embodiment of the present invention;
FIG. 4 is a schematic view of a second set of fundus photographs in accordance with one embodiment of the present invention;
FIG. 5 is a schematic diagram of computing homography matrices from matched feature points in two sets of fundus photographs, in accordance with one embodiment of the present invention;
FIG. 6 is a schematic view of a first region in a first set of fundus photographs in accordance with one embodiment of the present invention;
FIG. 7 is a schematic diagram of a second region of a second set of fundus pictures corresponding to the first region of the first set of fundus pictures in accordance with one embodiment of the present invention;
FIG. 8 is a schematic diagram of an aliased fused portion in a first region in a first set of fundus pictures according to an embodiment of the present invention;
fig. 9 is a process diagram of the fundus image processing system according to the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It should be apparent that the described embodiments are only some of the embodiments of the present invention, and not all of them. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without inventive step, are within the scope of protection of the invention.
Unless otherwise specifically defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of "including" or "comprising" and the like in this disclosure does not limit the presence or addition of any number, step, action, operation, component, element, and/or group thereof or does not preclude the presence or addition of one or more other different numbers, steps, actions, operations, components, elements, and/or groups thereof. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number and order of the indicated features.
As shown in figs. 1 to 9, the present invention provides a fundus image processing system 100, comprising: a plurality of groups of illumination light sources 110, a photographing unit 120, and an image processing unit 130. The groups of illumination light sources 110 are distributed at intervals and turned on and off in sequence in different time periods. The photographing unit 120 photographs the fundus under the different groups of illumination light sources to obtain a plurality of groups of fundus photos, each group having a first region corresponding to the stray light and reflected light produced by its group of illumination light sources. The image processing unit 130 detects feature points in the groups of fundus photos and matches the groups based on those feature points to obtain homography matrices between them; it further acquires the position of the first region in one group of fundus photos, acquires, according to the homography matrix, the position of the corresponding second region in another group, and covers the first region with the pixel values of the second region to obtain the target fundus photo.
This embodiment disperses the energy of the illumination light by modulating the groups of illumination light sources at different times and different spatial frequencies, and synthesizes the resulting groups of fundus photos in the image processing unit, eliminating the influence of corneal reflection and motion blur on fundus imaging quality without adding optical elements to the fundus photographing equipment.
Specifically, in order to image different regions of the fundus, each group of illumination light sources includes a plurality of sub-light sources distributed annularly at equal intervals, so that the groups are separated in the spatial distribution of the illumination and fundus photos with different reflected-light regions are obtained.
Further, when each group of illumination light sources includes a plurality of sub-light sources, the sub-light sources of different groups are arranged alternately at intervals, so that each group of illumination light sources is modulated at a different spatial position.
In some preferred embodiments there are two groups of illumination light sources, each comprising three sub-light sources: as shown in fig. 2, the illumination light sources 110 are spatially arranged in two groups of three, the circles representing the first group of illumination light sources 111 and the squares the second group 112. Of course, three, four, or more groups may be provided, each group may include a different number of sub-light sources, and the sub-light sources of different groups are arranged alternately at intervals.
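The annular, interleaved arrangement described above can be sketched with a few lines of geometry. This is an illustrative layout only, since the patent specifies equal annular spacing and alternation but no concrete coordinates; the function name and the convention that group membership alternates as `k % n_groups` are assumptions for the sketch.

```python
import math

def ring_positions(n_groups: int, subs_per_group: int, radius: float):
    """Place n_groups * subs_per_group sub-light sources on a ring at equal
    angular intervals, alternating group membership so the groups interleave.
    Returns a list of (group_index, x, y) tuples. Illustrative only."""
    total = n_groups * subs_per_group
    step = 2.0 * math.pi / total           # equal angular spacing on the ring
    positions = []
    for k in range(total):
        angle = k * step
        group = k % n_groups               # alternate groups around the ring
        positions.append((group, radius * math.cos(angle),
                          radius * math.sin(angle)))
    return positions
```

With two groups of three (the preferred embodiment), this yields six positions 60 degrees apart, with each group's own sub-light sources 120 degrees apart.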
In this embodiment, the groups of light sources are spatially spaced apart and switched on and off in sequence in different time periods. Under the different illuminations, the stray-light and reflected-light regions formed on the resulting groups of fundus photos therefore lie at different positions, which makes it convenient to synthesize the different groups of photos and eliminate the influence of stray light and reflection.
It should be noted that the illumination light sources of this embodiment include both visible and infrared light sources; that is, an infrared light source may also be selected at the same time to provide automatic pupil alignment and working-distance positioning for the human eye. Infrared light has very little effect on pupil contraction and on (light-avoiding) eyeball movement, and allows a fundus photo to be obtained in the dark without mydriasis.
Furthermore, the total illumination time of the groups of illumination light sources is kept within 0.1 second; the illumination time of this embodiment is thus short and the imaging speed high. High-speed shooting avoids the image blur caused by eyeball motion during a long exposure; raising the frame rate eliminates motion blur, keeps the fundus photos acquired under the different illumination sources consistent, and increases the sharpness of the image.
Furthermore, as shown in fig. 1, the present embodiment further includes a plurality of modulation switches 140 corresponding to the plurality of groups of illumination light sources 110, one end of each modulation switch is connected to each group of illumination light sources, the other end of each modulation switch is connected to the photographing unit, and the photographing unit is further connected to the image processing unit, wherein each modulation switch is configured to modulate on and off of the corresponding group of illumination light sources 110, and synchronously trigger the photographing unit 120 to photograph under different groups of illumination light sources.
It should be noted that, because the eyeball moves during photographing, the fundus images may also change in position on different groups of fundus pictures, which requires alignment of the positions of the groups of fundus pictures, and alignment may be performed based on the feature points on the fundus pictures, and the specific process may be implemented by the image processing unit.
Specifically, the image processing unit includes a feature point detection module, a matching module, a position acquisition module, and a synthesis module. The feature point detection module detects feature points in the groups of fundus photos; the matching module matches the feature points between photos and computes the homography matrices between the groups from the matched points; the position acquisition module acquires a first coordinate of the first region in one group of fundus photos and, according to the homography matrix, the second coordinate corresponding to it in the other groups; and the synthesis module applies bilinear interpolation to the pixel values at the second coordinate, blends the interpolated values with the pixel values at the first coordinate according to the weights, and covers the first region with the pixel values of the second region of the other groups, eliminating the stray light and reflected light to obtain the target fundus photo.
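The core of the matching step is estimating a homography from matched feature-point pairs. As a minimal sketch, the direct linear transform (DLT) below recovers the 3x3 homography from four or more correspondences in pure NumPy; in the system as described, the pairs would come from ORB/SIFT detection and KNN/RANSAC matching, which are not reimplemented here, and the function name is illustrative.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H @ src via the direct
    linear transform, given >= 4 matched feature-point pairs (x, y)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear constraints per correspondence on the 9 entries of H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # The smallest right singular vector of A is the homography (up to scale).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                     # normalise so H[2, 2] == 1
```

For eye motion between two exposures taken within 0.1 s, the recovered homography is close to a small translation plus rotation, which is exactly what the alignment step needs.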
It should be understood that, under the influence of the reflected illumination light, a first region corresponding to stray light and reflected light forms on each group of fundus photos. To obtain a more accurate target fundus photo, the first region may include the stray light and reflected light distribution region together with its adjacent region (as shown in fig. 6, the square is the first region, the circle inside it is the stray light and reflected light distribution region, and the area outside the circle but inside the square is the adjacent region); that is, the distribution region is dilated, the whole dilated region is taken as the first region, and the four vertices of the first region are taken as the first coordinates.
It should be noted that, because the stray light and the corneal reflection of the illumination are fixed in the photo coordinate system, feature points lying in the stray light and corneal reflection regions produced by a group of illumination light sources can be excluded when selecting feature points. Feature points should be selected with an algorithm that is invariant to translation, rotation, and scale; various such algorithms exist, for example the ORB or SIFT algorithm.
It should be further noted that the matching module of this embodiment may match the feature points using a KNN or RANSAC algorithm, RANSAC being a common choice.
Specifically, taking one group of fundus photos as the first group, the positions corresponding to the four vertices of its first region on the other group of fundus photos can be obtained through the homography matrix between the two groups. That is, the region of the other group corresponding to the stray light and reflected light region of the first group is obtained, and this high-quality region is covered over the first group to obtain the target fundus photo.
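Mapping the four vertices through the homography is a standard projective transform: lift each (x, y) to homogeneous coordinates, multiply by H, and divide by the third component. A minimal sketch (function name assumed for illustration):

```python
import numpy as np

def map_region_vertices(H, vertices):
    """Map the vertices of the first region (first photo's coordinates)
    through homography H to locate the second region in the other photo.
    vertices: iterable of (x, y); returns an (N, 2) array."""
    pts = np.asarray(vertices, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = (H @ homog.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian
```

For a pure-translation homography the mapped quadrilateral is just the shifted first region; under a general homography it may be a non-rectangular quadrilateral, which is why the synthesis step samples it with bilinear interpolation.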
It should be understood that the target fundus photo of this embodiment is still the first group of fundus photos: only the region of the other group corresponding to the region of the first group where stray light and reflected light are produced is fused into the first group. That is, the first region of the first group is replaced by the second region of the other group, realizing the synthesis of the groups of fundus photos and eliminating the stray light and corneal reflection present in the first group.
Furthermore, the image processing unit of this embodiment further includes at least one of a first, a second, and a third weight calculation module. The first weight calculation module obtains the blending weight from the difference between the pixel values of the first region in one group of fundus photos and the pixel values of the second regions in the other groups, with the maximum and minimum values subtracted. The second weight calculation module obtains the blending weight by subtracting the imaging brightness of a preset calibration plate from the brightness of each pixel of the fundus photo. The third weight calculation module obtains the blending weight from the ratio of the pixel brightness values of the adjacent area in the first region of one group of fundus photos to those of the adjacent area in the second regions of the other groups. That is, the method of calculating the weight is not particularly limited: the camera may be calibrated in advance, the blending weight may be obtained in another way from the characteristics of the point light sources, any one of these methods may be used, or they may be combined.
Specifically, the second weight calculation module pre-calibrates the camera as follows: a calibration plate of uniform brightness is placed in front of the camera, the illumination light sources are modulated, and groups of photos are taken synchronously to determine the distribution of stray light and corneal reflection on the photos. From the stray light brightness at each pixel, the imaging brightness of the normal calibration plate is subtracted to obtain the weight with which that pixel is blended with the high-quality pixel.
Further, the first weight calculation module obtains the corresponding weights from the difference between the pixel values of the first region in the first group of fundus photos and the corresponding pixel values of the second region obtained from the second group, with the maximum and minimum values subtracted.
When the first group of fundus photos, which carries the reflected light and stray light, is fused, it must be considered that the illumination conditions of the fused group and of the fusing group are not identical; otherwise the final image shows a conspicuous patch. It is therefore necessary to estimate the exposure intensity of the pixels around the fused region and around the region to be fused, and to use this as the fusion weight.
Therefore, the weight of the alpha blending can also be obtained with the following algorithm, i.e. the third weight calculation module proceeds as follows: the brightness ratio is estimated from the pixels of the adjacent area (the shaded area of fig. 8) around the stray light and reflected light distribution region in the first group of fundus photos and the corresponding adjacent-area pixels of the second group, and the brightness of the corresponding positions of the second group is then adjusted to be consistent with the first group. The specific relation is:
w(x, y) = Σ( I1(x, y) / I2(x1, y1) ) / n
where w(x, y) is the weight coefficient of the pixel at coordinates (x, y);
n is the number of pixels in the adjacent area;
I1(x, y) is the pixel brightness at first coordinate (x, y) in the first region (the fused region) of the fused group of fundus photos;
I2(x1, y1) is the pixel brightness of the other, fusing group of fundus photos at the second coordinate (x1, y1) of the second region (the fusing region).
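The relation above is simply the mean of the per-pixel brightness ratios over the n adjacent-area pixels. A direct sketch (function name assumed; the patent gives only the formula):

```python
def blending_weight(i1_neighbors, i2_neighbors):
    """w = (1/n) * sum(I1 / I2) over the n pixels of the adjacent area,
    where i1_neighbors are adjacent-area brightnesses in the fused photo
    and i2_neighbors the corresponding brightnesses in the fusing photo."""
    n = len(i1_neighbors)
    return sum(a / b for a, b in zip(i1_neighbors, i2_neighbors)) / n
```

If the fusing photo's neighborhood is, say, uniformly twice as bright as the fused photo's, the weight comes out as 0.5, scaling the second-region pixels down to match before they cover the first region.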
The alpha fusion adopted in this embodiment can give each pixel a different weight according to the calibration; with this processing, the brightness of the other, fusing groups of fundus photos is made consistent with that of the fused first group, and no patch artifact is produced.
Through the homography matrix, this embodiment obtains the positions in the other group of fundus photos corresponding to the four vertices; since that group carries no stray light or corneal reflection from the first group's illumination, its pixels can be regarded as high-quality fundus imaging pixels. These high-quality image regions are synthesized into the target fundus photo by a warp affine algorithm and alpha fusion, eliminating the stray light and corneal reflection of the first group and further improving imaging quality.
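The two pixel-level operations the synthesis relies on, bilinear sampling of the warped second-region coordinates and alpha blending of the brightness-adjusted values, can be sketched as below. These are generic textbook implementations, not the patent's own code, and the function names are illustrative.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a 2-D image at fractional coordinates (x, y) by bilinear
    interpolation, as applied to the second-region pixel values after
    they are mapped through the homography."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, img.shape[1] - 1)     # clamp at the image border
    y1 = min(y0 + 1, img.shape[0] - 1)
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bot

def alpha_blend(i1, i2, alpha):
    """Blend a first-region pixel i1 with the (brightness-adjusted)
    second-region pixel i2 using blending coefficient alpha in [0, 1]."""
    return alpha * i1 + (1 - alpha) * i2
```

In the interior of the stray-light region alpha would be near 0 (the second-region pixel dominates), rising toward 1 at the border so the covered patch feathers smoothly into the surrounding first-group photo.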
Still further, the photographing unit of the present embodiment may be a camera, and the image processing unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit, for example, the image processing unit may be a computer of a system or apparatus (or a device such as a CPU or MPU) that reads out and executes a program recorded on a storage device to perform the functions of the above-described embodiments.
It should be noted that the embodiments of the system described in the present invention are merely illustrative, for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The structure and processing method of the fundus image processing system will be further described with reference to specific embodiments:
example 1
The fundus image processing system of this example includes two groups of illumination light sources (each group including three sub-light sources), multiple groups of modulation switches, a photographing unit, and an image processing unit, as shown in figs. 1 to 9.
Specifically, as shown in fig. 2, the illumination light sources are spatially divided into two groups: the circles represent the arrangement of the first group of illumination light sources 111, and the squares represent the arrangement of the second group of illumination light sources 112. When the two groups are modulated in different time periods, the second group of illumination light sources 112 (squares) is turned off, the first group 111 (circles) is turned on, and the photographing unit is triggered synchronously, so that it obtains a first group of fundus pictures A under the first group of illumination light sources, as shown in fig. 3. Then the first group of illumination light sources 111 is turned off, the second group 112 is turned on, and the photographing unit is triggered synchronously again, so that it obtains a second group of fundus pictures B under the second group of illumination light sources, as shown in fig. 4.
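The alternating-illumination capture sequence described above can be sketched as a simple control loop; `set_group` and `trigger_camera` are hypothetical stand-ins for the modulation-switch and camera drivers, which the patent does not specify:

```python
from typing import Callable, List

def capture_multiplexed(groups: List[str],
                        set_group: Callable[[str, bool], None],
                        trigger_camera: Callable[[], object]) -> List[object]:
    """Time-multiplexed capture: for each illumination group, light only
    that group (all others off) and trigger the camera synchronously,
    collecting one fundus picture per group."""
    frames = []
    for active in groups:
        for g in groups:
            set_group(g, g == active)    # only the active group is lit
        frames.append(trigger_camera())  # synchronized exposure
    return frames
```

With two groups this yields exactly the A-then-B sequence of the embodiment; more groups extend the same pattern.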
Specifically, as shown in fig. 3, in the first group fundus picture A, the first group illumination light sources 111 generate corneal reflection light whose corresponding positions are represented by circles (the circles in the figure represent only the positions of the reflected light, not its shape).
Further, as shown in fig. 4, in the second group fundus image B, the second group illumination light sources 112 generate corneal reflection light whose corresponding position is represented by a square (the square in the figure represents only the position of the reflection light, not the shape of the reflection light).
It should be noted that, because the eyeball moves, the fundus imaging positions differ between the two shots, so the pixel mapping between the two fundus pictures needs to be calculated; specifically, the image processing unit performs this calculation to synthesize the two fundus pictures into the target fundus picture.
Specifically, feature point detection is performed on the first group of fundus picture a and the second group of fundus picture B based on the image processing unit, feature points of the two groups of fundus pictures are matched, and homography matrices of the two groups of fundus pictures are calculated according to the matched feature points, as shown in fig. 5.
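Mapping coordinates between the two pictures through the homography matrix amounts to a matrix multiply followed by a perspective divide. A minimal sketch, assuming the 3x3 matrix H has already been estimated from the matched feature points (e.g. via ORB features and RANSAC, as the claims list):

```python
import numpy as np

def map_points(H, pts):
    """Map (x, y) pixel coordinates from one fundus picture into the other
    through a 3x3 homography H. pts is an (N, 2) array of first coordinates;
    returns the (N, 2) second coordinates after the perspective divide."""
    pts = np.asarray(pts, dtype=np.float64)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide
```

Applying this to the four vertices of a first area directly yields the second coordinates described in the following paragraph.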
Furthermore, in the first group fundus picture A, the pre-calibrated positions of the corneal reflected light are slightly enlarged to form the first areas, and the coordinates of the four vertices of each first area are taken as the first coordinates. As shown in fig. 6, in the first group fundus picture A the boxes around the three corneal reflection points are the first areas, and the four vertices of each box are its first coordinates; that is, the three sub-light sources of the first group of illumination light sources 111 correspond to three first areas on the first group fundus picture. The four vertex coordinates of each first area are then transformed through the homography matrix to obtain the positions of the four vertices of the corresponding second area in the second group fundus picture B, i.e. the second coordinates. The second-area pixels of the second group fundus picture B correspond exactly to the image content hidden by the reflected light in the first areas of picture A; the areas indicated by the arrows on the second group fundus picture B in fig. 7, i.e. the three larger boxes, are the three second areas.
It should be understood that, once the pixel contents of the second areas formed as described above in the second group fundus picture B are overlaid onto the first areas of the first group fundus picture by affine transformation, the highlight areas reflected by the cornea in picture A are eliminated; that is, the three second areas indicated by the arrows on the second group fundus picture in fig. 7 are overlaid onto the three first areas on the first group fundus picture in fig. 6.
Specifically, the elimination proceeds in two steps. First, the first coordinates of the pixels in a first area of the first group fundus picture A are taken one by one, and the corresponding second coordinates in the second group fundus picture B are computed through the homography matrix. Because these coordinates may contain fractional parts, bilinear interpolation over the 4 pixel values surrounding each second coordinate in picture B is needed before the result can be mapped back onto the pixel values of picture A.
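The bilinear interpolation step can be sketched as follows; this is the standard four-neighbour formula, with image-bounds handling omitted for brevity:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional coordinates (x, y) by bilinear interpolation
    of the 4 surrounding pixel values, as needed when homography-mapped
    second coordinates contain decimal parts."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    # weight each neighbour by its opposing fractional area
    return (img[y0, x0] * (1 - fx) * (1 - fy) +
            img[y0, x1] * fx * (1 - fy) +
            img[y1, x0] * (1 - fx) * fy +
            img[y1, x1] * fx * fy)
```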
It should be noted that the result of the bilinear interpolation cannot directly overwrite the corresponding pixel values in the first group of fundus photos A, or a visible seam would appear. This embodiment therefore further applies alpha blending to the interpolated values to avoid both the seam and the brightness-inconsistency phenomena: the weight transitions from the full weight of 1 for one group of fundus photos toward the full weight for the other group according to the distance from the matching rectangle. The blending weights can be obtained from the camera-calibration results; alternatively, blending weights can be obtained in other ways according to the characteristics of the point light sources, for example by normalizing the difference between the pixel values of the region in the first group fundus picture A and the pixel values at the corresponding positions in the second group fundus picture B by the difference of the maximum and minimum values. Other methods of obtaining the blending weights may also be used.
Specifically, the pixels in the stray-light and reflected-light areas of the first group fundus picture A and the stray-reflection-free pixels in the corresponding areas of the second group fundus picture B are alpha-blended and written back into picture A to obtain the target fundus picture.
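The write-back with per-pixel alpha blending can be sketched as below; the flat `src_vals`/`weights` layout matching the boolean region mask is an assumption of this sketch, and the weights may come from any of the schemes the embodiment mentions (calibration, max-min normalization, or brightness ratio):

```python
import numpy as np

def alpha_blend_region(dst, src_vals, region_mask, weights):
    """Cover the first-area pixels of dst with interpolated second-area
    values, blended per pixel: out = w*src + (1-w)*dst. Ramping w from 0
    at the region border to 1 inside it avoids a visible seam.

    dst         -- first group fundus picture (2-D array)
    src_vals    -- flat array of interpolated second-area values, ordered
                   as region_mask selects pixels
    region_mask -- boolean mask of the first area
    weights     -- flat array of per-pixel blending weights in [0, 1]
    """
    out = dst.astype(np.float64).copy()
    w = np.asarray(weights, dtype=np.float64)
    src = np.asarray(src_vals, dtype=np.float64)
    out[region_mask] = w * src + (1 - w) * out[region_mask]
    return out
```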
The alpha-blending weights can be computed with the following algorithm: the ratio between the brightness of the adjacent-area pixels (the shaded area in fig. 8) bordering the stray-light and reflected-light distribution area in the first group fundus picture A and the brightness of the corresponding adjacent-area pixels in the second group fundus picture B is estimated, and the brightness at the corresponding positions of picture B is then adjusted to be consistent with that of picture A. The target fundus picture thus obtained is free of the highlight areas and stray light caused by corneal reflection in the first group fundus picture A.
The invention provides a fundus image processing system, which has the following beneficial effects:
firstly, multiple groups of illumination light sources distributed at intervals are modulated at different times and with different spatial frequencies, so that the illumination energy is dispersed and stray light is eliminated or weakened;
secondly, the system can ensure that the image positions of the synthesized target fundus photos are consistent and do not generate distortion through the homography matrix;
thirdly, the image processing unit blends the high-quality and low-quality pixels according to their quality weights, which improves image quality and eliminates the corneal reflections of the illumination light sources without adding any extra optical element;
fourthly, the system of the invention has short total photographing time and high imaging speed, and can eliminate motion blur.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (8)

1. A fundus image processing system, comprising:
the plurality of groups of lighting sources are distributed at intervals and are sequentially turned on and turned off at different time periods;
the photographing unit is used for photographing the fundus under different groups of illumination light sources to obtain multiple groups of fundus photos, each group of the fundus photos having a first area corresponding to the stray light and reflected light generated by each group of the illumination light sources;
the image processing unit is used for detecting characteristic points of the multiple groups of fundus pictures and matching the multiple groups of fundus pictures based on the characteristic points so as to obtain homography matrixes among the groups of fundus pictures;
the image processing unit is further used for acquiring the position of a first area on one group of fundus pictures, acquiring the positions of the second areas corresponding to the first area in the other groups of fundus pictures according to the homography matrix, and covering the pixel values of the second areas onto the first area to obtain a target fundus picture;
the image processing unit includes: a position acquisition module and a synthesis module, and at least one of a first weight calculation module, a second weight calculation module and a third weight calculation module; wherein,
the position acquisition module is used for acquiring a first coordinate of a first area of one group of the fundus pictures and acquiring a second coordinate corresponding to the first coordinate of the first area in the other groups of the fundus pictures according to a homography matrix;
the synthesis module is used for carrying out bilinear interpolation processing on the pixel values of the second coordinate, carrying out aliasing fusion on the processed pixel values of the second coordinate and the pixel values of the first coordinate according to weights, and covering the pixel values of the second area of other groups of fundus pictures on the first area;
the first weight calculation module is used for obtaining the weight during aliasing by normalizing the difference between the pixel values of the first area of one group of the fundus pictures and the pixel values of the second areas of the other groups of the fundus pictures by the difference of the maximum and minimum values;
the second weight calculation module is used for obtaining the weight during aliasing by subtracting the imaged brightness of a preset calibration plate from the brightness of each pixel of the fundus picture;
the third weight calculation module is configured to obtain a weight during aliasing according to a ratio of a pixel brightness value of an adjacent area in the first area of one of the groups of fundus images to a pixel brightness value of an adjacent area in the second area of the other group of fundus images.
2. The system of claim 1, wherein the first area comprises a stray light and reflected light distribution area and an adjacent area of the stray light and reflected light distribution area; and the number of the first and second groups,
the first coordinates are coordinates of four vertexes of the first area.
3. The system of claim 1, wherein the third weight calculation module calculates the aliased weight using the following relationship:
w(x, y) = Σ( I1(x, y) / I2(x1, y1) ) / n
in the formula: w (x, y) refers to the weight coefficient of the pixel in x, y coordinates;
n is the number of pixels in the adjacent area;
I1(x, y) refers to the pixel brightness at coordinates (x, y) of the adjacent area in the first area of the one group of fundus pictures being fused;
I2(x1, y1) refers to the pixel brightness at coordinates (x1, y1) of the corresponding adjacent area of the other groups of fused fundus pictures.
5. The system of claim 1, wherein the image processing unit further comprises a detection module and a matching module; wherein,
the detection module detects the feature points by adopting an ORB algorithm or a sift algorithm;
and the matching module adopts knn algorithm or ransac algorithm to match the feature points.
5. The system of claim 1, further comprising a plurality of modulation switches corresponding to the plurality of sets of illumination sources, the plurality of modulation switches further connected to the photographing unit;
and the modulation switch is used for modulating the on-off of the illumination light source and synchronously triggering the photographing unit to be on-off.
6. The system of claim 1, wherein each group of the illumination sources comprises a plurality of sub-light sources, and the plurality of sub-light sources are distributed in an equally spaced ring.
7. The system of claim 6, wherein different groups of the plurality of sub-light sources are alternately spaced.
8. The system of claim 1, wherein the illumination source comprises a visible light source and an infrared light source.
CN202211381593.1A 2022-11-07 2022-11-07 Fundus image processing system Active CN115546883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211381593.1A CN115546883B (en) 2022-11-07 2022-11-07 Fundus image processing system


Publications (2)

Publication Number Publication Date
CN115546883A CN115546883A (en) 2022-12-30
CN115546883B true CN115546883B (en) 2023-02-28

Family

ID=84721000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211381593.1A Active CN115546883B (en) 2022-11-07 2022-11-07 Fundus image processing system

Country Status (1)

Country Link
CN (1) CN115546883B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111166283A (en) * 2019-12-24 2020-05-19 深圳盛达同泽科技有限公司 Fundus shooting system
CN111449620A (en) * 2020-04-30 2020-07-28 上海美沃精密仪器股份有限公司 Full-automatic fundus camera and automatic photographing method thereof
CN112220447A (en) * 2020-10-14 2021-01-15 上海鹰瞳医疗科技有限公司 Fundus camera and fundus image shooting method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6422629B2 (en) * 2012-11-06 2018-11-14 株式会社ニデック Fundus photographing device
WO2014207901A1 (en) * 2013-06-28 2014-12-31 キヤノン株式会社 Image processing device and image processing method
CN108324240A (en) * 2018-01-22 2018-07-27 深圳盛达同泽科技有限公司 Fundus camera
US10582853B2 (en) * 2018-03-13 2020-03-10 Welch Allyn, Inc. Selective illumination fundus imaging
US11179035B2 (en) * 2018-07-25 2021-11-23 Natus Medical Incorporated Real-time removal of IR LED reflections from an image
CN115969309A (en) * 2019-12-01 2023-04-18 深圳硅基智能科技有限公司 Eyeground camera optical system and eyeground camera capable of guiding visual line direction of eye to be detected easily




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant