CN115908274A - Device, equipment and medium for detecting focus - Google Patents


Publication number
CN115908274A
CN115908274A (application CN202211325517.9A)
Authority
CN
China
Prior art keywords
lesion
detection
module
image
focus
Prior art date
Legal status
Pending
Application number
CN202211325517.9A
Other languages
Chinese (zh)
Inventor
王杰
胡建斌
黄海于
杨刚
秦小林
任辉
梁越
陈萌
贾立
Current Assignee
Chengdu East District Aier Eye Hospital Co ltd
Original Assignee
Chengdu East District Aier Eye Hospital Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu East District Aier Eye Hospital Co ltd filed Critical Chengdu East District Aier Eye Hospital Co ltd
Priority to CN202211325517.9A priority Critical patent/CN115908274A/en
Publication of CN115908274A publication Critical patent/CN115908274A/en
Pending legal-status Critical Current

Abstract

The application discloses a device, equipment and medium for lesion detection, relating to the field of medical technology. The device comprises: an acquisition module for acquiring an image of a user's part to be detected through image acquisition equipment; a first obtaining module for obtaining an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; a second obtaining module for obtaining the positions of blood vessels in the image; and a detection module for determining whether the lesion detection result is the first lesion or the second lesion according to the anchor frame, or according to the relationship between the anchor frame and the blood vessels. By first obtaining the anchor frame containing the first and second lesions and then further discriminating between them according to the anchor frame or its relationship to the blood vessels, the device distinguishes the lesions and improves the accuracy of lesion detection.

Description

Device, equipment and medium for lesion detection
Technical Field
The present application relates to the field of medical technology, and in particular, to a device, an apparatus, and a medium for lesion detection.
Background
Diabetic retinopathy (DR) is one of the most serious microvascular complications of diabetes. In its early stage, DR has no obvious symptoms and does not easily attract patients' attention. However, as the condition develops, lesions such as microaneurysms and bleeding spots appear on the patient's fundus, and vision gradually begins to decline; if the optimal treatment period is missed, irreversible vision damage occurs, and in severe cases blindness may result. Early intervention and diagnosis of DR can reduce complications, so early detection of fundus lesions in DR patients is important.
In the actual detection process, bleeding points are similar both to microaneurysms and to retinal tissue, and the differences between them are small, so bleeding points are easily misdetected, which reduces the accuracy of lesion detection.
How to distinguish these lesions and improve the accuracy of lesion detection is a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
An object of the present application is to provide a lesion detection device, equipment and medium for distinguishing lesions, thereby improving the accuracy of lesion detection.
In order to solve the above technical problem, the present application provides a lesion detection device, including:
an acquisition module for acquiring an image of a user's part to be detected through image acquisition equipment;
a first obtaining module for obtaining an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm;
a second obtaining module for obtaining the positions of blood vessels in the image;
and a detection module for determining whether the lesion detection result is the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessels.
Preferably, the first obtaining module includes:
a second obtaining module, configured to obtain all target frame sets and the score corresponding to each target frame in the sets through a Faster R-CNN detection network;
the selecting module is used for selecting the candidate frame with the largest score from the target frames;
a removing and placing module for removing the candidate frame with the largest score from the target frame set and placing it into the final detection frame result set, so as to obtain an anchor frame containing the first lesion and the second lesion from the final detection frame result set.
Preferably, the second obtaining module includes:
a third obtaining module, configured to obtain all target frame sets and the initial score corresponding to each target frame through the Faster R-CNN detection network;
a fourth obtaining module, configured to obtain an overlap ratio between each target frame and the candidate frame with the largest score;
a fifth obtaining module, configured to, when an overlap ratio of the target frame and the candidate frame with the largest score is greater than a first threshold, obtain a score corresponding to the target frame according to the initial score corresponding to the target frame and an intersection ratio of the target frame and the candidate frame with the largest score;
and the module is used for taking the initial score corresponding to the target frame as the score of the target frame when the overlapping rate of the target frame and the candidate frame with the maximum score is less than or equal to the first threshold.
Preferably, the detection module comprises:
a sixth obtaining module, configured to obtain an area of the anchor frame;
the judging module is used for judging whether the area is larger than a second threshold value or not;
if yes, triggering a first determining module, and if not, triggering a second determining module;
the first determination module is configured to determine that the result of the lesion detection is the first lesion;
the second determination module is configured to determine that the result of the lesion detection is the second lesion.
Preferably, the detection module comprises:
the first segmentation module is used for segmenting the image through a Res_Unet network so as to obtain a blood vessel segmentation image;
the first conversion module is used for converting the blood vessel segmentation image into a binary image;
a seventh obtaining module, configured to obtain the number of pixels with value 255 in the anchor frame region of the binary image;
a third determining module, configured to determine that the result of the lesion detection is the first lesion if the number is 0;
a fourth determination module for determining that the result of the lesion detection is the second lesion if the number is not 0.
Preferably, the detection module comprises:
the second segmentation module is used for segmenting the image through a Res_Unet network so as to obtain a blood vessel segmentation image;
the second conversion module is used for converting the blood vessel segmentation image into a binary image;
an eighth obtaining module, configured to obtain a first number: the number of pixels with value 255 in the current anchor frame region of the binary image;
the adjusting and acquiring module is used for adjusting the size of the current anchor frame area according to a preset rule and acquiring the adjusted anchor frame area;
a ninth obtaining module, configured to obtain a second number: the number of pixels with value 255 in the adjusted anchor frame region of the binary image;
a fifth determining module for determining that the outcome of the lesion detection is the first lesion if the second number is greater than the first number;
a sixth determining module for determining that the result of lesion detection is the second lesion if the second number is less than or equal to the first number.
Preferably, the apparatus further comprises:
and the distinguishing module is used for distinguishing the images into images with different lesion periods through a convolutional neural network model.
Preferably, the convolutional neural network model is a ResNet50 network comprising one 3×3 convolution kernel with stride 2 and two 3×3 convolution kernels with stride 1; the last layer of the ResNet50 network introduces the Convolutional Block Attention Module (CBAM).
In order to solve the above technical problem, the present application further provides a lesion detection apparatus, including:
a memory for storing a computer program;
the processor is used for acquiring the image of the part to be detected of the user through image acquisition equipment when the computer program is executed; obtaining an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; acquiring the position of a blood vessel in the image; and determining that the lesion detection result is the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessel.
In order to solve the above technical problem, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, acquires an image of a user's part to be detected through image acquisition equipment; obtains an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; acquires the positions of blood vessels in the image; and determines whether the lesion detection result is the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessels.
The lesion detection device provided by the present application includes: an acquisition module for acquiring an image of a user's part to be detected through image acquisition equipment; a first obtaining module for obtaining an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; a second obtaining module for obtaining the positions of blood vessels in the image; and a detection module for determining whether the lesion detection result is the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessels. By first obtaining the anchor frame containing the first and second lesions and then further discriminating between them according to the anchor frame or its relationship to the blood vessels, the device distinguishes the lesions and improves the accuracy of lesion detection.
In addition, the present application also provides lesion detection equipment and a computer-readable storage medium, which have technical features identical or corresponding to those of the lesion detection device described above and achieve the same effects.
Drawings
In order to illustrate the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without inventive effort.
Fig. 1 is a block diagram of an apparatus for lesion detection according to an embodiment of the present disclosure;
fig. 2 is a structural diagram of an improved ResNet50 network according to an embodiment of the present application;
fig. 3 is a block diagram of an apparatus for lesion detection according to an embodiment of the present application;
fig. 4 is a view of an application scenario of a lesion detection apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the present application.
The core of the application is to provide a device, equipment and medium for lesion detection, used to distinguish lesions and thereby improve the accuracy of lesion detection.
It should be noted that the lesion-distinguishing device of the present application is suitable for detecting lesions at different locations and for distinguishing multiple different lesions; it is not limited to distinguishing two kinds of lesions. The application mainly detects fundus lesions of DR patients. DR is one of the most serious microvascular complications of diabetes; in its early stage it has no obvious symptoms and does not easily attract patients' attention. However, as the disease progresses, lesions such as microaneurysms and bleeding spots appear on the patient's fundus and vision gradually begins to decline; if the best treatment period is missed, treatment becomes harder, and severe cases may lead to blindness. Since early intervention and diagnosis of DR can reduce the incidence of complications, early detection of fundus lesions in DR patients is important. Because bleeding points are similar both to microaneurysms and to retinal tissue, with only small differences between them, the present application detects fundus images with a deep learning algorithm in order to distinguish these two similar lesions, separating bleeding points from microaneurysm lesions and thereby improving the accuracy of fundus image detection.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings. Fig. 1 is a block diagram of an apparatus for lesion detection according to an embodiment of the present application, as shown in fig. 1, the apparatus includes:
the acquisition module 1 is used for acquiring the image of the part to be detected of the user through the image acquisition equipment.
The first acquisition module 2 is used for acquiring an anchor frame containing a first focus and a second focus from the image through a target detection algorithm;
the second acquisition module 3 is used for acquiring the position of a blood vessel in the image;
and the detection module 4 is used for determining the focus detection result as a first focus or a second focus according to the anchor frame or the relation between the anchor frame and the blood vessel.
When detecting a user's lesion, an image of the user's part to be detected first needs to be acquired through the acquisition module; the part to be detected is the part affected by the disease. The image is acquired through image acquisition equipment; the specific equipment is not limited and is determined according to the actual situation. For example, in the case of a DR patient, lesions such as bleeding points or microaneurysms appear on the fundus as the condition progresses, so an image of the patient's fundus needs to be acquired. To cover a larger area of the fundus, this embodiment uses ultra-wide-angle fundus imaging, which employs red-green two-color laser dual-channel scanning and can display about 80% of the retinal area in a single shot. Combined with clinical practice, the ultra-wide-angle fundus image gives the ophthalmologist a wider field of view and helps the doctor judge the DR patient's condition.
After the acquisition module acquires the image of the part to be detected, lesions can be detected from the image. To distinguish the first lesion from the second lesion, the application first needs to detect the targets (the first lesion and the second lesion) in the acquired image. A target detection algorithm is used for this: an anchor frame containing the first lesion and the second lesion is obtained from the image through the target detection algorithm. When detecting fundus images of DR patients, the first lesion may be taken as a bleeding point and the second lesion as a microaneurysm.
After the anchor frame containing the first lesion and the second lesion is acquired, the positions of the blood vessels in the image are further acquired through the second obtaining module. A Res_Unet network can be used to segment the vessels in the image to obtain a vessel segmentation image. Finally, the detection module distinguishes the first lesion from the second lesion according to the area of the anchor frame, or according to the positional relationship between the anchor frame and the blood vessels.
The lesion detection device provided in this embodiment includes: an acquisition module for acquiring an image of a user's part to be detected through image acquisition equipment; a first obtaining module for obtaining an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; a second obtaining module for obtaining the positions of blood vessels in the image; and a detection module for determining whether the lesion detection result is the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessels. By first obtaining the anchor frame containing the first and second lesions and then further discriminating between them according to the anchor frame or its relationship to the blood vessels, the device distinguishes the lesions and improves the accuracy of lesion detection.
In order to improve the accuracy of detection when acquiring an image containing a first lesion and a second lesion by a target detection algorithm, in an implementation, a preferred embodiment is that the first acquiring module includes:
the second obtaining module is used for obtaining all target frame sets and the score corresponding to each target frame in the sets through a Faster R-CNN detection network;
the selecting module is used for selecting the candidate frame with the largest score from the target frames;
and the removing and placing module is used for removing the candidate frame with the largest score from the target frame set and placing the candidate frame into a final detection frame result set so as to obtain an anchor frame containing the first focus and the second focus from the final detection frame result set.
The target detection algorithm used is not limited. Because Faster R-CNN (Faster Regions with CNN Features) is a two-stage network that generates proposals through a Region Proposal Network (RPN), it achieves higher-precision object detection; compared with one-stage networks, a two-stage network is more accurate, with particularly clear advantages for high-precision, multi-scale, and small-object problems. Faster R-CNN performs well on many datasets and detection tasks, and for an individual dataset good results can be achieved after fine-tuning, so this embodiment adopts the Faster R-CNN detection algorithm for target detection. The Faster R-CNN detection network mainly comprises feature extraction, the extracted feature map, the RoI pooling layer, and target classification with detection-box regression. The feature extraction network of Faster R-CNN is a 50-layer deep residual network (ResNet50) combined with a Feature Pyramid Network (FPN) structure. Specifically:
(1) First, all target box sets B (b_i ∈ B) generated by the Faster R-CNN detection network and the corresponding scores s_i are obtained.
(2) The candidate box M with the largest score is selected from the set B; M is removed from B and placed into the final detection-box result set D.
The target frame in this embodiment refers to a feature box containing a first lesion or a second lesion. Conventionally, the target boxes are scored by formula (1).
$$
s_i =
\begin{cases}
s_i, & \mathrm{IoU}(M, b_i) < N_t \\
0, & \mathrm{IoU}(M, b_i) \ge N_t
\end{cases}
\tag{1}
$$

In formula (1), s_i denotes the score of target box b_i, IoU denotes the intersection-over-union of the highest-scoring candidate box M and the target box, and N_t denotes a threshold. The threshold is not limited and is determined according to the actual situation. When the intersection-over-union is smaller than the threshold, the score of the corresponding target box remains s_i; when it is greater than or equal to the threshold, the score of the corresponding target box is set to 0.
Acquiring the anchor frame containing the first lesion and the second lesion through the Faster R-CNN detection network provided by this embodiment can improve the accuracy of lesion detection.
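Steps (1)–(2) together with the formula (1) scoring rule amount to classical hard non-maximum suppression. A minimal sketch in plain Python (the function names and sample boxes/scores are illustrative, not from the patent):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def hard_nms(boxes, scores, n_t=0.5):
    """Formula (1): keep the highest-scoring box M, set to zero (remove) any
    remaining box whose IoU with M is >= n_t, then repeat on the survivors."""
    remaining = {i: s for i, s in enumerate(scores)}
    kept = []  # indices forming the final detection-box result set D
    while remaining:
        m = max(remaining, key=remaining.get)
        kept.append(m)
        del remaining[m]
        for i in list(remaining):
            if iou(boxes[m], boxes[i]) >= n_t:
                del remaining[i]  # score zeroed, i.e. box discarded
    return kept
```

For two heavily overlapping boxes only the higher-scoring one survives, which is exactly the behaviour the next paragraphs argue against for overlapping lesions.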
In the scoring mechanism of the above embodiment, when the intersection-over-union is greater than or equal to the threshold, the score of the corresponding target box is set to 0. In practice, however, some lesions overlap; if a box is zeroed out merely because the overlap is large, boxes containing the first lesion and the second lesion cannot be retained as far as possible, which reduces detection accuracy. Therefore, in implementation, a preferred embodiment is that the second obtaining module includes:
a third obtaining module, configured to obtain all target frame sets and the initial score corresponding to each target frame through the Faster R-CNN detection network;
the fourth acquisition module is used for acquiring the overlapping rate of each target frame and the candidate frame with the largest score;
a fifth obtaining module, configured to obtain a score corresponding to the target frame according to the initial score corresponding to the target frame and an intersection ratio between the target frame and the candidate frame with the largest score when an overlapping rate between the target frame and the candidate frame with the largest score is greater than a first threshold;
and the module is used for taking the initial score corresponding to the target frame as the score of the target frame when the overlapping rate of the target frame and the candidate frame with the maximum score is less than or equal to a first threshold value.
This embodiment improves the scoring mechanism of the above embodiment: the RPN of the Faster R-CNN network is improved, and a soft non-maximum suppression algorithm (Soft-NMS) is adopted for lesion detection on ultra-wide-angle fundus images of diabetic retinopathy. The improved scoring is shown in formula (2):
$$
s_i =
\begin{cases}
s_i, & \mathrm{IoU}(M, b_i) \le N_t \\
s_i \, \bigl(1 - \mathrm{IoU}(M, b_i)\bigr), & \mathrm{IoU}(M, b_i) > N_t
\end{cases}
\tag{2}
$$

In formula (2), to distinguish it from the thresholds described later, N_t is called the first threshold. To distinguish the score of a target box when the intersection-over-union is greater than the first threshold from its score when it is less than or equal to the first threshold, this embodiment calls s_i the initial score. When the overlap rate (also called intersection-over-union) between a target box and the highest-scoring candidate box is greater than the first threshold, the score of the target box becomes the product of its initial score and (1 − intersection-over-union).
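The formula (2) decay can be sketched in plain Python (the `iou` helper and the sample boxes/scores are illustrative, not from the patent):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def soft_nms(boxes, scores, n_t=0.5):
    """Formula (2): instead of zeroing an overlapping box, decay its score
    by (1 - IoU) whenever its IoU with the current best box exceeds n_t."""
    scores = list(scores)
    pending = list(range(len(boxes)))
    order = []
    while pending:
        m = max(pending, key=lambda i: scores[i])
        order.append(m)
        pending.remove(m)
        for i in pending:
            ov = iou(boxes[m], boxes[i])
            if ov > n_t:
                scores[i] *= (1.0 - ov)  # decayed, not removed
    return order, scores
```

Unlike hard NMS, every box keeps a (possibly reduced) score, so a small lesion overlapping a larger one is still reported rather than discarded.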
The anchor frame sizes and ratios in the RPN of Faster R-CNN are shown in Table 2:
TABLE 2 Anchor frame size and proportion in RPN
[Table 2 is present only as an image in the source; its values are not reproduced here.]
Compared with the previous scoring mechanism, which directly sets the score of a target box to 0 when the intersection-over-union exceeds the threshold (i.e., removes the box), the device of this embodiment still assigns a score to boxes whose intersection-over-union exceeds the threshold, so fine first and second lesions can be retained and the detection accuracy of fine features improves. Taking fundus lesion detection as an example, the device of this embodiment can retain fine bleeding points and microaneurysms, which facilitates subsequent detection and improves detection precision.
In one embodiment, in order to distinguish between the first lesion and the second lesion, the detection module comprises:
the sixth acquisition module is used for acquiring the area of the anchor frame;
the judging module is used for judging whether the area is larger than a second threshold value or not;
if yes, triggering a first determining module, and if not, triggering a second determining module;
a first determination module for determining a result of lesion detection as a first lesion;
a second determination module for determining a result of the lesion detection as a second lesion.
Taking fundus lesion detection as an example: after the Faster R-CNN detection algorithm acquires an anchor frame {Anchor | x1, y1, x2, y2} containing a bleeding point or a microaneurysm, where (x1, y1) and (x2, y2) are the position coordinates of its upper-left and lower-right corners, the area S enclosed by {x1, y1, x2, y2} is calculated; if S is greater than a second threshold, the lesion is directly determined to be a bleeding point. The second threshold is not limited and is determined according to the actual situation. After a number of tests in this embodiment, the preferred second threshold is 2000: if the area S is greater than 2000, the lesion is directly determined to be a bleeding point; otherwise, it is determined to be a microaneurysm.
The present embodiment provides a method for determining a lesion based on the area of an anchor frame containing a first lesion and a second lesion, which can rapidly distinguish the first lesion from the second lesion.
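The area rule above can be sketched as follows (plain Python; the function name and sample coordinates are illustrative, while the threshold 2000 is the embodiment's preferred value):

```python
def classify_by_area(anchor, second_threshold=2000):
    """Area rule: anchor is (x1, y1, x2, y2), with (x1, y1) the upper-left
    and (x2, y2) the lower-right corner.  An enclosed area above the second
    threshold is taken as a bleeding point, otherwise a microaneurysm."""
    x1, y1, x2, y2 = anchor
    s = (x2 - x1) * (y2 - y1)
    return "bleeding point" if s > second_threshold else "microaneurysm"
```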
In the above embodiment, the first lesion and the second lesion are distinguished according to the area of the anchor frame; in fact, they may also be distinguished according to the relationship between the anchor frame and the blood vessels. In a preferred embodiment, the detection module comprises:
the first segmentation module is used for segmenting the image through a Res_Unet network so as to obtain a blood vessel segmentation image;
the first conversion module is used for converting the blood vessel segmentation image into a binary image;
a seventh obtaining module, configured to obtain the number of pixels with value 255 in the anchor frame region of the binary image;
a third determining module, configured to determine, in a case where the number is 0, that a result of the lesion detection is the first lesion;
and a fourth determining module for determining the result of the lesion detection as the second lesion in case that the number is not 0.
Again taking fundus lesion detection as an example: after the Faster R-CNN detection algorithm acquires an anchor frame {Anchor | x1, y1, x2, y2} containing a bleeding point or a microaneurysm, the image is segmented through a Res_Unet network to acquire a vessel segmentation image, where Res_Unet is a Unet that uses ResNet34 as its feature extraction network. The vessel segmentation image is converted into a binary image, and the number SUM of pixels with value 255 in the region enclosed by the coordinates {x1, y1, x2, y2} of the binary image is calculated. If SUM is 0, the lesion is determined to be a bleeding point; otherwise, it is determined to be a microaneurysm (a microaneurysm appears on a blood vessel, so a vessel is bound to lie in the region enclosing it, and the vessel segmentation result appears white in the binary image).
This embodiment shows that the first lesion can be distinguished from the second lesion based on the relationship between the anchor frame and the blood vessels.
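A minimal sketch of this vessel-overlap rule, assuming the binary vessel-segmentation image is a NumPy array with vessel pixels set to 255 (the function name and sample data are illustrative):

```python
import numpy as np

def classify_by_vessel_overlap(binary_vessel_map, anchor):
    """Count pixels with value 255 (vessel) inside the anchor frame region
    of the binary vessel-segmentation image.  A count of 0 means no vessel
    passes through the region, so the lesion is a bleeding point; otherwise
    it is a microaneurysm, since microaneurysms occur on vessels."""
    x1, y1, x2, y2 = anchor
    region = binary_vessel_map[y1:y2, x1:x2]  # rows are y, columns are x
    vessel_pixels = int(np.count_nonzero(region == 255))
    return "bleeding point" if vessel_pixels == 0 else "microaneurysm"
```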
In addition to the manner of distinguishing a first lesion from a second lesion described in the above embodiments, the present embodiment also provides a manner of distinguishing a first lesion from a second lesion, specifically, the detection module includes:
the second segmentation module is used for segmenting the image through a Res_Unet network so as to obtain a blood vessel segmentation image;
the second conversion module is used for converting the blood vessel segmentation image into a binary image;
an eighth obtaining module, configured to obtain a first number: the number of pixels with value 255 in the current anchor frame region of the binary image;
the adjusting and acquiring module is used for adjusting the size of the current anchor frame area according to a preset rule and acquiring the adjusted anchor frame area;
a ninth obtaining module, configured to obtain a second number: the number of pixels with value 255 in the adjusted anchor frame region of the binary image;
a fifth determining module for determining the result of lesion detection as the first lesion if the second number is greater than the first number;
a sixth determining module for determining the result of lesion detection as the second lesion if the second number is less than or equal to the first number.
Again taking fundus lesion detection as an example: after the Faster R-CNN detection algorithm acquires an anchor frame {Anchor | x1, y1, x2, y2} containing a bleeding point or a microaneurysm, the image is segmented through a Res_Unet network to acquire a vessel segmentation image, and the vessel segmentation image is converted into a binary image. In this embodiment, the anchor frame region is adjusted according to {x1−i, x2+i, y1−i, y2+i | 0 ≤ i ≤ 20, i mod 5 = 0}, where i is the adjustment step size and the constraint 0 ≤ i ≤ 20, i mod 5 = 0 is determined empirically. For each i, the number sum_i of pixels with value 255 in the region enclosed by the coordinates {x1−i, x2+i, y1−i, y2+i} of the binary image is calculated. If sum_i stops increasing as the region grows, the lesion is determined to be a bleeding point; otherwise, it is determined to be a microaneurysm.
By adjusting the size of the anchor frame region in this way and examining its positional relationship with the blood vessels, the first lesion can be distinguished from the second lesion.
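The adjustment rule above can be sketched in pure Python. This is an illustrative sketch, not the patent's code: the binary image is represented as nested lists of 0/255 values, the function names are hypothetical, and the decision rule follows the description (the pixel count grows as the frame grows ⇒ bleeding point, else microangioma).

```python
# Sketch of the anchor-frame adjustment test: enlarge the anchor frame in
# steps of 5 pixels (up to 20) and count 255-valued pixels of the binary
# vessel-segmentation image inside it. A growing count means the lesion
# touches a vessel -> bleeding point; otherwise -> microangioma.

def count_vessel_pixels(binary, x1, y1, x2, y2):
    """Number of pixels with value 255 inside the (clipped) box [x1,x2] x [y1,y2]."""
    h, w = len(binary), len(binary[0])
    total = 0
    for y in range(max(0, y1), min(h, y2 + 1)):
        for x in range(max(0, x1), min(w, x2 + 1)):
            if binary[y][x] == 255:
                total += 1
    return total

def classify_lesion(binary, anchor, max_step=20, stride=5):
    """Apply the rule {x1-i, y1-i, x2+i, y2+i | 0 <= i <= 20, i mod 5 = 0}."""
    x1, y1, x2, y2 = anchor
    first = count_vessel_pixels(binary, x1, y1, x2, y2)
    for i in range(stride, max_step + 1, stride):
        second = count_vessel_pixels(binary, x1 - i, y1 - i, x2 + i, y2 + i)
        if second > first:          # enlarging the frame picked up vessel pixels
            return "bleeding point"
        first = second
    return "microangioma"
```

A detection pipeline would call `classify_lesion` once per anchor frame returned by the detector, passing the same binarized vessel map each time.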
In summary, three ways of distinguishing the first lesion from the second lesion have been described above, making the distinction more flexible.
Since lesions appear differently in images from different disease stages, in order to distinguish lesions more accurately, in a preferred embodiment the apparatus for lesion detection further comprises:
a distinguishing module, used for classifying the images into images of different lesion stages through a convolutional neural network model.
For a DR patient, this embodiment divides fundus images into two groups: stage 0-3 fundus images and stage 4 fundus images. In this embodiment, lesions are detected in the stage 0-3 fundus images, namely bleeding points and microangiomas.
The specific convolutional neural network model is not limited. In this embodiment, the model used is a ResNet50 network in which the 7X7 convolution kernel with stride 2 is replaced by a 3X3 convolution kernel with stride 2 followed by two 3X3 convolution kernels with stride 1, and a Convolutional Block Attention Module (CBAM) is introduced in the last layer. Fig. 2 is a structural diagram of the improved ResNet50 network according to an embodiment of the present application. Compared with the original ResNet50, modifying the 7X7 stride-2 convolution kernel to a 3X3 stride-2 kernel and adding two 3X3 stride-1 kernels makes the model focus more on subtle lesions in the image, while the CBAM added in the last layer of the ResNet improves the accuracy of network staging.
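To illustrate the attention mechanism referred to above, the following is a minimal pure-Python sketch of CBAM's channel-attention branch only (spatial attention is omitted for brevity). It is not the patent's implementation: the feature map is nested lists, and the shared-MLP weights are fixed toy values, whereas a real network learns them.

```python
# Sketch of CBAM channel attention: global average- and max-pooling over the
# spatial dimensions, a shared two-layer MLP, a sigmoid, and a per-channel
# rescaling of the feature map. Feature map shape: C x H x W nested lists.
import math

def channel_attention(feats, reduction=2):
    C = len(feats)
    # Global average- and max-pooling over the spatial dimensions.
    avg = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feats]
    mx = [max(max(row) for row in ch) for ch in feats]
    hidden = max(1, C // reduction)
    # Shared two-layer MLP with fixed toy weights (assumption; learned in practice).
    w1 = [[1.0 if j % hidden == i else 0.0 for j in range(C)] for i in range(hidden)]
    w2 = [[1.0 if i % hidden == j else 0.0 for j in range(hidden)] for i in range(C)]
    def mlp(v):
        h = [max(0.0, sum(w1[i][j] * v[j] for j in range(C))) for i in range(hidden)]  # ReLU
        return [sum(w2[i][j] * h[j] for j in range(hidden)) for i in range(C)]
    a, m = mlp(avg), mlp(mx)
    # Sigmoid of the summed MLP outputs gives one attention weight per channel.
    scale = [1.0 / (1.0 + math.exp(-(a[i] + m[i]))) for i in range(C)]
    # Reweight every spatial position of each channel by its attention weight.
    return [[[v * scale[c] for v in row] for row in feats[c]] for c in range(C)]
```

Because each attention weight lies in (0, 1), the module can only attenuate channels, letting the staging network emphasize channels that respond to subtle lesions.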
Fig. 3 is a block diagram of an apparatus for lesion detection according to an embodiment of the present application. From a hardware perspective, as shown in fig. 3, the apparatus for lesion detection includes:
a memory 20 for storing a computer program;
a processor 21, configured to implement, when executing the computer program, the steps mentioned in the above embodiments: acquiring an image of the part to be detected of a user through the image acquisition device; acquiring an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; acquiring the position of the blood vessels in the image; and determining the lesion detection result as the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessels.
The device for detecting a lesion provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, or a desktop computer.
The processor 21 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 21 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA). The processor 21 may also include a main processor and a coprocessor: the main processor, also called a Central Processing Unit (CPU), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 21 may be integrated with a Graphics Processing Unit (GPU), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may further include an Artificial Intelligence (AI) processor for handling computational operations related to machine learning.
The memory 20 may include one or more computer-readable storage media, which may be non-transitory. The memory 20 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 20 is at least used for storing a computer program that can be loaded and executed by the processor 21 to implement the functions of the apparatus for lesion detection disclosed in any of the above embodiments. In addition, the resources stored in the memory 20 may include an operating system, data, and the like, and the storage may be transient or permanent. The operating system may include Windows, Unix, Linux, and the like.
In some embodiments, the lesion detection device may further include a display screen, an input/output interface, a communication interface, a power source, and a communication bus.
Those skilled in the art will appreciate that the configuration shown in fig. 3 does not constitute a limitation of the apparatus for lesion detection and may include more or fewer components than those shown.
The lesion detection device provided in this embodiment of the application includes a memory and a processor. When executing the program stored in the memory, the processor implements the relevant functions of the lesion detection apparatus, with the same effects as the apparatus for lesion detection described above.
Finally, the application also provides a corresponding embodiment of a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements: acquiring an image of the part to be detected of a user through an image acquisition device; acquiring an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; acquiring the position of the blood vessels in the image; and determining the lesion detection result as the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessels.
It is to be understood that, if the methods in the above embodiments are implemented in the form of software functional units and sold or used as stand-alone products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product that is stored in a storage medium and executes all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
To help those skilled in the art better understand, the present application is further described in detail below with reference to fig. 4 and a specific embodiment. Fig. 4 is an application scenario diagram of a lesion detection apparatus according to an embodiment of the present disclosure. As shown in fig. 4, after the ultra-wide-angle fundus image is acquired, the image goes through three stages of processing.
In the first stage, the image is first preprocessed, and the modified ResNet50 is used to divide the images into stage 0-3 images and stage 4 images;
in the second stage, the image is segmented to obtain a segmentation map, and the improved FasterRCNN algorithm is used to obtain a preliminary lesion detection result;
in the third stage, the blood vessels are segmented to obtain a blood vessel segmentation map, and the final lesion detection result is output by the bleeding-point/microangioma distinguishing algorithm together with the preliminary lesion detection result.
The ultra-wide-angle fundus image of a DR patient is acquired, and bleeding points and microangiomas in the image are preliminarily detected with the FasterRCNN network. Using the preliminarily detected anchor-frame coordinates of the bleeding points and microangiomas together with the blood vessel segmentation map of the ultra-wide-angle fundus image, the positional relationship between the anchor frame and the blood vessels is calculated, so that bleeding points are distinguished from microangiomas and the accuracy of lesion detection is improved.
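The three-stage flow of fig. 4 can be expressed as a small orchestration sketch. All component functions below are hypothetical stubs standing in for the modified ResNet50, FasterRCNN, Res_Unet, and the distinguishing rule; only the control flow reflects the description.

```python
# Sketch of the three-stage pipeline: staging -> preliminary detection ->
# vessel-based refinement. Stubs return fixed dummy values for illustration.

def stage_model(image):
    # Placeholder for the modified ResNet50: returns the stage group "0-3" or "4".
    return "0-3"

def detect_lesions(image):
    # Placeholder for FasterRCNN: returns candidate anchor frames (x1, y1, x2, y2).
    return [(10, 10, 14, 14)]

def segment_vessels(image):
    # Placeholder for Res_Unet: returns a binarized vessel map (0 / 255).
    return [[0] * 30 for _ in range(30)]

def classify(binary, anchor):
    # Placeholder for the anchor-enlargement rule described in the text.
    return "microangioma"

def detect(image):
    """Stage 1: staging; stage 2: preliminary detection; stage 3: refinement."""
    if stage_model(image) != "0-3":
        return []  # only stage 0-3 images are examined for these lesions
    binary = segment_vessels(image)
    return [(anchor, classify(binary, anchor)) for anchor in detect_lesions(image)]
```

Replacing each stub with the corresponding trained model yields the full apparatus; the orchestrator itself stays unchanged.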
The device, apparatus and medium for lesion detection provided in the present application are described in detail above. The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.

Claims (10)

1. An apparatus for lesion detection, comprising:
the acquisition module is used for acquiring an image of a part to be detected of a user through image acquisition equipment;
the first acquisition module is used for acquiring an anchor frame containing a first focus and a second focus from the image through a target detection algorithm;
the second acquisition module is used for acquiring the position of the blood vessel in the image;
and the detection module is used for determining that the focus detection result is the first focus or the second focus according to the anchor frame or the relation between the anchor frame and the blood vessel.
2. The apparatus for lesion detection of claim 1, wherein the first obtaining module comprises:
a second obtaining module, configured to obtain the set of all target frames and the score corresponding to each target frame in the set through a FasterRCNN detection network;
the selecting module is used for selecting the candidate frame with the largest score from the target frames;
a removing and placing module, configured to remove the candidate frame with the largest score from the target frame set and place it into a final detection frame result set, so as to obtain the anchor frame containing the first lesion and the second lesion from the final detection frame result set.
3. The apparatus for lesion detection of claim 2, wherein the second acquisition module comprises:
a third obtaining module, configured to obtain the set of all target frames and the initial score corresponding to each target frame through the FasterRCNN detection network;
a fourth obtaining module, configured to obtain an overlap ratio between each target frame and the candidate frame with the largest score;
a fifth obtaining module, configured to, when an overlap ratio of the target frame and the candidate frame with a largest score is greater than a first threshold, obtain a score corresponding to the target frame according to the initial score corresponding to the target frame and an intersection ratio of the target frame and the candidate frame with the largest score;
and a module configured to take the initial score corresponding to the target frame as the score of the target frame when the overlap ratio between the target frame and the candidate frame with the largest score is less than or equal to the first threshold.
4. The apparatus for lesion detection according to claim 3, wherein the detection module comprises:
a sixth obtaining module, configured to obtain an area of the anchor frame;
the judging module is used for judging whether the area is larger than a second threshold value or not;
if yes, triggering a first determining module, and if not, triggering a second determining module;
the first determination module is configured to determine that the result of the lesion detection is the first lesion;
the second determination module is configured to determine that the result of the lesion detection is the second lesion.
5. The apparatus for lesion detection as claimed in claim 3, wherein the detection module comprises:
the first segmentation module is used for segmenting the image through a Res _ Unet network so as to obtain a blood vessel segmentation image;
the first conversion module is used for converting the blood vessel segmentation image into a binary image;
a seventh obtaining module, configured to obtain the number of pixels with value 255 in the binary image within the anchor frame region;
a third determining module, configured to determine a result of the lesion detection as the first lesion if the number is 0;
a fourth determination module for determining that the result of the lesion detection is the second lesion if the number is not 0.
6. The apparatus for lesion detection according to claim 3, wherein the detection module comprises:
the second segmentation module is used for segmenting the image through a Res _ Unet network so as to obtain a blood vessel segmentation image;
the second conversion module is used for converting the blood vessel segmentation image into a binary image;
the eighth obtaining module, configured to obtain a first number, being the number of pixels with value 255 in the binary image within the current anchor frame region;
the adjusting and acquiring module is used for adjusting the size of the current anchor frame area according to a preset rule and acquiring the adjusted anchor frame area;
a ninth obtaining module, configured to obtain a second number, being the number of pixels with value 255 in the binary image within the adjusted anchor frame region;
a fifth determining module for determining that the outcome of the lesion detection is the first lesion if the second number is greater than the first number;
a sixth determining module for determining that the result of lesion detection is the second lesion if the second number is less than or equal to the first number.
7. The apparatus for lesion detection according to any one of claims 1 to 6, further comprising:
and the distinguishing module is used for distinguishing the images into the images with different lesion periods through a convolutional neural network model.
8. The apparatus for lesion detection according to claim 7, wherein the convolutional neural network model is a ResNet50 network, and the ResNet50 network comprises convolution kernels with 3X3 step sizes of 2 and 2 convolution kernels with 3X3 step sizes of 1; the last layer of the ResNet50 network introduces a CBAM model with attention mechanism.
9. An apparatus for lesion detection, comprising:
a memory for storing a computer program;
the processor is used for acquiring the image of the part to be detected of the user through image acquisition equipment when the computer program is executed; obtaining an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; acquiring the position of a blood vessel in the image; and determining that the result of the lesion detection is the first lesion or the second lesion according to the anchor frame or the relation between the anchor frame and the blood vessel.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program enables acquiring an image of a part to be detected of a user by an image acquisition device; obtaining an anchor frame containing a first lesion and a second lesion from the image through a target detection algorithm; acquiring the position of a blood vessel in the image; and determining that the lesion detection result is the first lesion or the second lesion according to the anchor frame or the relationship between the anchor frame and the blood vessel.
CN202211325517.9A 2022-10-27 2022-10-27 Device, equipment and medium for detecting focus Pending CN115908274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211325517.9A CN115908274A (en) 2022-10-27 2022-10-27 Device, equipment and medium for detecting focus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211325517.9A CN115908274A (en) 2022-10-27 2022-10-27 Device, equipment and medium for detecting focus

Publications (1)

Publication Number Publication Date
CN115908274A true CN115908274A (en) 2023-04-04

Family

ID=86490248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211325517.9A Pending CN115908274A (en) 2022-10-27 2022-10-27 Device, equipment and medium for detecting focus

Country Status (1)

Country Link
CN (1) CN115908274A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563237A (en) * 2023-05-06 2023-08-08 大连工业大学 Deep learning-based chicken carcass defect hyperspectral image detection method
CN116563237B (en) * 2023-05-06 2023-10-20 大连工业大学 Deep learning-based chicken carcass defect hyperspectral image detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination