CN112435256A - CNV active focus detection method and device based on image and electronic equipment - Google Patents
- Publication number: CN112435256A (application CN202011464273.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- lesion
- sample
- cnv
- sample image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Image analysis; inspection of images; biomedical image inspection
- G06T3/60 — Geometric image transformations in the plane of the image; rotation of whole images or parts thereof
- G06T5/92 — Image enhancement or restoration; dynamic range modification of images based on global image properties
- G06T7/11 — Image analysis; segmentation and edge detection; region-based segmentation
- G06T2207/10101 — Image acquisition modality: optical tomography; optical coherence tomography [OCT]
- G06T2207/20081 — Special algorithmic details: training; learning

(All codes fall under G—Physics; G06—Computing; G06T—Image data processing or generation, in general.)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
The application provides an image-based CNV active lesion detection method and apparatus, and an electronic device. The detection method includes: acquiring an optical coherence tomography image to be detected; extracting a plurality of feature maps of different sizes from the optical coherence tomography image through a feature pyramid network; obtaining, in each feature map, a plurality of lesion candidate regions of different sizes and aspect ratios through a region generation network; and determining, through a classification regression network, the lesion region representing CNV activity from the plurality of lesion candidate regions. The method and apparatus improve the detection rate and detection accuracy for characteristic lesions representing CNV.
Description
Technical Field
The invention relates to the field of image processing, in particular to an image-based CNV active lesion detection method and apparatus, an electronic device, and a computer storage medium.
Background
Wet Age-related Macular Degeneration (wAMD) is an age-related degenerative disease of the macular region caused by the formation of Choroidal Neovascularization (CNV), which can lead to sub-macular fluid and lipid leakage and to fibrous scarring. At present, this disease is judged manually on the basis of experience, a method whose standards are inconsistent and whose efficiency and accuracy are low.
With the spread of artificial intelligence technology, an intelligent detection technique is urgently needed to replace manual identification and improve accuracy and efficiency.
Disclosure of Invention
The embodiments of the application provide an image-based CNV active lesion detection method that improves the detection rate and efficiency for characteristic CNV lesions.
The embodiment of the application provides a CNV active lesion detection method based on an image, which comprises the following steps:
acquiring an optical coherence tomography image to be detected;
extracting a plurality of feature maps with different sizes of the optical coherence tomography image through a feature pyramid network;
obtaining a plurality of lesion candidate regions with different sizes and different aspect ratios in each feature map through a region generation network;
a lesion region representing CNV activity is determined from the plurality of lesion candidate regions by a classification regression network.
In an embodiment, before the extracting a plurality of feature maps of different sizes from the optical coherence tomography image through the feature pyramid network, the method further includes:
cropping the optical coherence tomography image according to a preset scanning area.
In one embodiment, the different sizes include 128 × 128, 256 × 256, and 512 × 512; the different aspect ratios include 1:2, 1:1, and 2:1; and the number of the lesion candidate regions is 9.
In one embodiment, the feature pyramid network, the region generation network and the classification regression network together form a target detection model; before the acquiring the optical coherence tomography image to be detected, the method further comprises:
acquiring an original scanning image of a known lesion coordinate;
cropping a scanning area from the original scan image to serve as a first sample image, and updating the lesion coordinates in the first sample image;
carrying out image enhancement on the first sample image to obtain a second sample image, and updating the lesion coordinates of the second sample image;
the target detection model is trained using a first sample image and a second sample image of known lesion coordinates.
In one embodiment, image enhancing the first sample image comprises:
and carrying out image brightness change or random rotation on the first sample image.
In one embodiment, after said determining a lesion region representing CNV activity from a plurality of lesion candidate regions by a classification regression network, the method further comprises:
mapping to obtain the position of the focus in the optical coherence tomography image according to the coordinate of the focus area representing the CNV activity;
and outputting an optical coherence tomography image for marking the position of the lesion.
The embodiment of the present application further provides an image-based CNV active lesion detection apparatus, including:
the image acquisition module is used for acquiring an optical coherence tomography image to be detected;
the feature extraction module is used for extracting a plurality of feature maps of different sizes from the optical coherence tomography image through a feature pyramid network;
a candidate obtaining module, configured to obtain, in each feature map, a plurality of lesion candidate regions with different sizes and different aspect ratios through a region generation network;
and the classification regression module is used for determining a lesion region representing the CNV activity according to the plurality of lesion candidate regions through a classification regression network.
In one embodiment, the feature pyramid network, the region generation network, and the classification regression network together form a target detection model, and the apparatus further includes:
the sample acquisition module is used for acquiring an original scanning image of a known lesion coordinate;
the sample intercepting module is used for intercepting a scanning area from the original scanning image to serve as a first sample image and updating the lesion coordinates in the first sample image;
the sample change module is used for carrying out image enhancement on the first sample image to obtain a second sample image and updating the lesion coordinates of the second sample image;
and the model training module is used for training the target detection model by utilizing the first sample image and the second sample image of the known lesion coordinates.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above image-based CNV active lesion detection method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program executable by a processor to perform the above image-based CNV active lesion detection method.
According to the technical scheme provided by the embodiments of the application, the optical coherence tomography image to be detected is acquired; a plurality of feature maps of different sizes are extracted from the image through a feature pyramid network; a plurality of lesion candidate regions of different sizes and aspect ratios are obtained in each feature map through a region generation network; and a classification regression network determines the lesion region representing CNV activity from the candidate regions. This improves both the detection rate and the detection efficiency for characteristic lesions representing CNV.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic view of an application scenario of a CNV active lesion detection method based on an image according to an embodiment of the present application;
fig. 2 is a schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of an image-based CNV active lesion detection method according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a training process of a target detection model according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a complete training process of a target detection model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of sample image preprocessing provided by an embodiment of the present application;
fig. 7 is a block diagram of an image-based CNV active lesion detection apparatus according to an embodiment of the present application;
fig. 8 is a block diagram of an image-based CNV active lesion detection apparatus according to another embodiment of the present application based on the corresponding embodiment of fig. 7.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic view of an application scenario of the image-based CNV active lesion detection method according to an embodiment of the present application. As shown in fig. 1, the scenario includes a client 110 and a server 120. The client 110 sends an optical coherence tomography image with known lesion coordinates (referred to as the original scan image) to the server 120. The server 120 crops the scanning area from the original scan image to obtain what is called a first sample image and updates its lesion coordinates; it then applies multiple data enhancement methods to obtain one or more second sample images and updates their lesion coordinates; and it merges the first sample image and the second sample images with known lesion coordinates into a new dataset. The image-based CNV active lesion detection model is trained with this new dataset. Once the model is trained, the server 120 can use it, according to the method provided in the embodiments of the present application, to identify lesion regions in an optical coherence tomography image to be detected.
Fig. 2 is a schematic view of an electronic device provided in an embodiment of the present application. The electronic device 210 can be used as the server 120, and the electronic device 210 includes: a processor 230; a memory 220 for storing instructions executable by processor 230; wherein the processor 230 is configured to execute the image-based CNV active lesion detection method provided by the embodiment of the present application. A communication interface 240 for communication between the electronic device 210 and an external device; communication bus 250 provides for communication among memory 220, processor 230, and communication interface 240.
The Memory 220 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk. The memory also stores a plurality of modules, which are respectively executed by the processor to complete the steps of the image-based CNV active lesion detection method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program executable by processor 230 to perform the image-based CNV active lesion detection method described below.
Fig. 3 is a flowchart illustrating an image-based CNV active lesion detection method according to an embodiment of the present application. As shown in fig. 3, the detection method may include the following steps S310 to S340.
Step S310: and acquiring an optical coherence tomography image to be detected.
For the purpose of discrimination, the optical coherence tomography image to be detected is an optical coherence tomography image in which it is unknown whether a lesion region representing CNV activity exists. An optical coherence tomography image with known lesion coordinates may be referred to as an original scan image, a scan region cut from the original scan image may be referred to as a first sample image, and an image obtained by image enhancement of the first sample image may be referred to as a second sample image.
Image cropping here refers to cutting the OCT B-scan image out of the original OCT (Optical Coherence Tomography) image according to a predetermined crop size, discarding the en-face OCT image on the left and the basic information at the bottom, and updating the lesion coordinates according to the crop.
Image enhancement, also called image expansion or image augmentation, changes the first sample image using methods such as brightness change, distortion, cropping, horizontal/vertical flipping, addition of Gaussian or salt-and-pepper noise, and scaling. This expands the original dataset and increases the generalization ability of the model.
In an embodiment, after the optical coherence tomography image to be detected is acquired, the server may crop the optical coherence tomography image to be detected according to a preset scanning area.
The preset scanning area may be the area with the left en-face OCT image and the bottom basic information removed. The preset scanning area is cropped from the optical coherence tomography image to be detected and used as the model input.
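As a minimal sketch of this cropping step (the layout numbers below are hypothetical — in practice the scan region depends on the device's export format):

```python
import numpy as np

def crop_scan_region(oct_export, region):
    """Crop the preset B-scan area (x, y, width, height) out of a full OCT
    export, discarding the en-face thumbnail and the information bar."""
    x, y, w, h = region
    return oct_export[y:y + h, x:x + w]

# Hypothetical export layout: a 640x1024 image whose right 768 columns
# hold the B-scan, with the en-face thumbnail in the left 256 columns.
export = np.zeros((640, 1024), dtype=np.uint8)
b_scan = crop_scan_region(export, region=(256, 0, 768, 640))
```

The cropped `b_scan` is what the detection step (S320) then receives.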
Step S320: extracting a plurality of feature maps of different sizes from the optical coherence tomography image through a feature pyramid network.
A feature map characterizes the features of an image. As the convolution blocks deepen, the feature maps shrink, and the information a small object contributes to the feature map becomes weaker or disappears. In the embodiments of the application, a Feature Pyramid Network (FPN) serves as the backbone and outputs feature maps of several different sizes: the feature map output by the last convolutional layer and those output by intermediate convolutional layers are all used for prediction. Lesions of the same type vary in scale and shape across instances, and some lesions are small; to further improve the model, an FPN is added to enable detection at multiple scales. Without the FPN, only the last convolutional layer's feature map would be used for prediction, and small lesions would likely be missed.
The feature pyramid network can be trained in advance. The backbone can use ResNet50, ResNet101, ResNeXt50_32x4d, or ResNeXt101_32x8d for feature extraction.
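The FPN structure — several backbone feature maps of decreasing size, merged top-down so that coarse semantic context reaches the fine maps — can be illustrated without a deep-learning framework. The sketch below stands in 2× average pooling for the backbone stages and omits the lateral 1×1 convolutions, so it shows only the multi-scale wiring, not the patent's actual network:

```python
import numpy as np

def bottom_up(img, levels=3):
    """Simulate backbone stages: each level is a 2x average-pooled version
    of the previous one (even dimensions assumed)."""
    feats = [img]
    for _ in range(levels - 1):
        f = feats[-1]
        h, w = f.shape[0] // 2, f.shape[1] // 2
        feats.append(f[:h * 2, :w * 2].reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return feats

def top_down(feats):
    """FPN top-down pathway: upsample the coarser map (nearest neighbor)
    and add it to the finer one; finest map comes first in the result."""
    merged = [feats[-1]]
    for f in reversed(feats[:-1]):
        up = np.kron(merged[0], np.ones((2, 2)))  # 2x nearest-neighbor upsample
        merged.insert(0, f + up[:f.shape[0], :f.shape[1]])
    return merged

pyramid = bottom_up(np.ones((64, 64)))
fused = top_down(pyramid)  # every level now carries context from coarser levels
```

Prediction then runs on every level of `fused`, which is what lets small lesions survive: they remain several pixels wide on the finest map even after they have effectively vanished from the coarsest one.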
Step S330: and obtaining a plurality of lesion candidate regions with different sizes and different aspect ratios in each feature map through a region generation network.
A lesion candidate region is a candidate box (called an anchor) that may belong to a lesion region. The embodiments of the application can adopt a two-stage object detection algorithm, which first generates a series of lesion candidate regions of different sizes and aspect ratios and then classifies the samples with a convolutional neural network. Two-stage detectors such as Mask R-CNN, Cascade Mask R-CNN, GCNet, HRNetv2p, and Scratch may be used. The region generation network may be an RPN (Region Proposal Network), the first stage of two-stage detection, used to propose target regions.
The region generation network slides a 3×3 window over each feature map output by the feature pyramid network and, centered on each window position, generates 9 lesion candidate regions of different sizes and aspect ratios. The sizes include 128 × 128, 256 × 256, and 512 × 512; the aspect ratios include 1:2, 1:1, and 2:1, giving 9 candidate regions in total. Statistical analysis of the 4 characteristic lesion types representing CNV activity shows that each lesion's aspect ratio spans a large range (from 3:1 to 1:10), exceeding the conventional anchor ratios (1:2, 1:1, 2:1) and causing the RPN to miss detections; for this situation, the scheme adds 3:1, 1:3, 1:5, 1:7, and 1:9 on top of the conventional ratios to increase the RPN's detection rate.
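The anchor layout just described can be sketched directly. The standard RPN convention — each ratio applied at constant area — is assumed here, since the patent lists only the size and ratio values:

```python
import itertools

def make_anchors(cx, cy,
                 sizes=(128, 256, 512),
                 ratios=((1, 2), (1, 1), (2, 1))):
    """Generate len(sizes) x len(ratios) anchor boxes (x1, y1, x2, y2)
    centered at (cx, cy); each anchor keeps an area of size**2 while
    matching its width:height ratio."""
    anchors = []
    for size, (rw, rh) in itertools.product(sizes, ratios):
        scale = (size * size / (rw * rh)) ** 0.5
        w, h = rw * scale, rh * scale
        anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

# The conventional 3 ratios give 9 anchors per position; adding the extended
# ratios 3:1, 1:3, 1:5, 1:7 and 1:9 from the description gives 24.
base = make_anchors(0, 0)
extended = make_anchors(0, 0, ratios=((1, 2), (1, 1), (2, 1),
                                      (3, 1), (1, 3), (1, 5), (1, 7), (1, 9)))
```

The constant-area convention keeps every anchor of a given size comparable regardless of its elongation, which matters for the very elongated 1:7 and 1:9 lesions.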
Step S340: a lesion region representing CNV activity is determined from the plurality of lesion candidate regions by a classification regression network.
The feature pyramid network, the area generation network and the classification regression network can be obtained through training of a new data set formed by combining the first sample image and the second sample image. Based on the output of the classification regression network using the image features of the lesion candidate region as input, it can be determined whether the lesion candidate region belongs to a lesion region representing CNV activity.
In an embodiment, after step S340, the method provided in the embodiments of the application further includes: mapping the coordinates of the lesion region representing CNV activity to obtain the lesion position in the optical coherence tomography image; and outputting an optical coherence tomography image with the lesion position marked.
For example, according to the mapping relationship between the pixel coordinates in the feature map and the corresponding pixel coordinates in the optical coherence tomography image, the coordinates of the lesion region representing the CNV activity can be mapped to obtain the lesion position in the optical coherence tomography image (i.e., the original image), so that the original image with the detection result (i.e., the unknown lesion) can be output.
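In the simple case where the model input is the cropped scan region, this mapping reduces to shifting each detected box by the crop origin (a sketch; any feature-map-to-input scaling is assumed to already be resolved inside the detector):

```python
def map_box_to_original(box, crop_origin):
    """Translate a lesion box (x1, y1, x2, y2) from cropped-image
    coordinates back to original OCT image coordinates by adding
    the crop offset (dx, dy)."""
    x1, y1, x2, y2 = box
    dx, dy = crop_origin
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
```

For example, a box detected at (10, 20, 50, 60) in a crop whose origin sits at x = 256 of the original image maps to (266, 20, 306, 60).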
In this embodiment, the feature pyramid network, the region generation network, and the classification regression network together form a target detection model. The model can be obtained by preprocessing the sample dataset and then training, after which it is deployed. In an embodiment, the target detection model may be obtained by parameter tuning of a pre-trained model.
The pre-trained model is trained on natural images, which differ from medical images in target size, height, width, and semantic information; when detecting and segmenting medical images, different network structures are trained on the specific dataset and the relevant parameters are tuned. Natural images here mean non-medical images.
Fig. 4 is a schematic diagram of a training process of a target detection model according to an embodiment of the present application. As shown in fig. 4, the process may include the following steps S410 to S440.
Step S410: an original scan image of known lesion coordinates is acquired.
The original scan image is an optical coherence tomography image with known lesion coordinates. The original scanning image can be stored locally in advance, and can also be acquired from a client.
Step S420: cropping a scanning area from the original scan image as a first sample image, and updating the lesion coordinates in the first sample image.
The first sample image is the image obtained by cropping the scanning area from the original OCT image. For example, one vertex of the image serves as the coordinate origin, with the length and width directions as the x- and y-axes; because the original scan image is cropped, the origin changes, so a coordinate system is established on the cropped image to obtain the new lesion coordinates. Cropping focuses the model's detection on the region of interest and further improves the detection effect.
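The coordinate update amounts to a single translation: the crop origin becomes the new (0, 0), so every labelled box shifts by minus that origin (the numbers in the example are hypothetical):

```python
def update_lesion_after_crop(box, crop_origin):
    """Re-express a labelled lesion box (x1, y1, x2, y2) in the cropped
    image's coordinate system; the crop origin becomes the new (0, 0)."""
    x1, y1, x2, y2 = box
    ox, oy = crop_origin
    return (x1 - ox, y1 - oy, x2 - ox, y2 - oy)

# A lesion annotated at x = 300..360 of the original image, cropped at x = 256.
new_box = update_lesion_after_crop((300, 40, 360, 90), crop_origin=(256, 0))
```

This is exactly the inverse of the map-back step used at inference time, so a round trip through both leaves the coordinates unchanged.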
Step S430: and performing image enhancement on the first sample image to obtain a second sample image, and updating the lesion coordinates of the second sample image.
The second sample image is an image obtained by performing image enhancement on the first sample image by adopting multiple enhancement modes.
Image enhancement here means changing the first sample image by brightness change and random rotation; the changed image is called the second sample image, and a new coordinate system is established on it to update the lesion coordinates. Data enhancement is generally used when data is scarce; it effectively increases the model's generalization ability and prevents overfitting.
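A brightness change leaves the lesion coordinates untouched, but a random rotation requires transforming them. One common convention — an assumption here, since the patent does not give the formula — is to rotate the box's four corners about the image center and take the axis-aligned box that encloses them:

```python
import math

def rotate_point(x, y, cx, cy, deg):
    """Rotate (x, y) about (cx, cy) by deg degrees (counter-clockwise)."""
    t = math.radians(deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(t) - dy * math.sin(t),
            cy + dx * math.sin(t) + dy * math.cos(t))

def rotate_box(box, center, deg):
    """Rotate a lesion box's four corners and return the axis-aligned
    bounding box that encloses the rotated corners."""
    x1, y1, x2, y2 = box
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    pts = [rotate_point(px, py, center[0], center[1], deg) for px, py in corners]
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))

# Rotating a square box about the image center by 90 degrees maps it onto itself.
box_90 = rotate_box((0, 0, 10, 10), center=(5, 5), deg=90)
```

For non-square boxes and non-right angles the enclosing box grows slightly; that is the usual trade-off of keeping labels axis-aligned after rotation.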
Step S440: the target detection model is trained using a first sample image and a second sample image of known lesion coordinates.
The first sample image and the second sample image with known lesion coordinates are merged into a new data set, and the target detection model is trained by using the new data set.
Fig. 5 is a schematic diagram of a complete training process of the target detection model according to an embodiment of the present application.
Step S510: an original image is acquired.
The original image refers to an original scanned image with known lesion coordinates.
Step S520: and (4) preprocessing data.
FIG. 6 shows the data preprocessing process which, as shown in fig. 6, includes image cropping and image enhancement. The labelled original data is cropped to the scanning area and the label coordinates are transformed, yielding the first sample image. The first sample image is then enhanced in several ways, such as brightness change and random rotation; after a brightness change the original label is copied directly, while after random rotation the label coordinates are transformed. All images produced by enhancement are collectively called the second sample image. The first and second sample images are merged into a new dataset for model training.
Step S530: and (5) training and testing the model.
Wherein, model training is mainly performed by performing parameter tuning in a pre-training model.
The pre-trained model is trained on natural images (i.e., non-medical images), which differ from medical images in target size, height, width, and semantic information. When detecting and segmenting medical images, different network structures are trained on the dataset obtained in step S520 and the relevant parameters are tuned.
After the relevant parameters are tuned, the model is tested to obtain its detection rate and accuracy.
Step S540: and (6) deploying the model.
In step S530, OCT B-scan images are used as model input during training; in actual application, however, both the input image and the output image are original OCT images.
At deployment, the OCT B-scan area is first cropped from the original OCT image; the cropped area is fed to the model for detection; after detection, the result coordinates are mapped back to the original image; and the original OCT image annotated with the detection results is returned.
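The three deployment steps can be sketched end to end; `fake_detector` below is a stand-in stub for the trained model, and the region numbers are hypothetical:

```python
def deploy(oct_image, detector, scan_region):
    """Crop the B-scan region, run detection on the crop, and map every
    detected box back to original-image coordinates."""
    x0, y0, w, h = scan_region
    crop = [row[x0:x0 + w] for row in oct_image[y0:y0 + h]]
    boxes = detector(crop)  # [(x1, y1, x2, y2, score), ...] in crop coordinates
    return [(x1 + x0, y1 + y0, x2 + x0, y2 + y0, s)
            for x1, y1, x2, y2, s in boxes]

# Stub detector that "finds" one lesion in the crop.
def fake_detector(crop):
    return [(10, 20, 60, 50, 0.9)]

image = [[0] * 1024 for _ in range(640)]
results = deploy(image, fake_detector, scan_region=(256, 0, 768, 640))
```

Because cropping and map-back are exact inverses, the returned boxes land on the original OCT image at the positions the model saw in the crop.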
Fig. 7 is a block diagram of an image-based CNV active lesion detection apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: an image acquisition module 710, a feature extraction module 720, a candidate acquisition module 730, and a classification regression module 740.
An image acquisition module 710, configured to acquire an optical coherence tomography image to be detected;
a feature extraction module 720, configured to extract a plurality of feature maps with different sizes from the optical coherence tomography image through a feature pyramid network;
a candidate obtaining module 730, configured to obtain a plurality of lesion candidate regions with different sizes and different aspect ratios in each feature map through a region generation network;
a classification regression module 740 for determining a lesion region representing CNV activity from the plurality of lesion candidate regions through a classification regression network.
Fig. 8 is a block diagram of an image-based CNV active lesion detection apparatus according to another embodiment of the present application based on the corresponding embodiment of fig. 7. As shown in fig. 8, the apparatus includes: a sample acquisition module 810, a sample truncation module 820, a sample variation module 830, and a model training module 840.
A sample acquiring module 810, configured to acquire an original scanning image of a known lesion coordinate;
a sample intercepting module 820, configured to intercept a scanning region from the original scanning image as a first sample image, and update coordinates of a lesion in the first sample image;
a sample change module 830, configured to perform image enhancement on the first sample image to obtain a second sample image, and update a lesion coordinate of the second sample image;
a model training module 840 for training the target detection model using the first sample image and the second sample image with known lesion coordinates.
The implementation processes of the functions and actions of the modules in the device are specifically described in the implementation processes of the corresponding steps in the image-based CNV active lesion detection method, and are not described herein again.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, or the portion thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Claims (10)
1. An image-based CNV active lesion detection method, comprising:
acquiring an optical coherence tomography image to be detected;
extracting, through a feature pyramid network, a plurality of feature maps of different sizes from the optical coherence tomography image;
obtaining, through a region generation network, a plurality of lesion candidate regions of different sizes and different aspect ratios in each feature map; and
determining, through a classification regression network, a lesion region representing CNV activity from the plurality of lesion candidate regions.
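The pipeline of claim 1 resembles a Faster R-CNN-style detector. As an illustrative sketch only (the patent gives no code, and the function names below are hypothetical), the final step — keeping high-scoring lesion candidates and suppressing overlapping ones — might look like:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def select_lesion_regions(boxes, scores, score_thresh=0.5, iou_thresh=0.3):
    """Keep candidates above the score threshold, then apply
    non-maximum suppression to drop near-duplicate boxes."""
    keep = []
    for i in np.argsort(scores)[::-1]:
        if scores[i] < score_thresh:
            break
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(int(i))
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 160, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(select_lesion_regions(boxes, scores))  # -> [0, 2] (second box suppressed)
```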
2. The method of claim 1, wherein before the extracting of the plurality of feature maps of different sizes from the optical coherence tomography image through the feature pyramid network, the method further comprises:
cropping the optical coherence tomography image according to a preset scanning region.
3. The method of claim 1, wherein the different sizes include 128×128, 256×256, and 512×512; the different aspect ratios include 1:2, 1:1, and 2:1; and the number of lesion candidate regions is 9.
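The three sizes and three aspect ratios of claim 3 combine into 3 × 3 = 9 anchor boxes per feature-map location, as in a standard region proposal network. A minimal anchor-generation sketch (illustrative only; `make_anchors` is a hypothetical helper, and the ratio here is taken as height/width):

```python
import numpy as np

def make_anchors(sizes=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Generate one anchor per (size, ratio) pair, centred at the origin.

    Each anchor preserves the area size*size while its height/width
    follows `ratios`, giving len(sizes) * len(ratios) = 9 candidate
    shapes per feature-map location.
    """
    anchors = []
    for size in sizes:
        area = float(size * size)
        for ratio in ratios:
            w = np.sqrt(area / ratio)   # width shrinks as ratio grows
            h = w * ratio               # keeps h/w == ratio, w*h == area
            anchors.append((-w / 2, -h / 2, w / 2, h / 2))
    return np.array(anchors)

print(make_anchors().shape)  # (9, 4)
```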
4. The method of claim 1, wherein the feature pyramid network, the region generation network, and the classification regression network together form a target detection model; and before the acquiring of the optical coherence tomography image to be detected, the method further comprises:
acquiring an original scan image with known lesion coordinates;
cropping a scanning region from the original scan image as a first sample image, and updating the lesion coordinates in the first sample image;
performing image enhancement on the first sample image to obtain a second sample image, and updating the lesion coordinates of the second sample image; and
training the target detection model using the first sample image and the second sample image with known lesion coordinates.
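The sample-preparation step of claim 4 — crop the scan region, then re-express the lesion coordinates in the crop's frame — could be sketched as follows (a hypothetical helper, not the patent's implementation; boxes are (x1, y1, x2, y2) in pixels):

```python
import numpy as np

def crop_with_labels(image, lesion_boxes, crop):
    """Crop a scan region and shift the lesion boxes into the crop's frame.

    `crop` is (x1, y1, x2, y2) in the original image's coordinates;
    subtracting the crop origin updates each lesion box accordingly.
    """
    cx1, cy1, cx2, cy2 = crop
    patch = image[cy1:cy2, cx1:cx2]
    shifted = [(x1 - cx1, y1 - cy1, x2 - cx1, y2 - cy1)
               for (x1, y1, x2, y2) in lesion_boxes]
    return patch, shifted

image = np.zeros((500, 800), dtype=np.uint8)  # stand-in for an OCT B-scan
patch, boxes = crop_with_labels(image, [(300, 120, 360, 180)], (200, 100, 700, 400))
print(patch.shape, boxes)  # (300, 500) [(100, 20, 160, 80)]
```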
5. The method of claim 4, wherein the image enhancement of the first sample image comprises:
performing a brightness change or a random rotation on the first sample image.
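The two augmentations of claim 5 can be sketched as below. This is illustrative only: a real pipeline would draw the mode and factor at random and support arbitrary rotation angles, whereas this sketch shows a brightness scale and a 180-degree rotation so the lesion-box update stays simple.

```python
import numpy as np

def augment(image, boxes, mode, factor=1.2):
    """Apply one augmentation and keep the lesion boxes consistent.

    mode='brightness' scales pixel values (boxes unchanged);
    mode='rotate' turns the image 180 degrees and rotates each
    (x1, y1, x2, y2) box with it.
    """
    if mode == 'brightness':
        out = np.clip(image.astype(float) * factor, 0, 255).astype(np.uint8)
        return out, boxes
    h, w = image.shape[:2]
    out = np.rot90(image, 2).copy()  # 180-degree rotation
    rotated = [(w - x2, h - y2, w - x1, h - y1) for (x1, y1, x2, y2) in boxes]
    return out, rotated

img = np.full((100, 200), 100, dtype=np.uint8)
_, rotated = augment(img, [(10, 20, 30, 40)], mode='rotate')
print(rotated)  # [(170, 60, 190, 80)]
```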
6. The method of claim 1, wherein after the determining of the lesion region representing CNV activity from the plurality of lesion candidate regions through the classification regression network, the method further comprises:
mapping the coordinates of the lesion region representing CNV activity to obtain the lesion position in the optical coherence tomography image; and
outputting the optical coherence tomography image with the lesion position marked.
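If the detector ran on a cropped scan region (claim 2), the mapping step of claim 6 amounts to undoing the crop offset. A minimal sketch under that assumption (`map_to_original` is a hypothetical helper):

```python
def map_to_original(box, crop_origin):
    """Map a lesion box (x1, y1, x2, y2) from the cropped scan region
    back to the full optical coherence tomography image by adding the
    crop's top-left offset."""
    ox, oy = crop_origin
    x1, y1, x2, y2 = box
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

print(map_to_original((100, 20, 160, 80), (200, 100)))  # (300, 120, 360, 180)
```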
7. An image-based CNV active lesion detection apparatus, comprising:
an image acquisition module, configured to acquire an optical coherence tomography image to be detected;
a feature extraction module, configured to extract, through a feature pyramid network, a plurality of feature maps of different sizes from the optical coherence tomography image;
a candidate obtaining module, configured to obtain, through a region generation network, a plurality of lesion candidate regions of different sizes and different aspect ratios in each feature map; and
a classification regression module, configured to determine, through a classification regression network, a lesion region representing CNV activity from the plurality of lesion candidate regions.
8. The apparatus of claim 7, wherein the feature pyramid network, the region generation network, and the classification regression network together form a target detection model, the apparatus further comprising:
a sample acquisition module, configured to acquire an original scan image with known lesion coordinates;
a sample cropping module, configured to crop a scanning region from the original scan image as a first sample image and update the lesion coordinates in the first sample image;
a sample change module, configured to perform image enhancement on the first sample image to obtain a second sample image and update the lesion coordinates of the second sample image; and
a model training module, configured to train the target detection model using the first sample image and the second sample image with known lesion coordinates.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image-based CNV active lesion detection method of any one of claims 1 to 6.
10. A computer-readable storage medium, wherein the storage medium stores a computer program executable by a processor to perform the image-based CNV active lesion detection method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011464273.3A CN112435256A (en) | 2020-12-11 | 2020-12-11 | CNV active focus detection method and device based on image and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112435256A true CN112435256A (en) | 2021-03-02 |
Family
ID=74692154
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107451998A (en) * | 2017-08-08 | 2017-12-08 | 北京大恒普信医疗技术有限公司 | A kind of eye fundus image method of quality control |
CN109087302A (en) * | 2018-08-06 | 2018-12-25 | 北京大恒普信医疗技术有限公司 | A kind of eye fundus image blood vessel segmentation method and apparatus |
CN110148111A (en) * | 2019-04-01 | 2019-08-20 | 江西比格威医疗科技有限公司 | The automatic testing method of a variety of retina lesions in a kind of retina OCT image |
CN110490860A (en) * | 2019-08-21 | 2019-11-22 | 北京大恒普信医疗技术有限公司 | Diabetic retinopathy recognition methods, device and electronic equipment |
WO2020143309A1 (en) * | 2019-01-09 | 2020-07-16 | 平安科技(深圳)有限公司 | Segmentation model training method, oct image segmentation method and apparatus, device and medium |
CN111667468A (en) * | 2020-05-28 | 2020-09-15 | 平安科技(深圳)有限公司 | OCT image focus detection method, device and medium based on neural network |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113598703A (en) * | 2021-07-06 | 2021-11-05 | 温州医科大学附属眼视光医院 | Choroidal neovascularization activity quantification method based on boundary fuzzy degree |
CN113598703B (en) * | 2021-07-06 | 2024-02-20 | 温州医科大学附属眼视光医院 | Choroidal neovascularization activity quantification method based on boundary blurring degree |
CN114463323A (en) * | 2022-02-22 | 2022-05-10 | 数坤(北京)网络科技股份有限公司 | Focal region identification method and device, electronic equipment and storage medium |
CN114463323B (en) * | 2022-02-22 | 2023-09-08 | 数坤(上海)医疗科技有限公司 | Focal region identification method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111046880B (en) | Infrared target image segmentation method, system, electronic equipment and storage medium | |
CN114418957B (en) | Global and local binary pattern image crack segmentation method based on robot vision | |
CN107480649B (en) | Fingerprint sweat pore extraction method based on full convolution neural network | |
Hosseini et al. | Encoding visual sensitivity by maxpol convolution filters for image sharpness assessment | |
CN113011385B (en) | Face silence living body detection method, face silence living body detection device, computer equipment and storage medium | |
CN108830225B (en) | Method, device, equipment and medium for detecting target object in terahertz image | |
CN111368758A (en) | Face ambiguity detection method and device, computer equipment and storage medium | |
CN112750121B (en) | System and method for detecting digital image quality of pathological slide | |
CN112435256A (en) | CNV active focus detection method and device based on image and electronic equipment | |
CN112101386B (en) | Text detection method, device, computer equipment and storage medium | |
CN114170227B (en) | Product surface defect detection method, device, equipment and storage medium | |
CN114041163A (en) | Method, processing system and computer program product for restoring high resolution images | |
JP2009134587A (en) | Image processing device | |
CN112380926A (en) | Weeding path planning system of field weeding robot | |
CN115100104A (en) | Defect detection method, device and equipment for glass ink area and readable storage medium | |
CN113780110A (en) | Method and device for detecting weak and small targets in image sequence in real time | |
CN112164030A (en) | Method and device for quickly detecting rice panicle grains, computer equipment and storage medium | |
CN116052105A (en) | Pavement crack identification classification and area calculation method, system, equipment and terminal | |
WO2023019793A1 (en) | Determination method, cleaning robot, and computer storage medium | |
CN105205485B (en) | Large scale image partitioning algorithm based on maximum variance algorithm between multiclass class | |
CN115995078A (en) | Image preprocessing method and system for plankton in-situ observation | |
WO2014001157A1 (en) | Efficient scanning for em based target localization | |
CN112818983A (en) | Method for judging character inversion by using picture acquaintance | |
CN116258643A (en) | Image shadow eliminating method, device, equipment and storage medium | |
JPH1125222A (en) | Method and device for segmenting character |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||