CN114092464B - OCT image processing method and device

OCT image processing method and device

Info

Publication number
CN114092464B
Authority
CN
China
Prior art keywords
layering
image
pixel points
target
processing
Prior art date
Legal status
Active
Application number
CN202111435331.4A
Other languages
Chinese (zh)
Other versions
CN114092464A (en)
Inventor
叶重荣
区初斌
安林
秦嘉
韦喜飞
Current Assignee
Guangdong Weiren Medical Technology Co ltd
Weizhi Medical Technology Foshan Co ltd
Original Assignee
Guangdong Weiren Medical Technology Co ltd
Weizhi Medical Technology Foshan Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Weiren Medical Technology Co ltd and Weizhi Medical Technology Foshan Co ltd
Priority to CN202111435331.4A
Publication of CN114092464A
Application granted
Publication of CN114092464B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation or edge detection involving region growing, region merging, or connected component labelling
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an OCT image processing method and device. The method comprises: acquiring a B-Scan image corresponding to a target feature; performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result; and inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result. The output result comprises a probability corresponding to each pixel point in a target area of the B-Scan image, where the probability represents the likelihood that the pixel point belongs to the interlayer boundary between two adjacent layers included in the initial layering result, and the target area is the area that includes the target feature. Interlayer boundary information corresponding to the target feature in the B-Scan image is then determined according to the probabilities corresponding to all the pixel points. The method thus performs image layering on the acquired B-Scan image in combination with a deep neural network model, which helps to improve both the image layering efficiency and the accuracy of the image layering result.

Description

OCT image processing method and device
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to an OCT image processing method and device.
Background
With the rapid development of computer and medical technology, OCT (Optical Coherence Tomography) has been widely used in diagnostic apparatuses for fundus diseases, and is of great significance for the detection of fundus diseases, for experiments and for the compilation of teaching materials. OCT is a high-sensitivity, high-resolution, high-speed, non-invasive tomographic imaging modality that uses the coherence of light to image the fundus. Each individual scan is called an A-scan; multiple adjacent, consecutive A-scans are combined to form a B-scan image, commonly viewed as an OCT cross-section (which can also be understood as an OCT image), the most important OCT imaging mode in medical diagnosis.
In practical applications, the diagnosis of fundus diseases by a diagnostic apparatus generally depends on the layering result of a target feature obtained by layering the OCT image, such as a retinal layering result. In practice, however, it has been found that OCT image layering methods based on conventional techniques such as histogram, boundary and region layering obtain the target-feature layering result slowly, even though the algorithms are robust. It is therefore important to provide an OCT image layering algorithm that improves layering efficiency while preserving algorithm performance.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an OCT image processing method and device which, in combination with a deep neural network model, improve the layering efficiency of OCT images and the accuracy of the layering result while preserving the performance of the OCT image layering algorithm.
To solve the above technical problem, a first aspect of the present invention discloses a method for processing an OCT image, the method comprising:
Acquiring a B-Scan image corresponding to the target feature;
Performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
Inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises a probability corresponding to each pixel point in a target area of the B-Scan image, the probability corresponding to each pixel point represents the likelihood that the pixel point belongs to the interlayer boundary between two adjacent layers included in the initial layering result, and the target area is an area that includes the target feature;
and determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
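To make the claimed flow concrete, the following is a minimal Python sketch of the four steps. Every name in it is an illustrative assumption rather than the patent's implementation; the two helpers are stubs standing in for the preset layering algorithm and the trained network that are detailed below.

```python
# Minimal sketch of the claimed four-step flow. All names are illustrative
# assumptions; the two helpers are stubs for the cost-path layering and the
# trained deep neural network described in the embodiments below.
import numpy as np

def initial_layering(b_scan: np.ndarray) -> list[np.ndarray]:
    # Stub for step 2: the preset image processing algorithm would return
    # one row index per column for each of the three layering lines.
    h, w = b_scan.shape
    return [np.full(w, h // 4), np.full(w, h // 2), np.full(w, 3 * h // 4)]

def boundary_probabilities(b_scan: np.ndarray) -> np.ndarray:
    # Stub for step 3: the pre-trained model would return an (H, W) map of
    # per-pixel probabilities of lying on an interlayer boundary.
    return np.random.default_rng(0).random(b_scan.shape)

def process_b_scan(b_scan: np.ndarray) -> np.ndarray:
    layers = initial_layering(b_scan)        # step 2: initial layering result
    probs = boundary_probabilities(b_scan)   # step 3: network output result
    # (the initial layering result is used later to bound the target area)
    # Step 4, simplified: take the most probable boundary row per column;
    # the detailed embodiment uses a Softmax-weighted dot product instead.
    return probs.argmax(axis=0)

boundary = process_b_scan(np.zeros((496, 512)))  # e.g. a 496x512 B-Scan
```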
In a first aspect of the present invention, the performing, by a preset image processing algorithm, image layering processing on the B-Scan image to obtain an initial layering result includes:
performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
calculating a positive gradient of the filtered image in the vertical direction of the image, and constructing a first cost function according to the positive gradient;
determining a first minimum cost path from the left edge to the right edge of the filtered image according to a predetermined path algorithm and the first cost function, to obtain a first layering line;
determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function, to obtain a second layering line;
calculating the negative gradient of the filtered image in the vertical direction of the image, and constructing a second cost function according to the negative gradient;
determining a search area, wherein the search area is the area below whichever of the first layering line and the second layering line lies lower in the image;
determining a third minimum cost path of the search area from its left edge to its right edge according to the path algorithm and the second cost function, and performing a smoothing filter operation on the third minimum cost path to obtain a third layering line;
determining the first layering line, the second layering line and the third layering line as initial layering results;
Further, before the second minimum cost path from the left edge to the right edge of the filtered image is determined according to the path algorithm and the first cost function to obtain the second layering line, the method further includes:
marking the first minimum cost path as an unreachable path in the filtered image.
As an optional implementation manner, in the first aspect of the present invention, before the inputting the B-Scan image into the pre-trained deep neural network model to obtain the output result, the method further includes:
determining a target area comprising the target feature in the B-Scan image;
Wherein the determining the target area including the target feature in the B-Scan image includes:
shifting upwards, by a first preset distance in the vertical direction of the image, whichever of the first layering line and the second layering line lies higher, to obtain a first boundary line;
Shifting the third layering line downwards by a second preset distance in the vertical direction of the image to obtain a second boundary line;
and determining the area below the first boundary line and above the second boundary line as a target area comprising the target feature in the B-Scan image.
In an optional implementation manner of the first aspect of the present invention, after the B-Scan image is input into the pre-trained deep neural network model to obtain the output result, and before the interlayer boundary information corresponding to the target feature in the B-Scan image is determined according to the probabilities corresponding to all the pixel points, the method further includes:
judging whether any target pixel point falling outside the target area exists among all the pixel points;
when it is judged that no target pixel point falling outside the target area exists among all the pixel points, triggering execution of the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
when it is judged that target pixel points falling outside the target area exist among all the pixel points, performing a probability update operation on all the target pixel points falling outside the target area so as to update the probabilities corresponding to those target pixel points, and then triggering execution of the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
In an optional implementation manner, in the first aspect of the present invention, the determining whether there is a target pixel point that falls outside the target area in all the pixel points includes:
For each column of pixel points in all the pixel points, judging whether target pixel points falling outside the target area exist in the column of pixel points;
And performing a probability update operation on all the target pixel points falling outside the target area to update the probabilities corresponding to all the target pixel points falling outside the target area, including:
and for each column of pixel points among all the pixel points, if a target pixel point falling outside the target area exists in the column, multiplying the probability of each such target pixel point by the preset value corresponding to that pixel point to obtain a product result, and updating the probability corresponding to the target pixel point according to the product result.
In an optional implementation manner, in a first aspect of the present invention, the determining, according to probabilities corresponding to all the pixel points, interlayer boundary information corresponding to the target feature in the B-Scan image includes:
for each column of pixel points in all the pixel points, carrying out normalization processing on the probability distribution of the column of pixel points to obtain normalized probability distribution of the column of pixel points;
for each column of pixel points among all the pixel points, performing a dot product between the normalized probability distribution of the column and the row-number distribution corresponding to the column, to obtain the interlayer distribution result corresponding to the column;
and determining interlayer boundary information corresponding to the target feature in the B-Scan image according to interlayer distribution results corresponding to each column of pixel points in all the pixel points.
As an alternative embodiment, in the first aspect of the present invention, the deep neural network model is trained by:
Acquiring a B-Scan image set including annotation information, wherein the annotation information corresponding to each B-Scan image in the set comprises label information corresponding to the target feature and boundary information corresponding to the target feature;
Dividing the B-Scan image set to obtain a training set and a testing set, wherein the training set is used for training a deep neural network model, and the testing set is used for verifying the reliability of the trained deep neural network model;
Performing target processing operations on all B-Scan images included in the training set to obtain processing results, wherein the target processing operations comprise at least one of up-and-down translation, left-right flipping, up-down flipping and contrast adjustment;
Inputting the processing result as input data into a predetermined deep neural network model to obtain an output result;
calculating a joint loss from the output result, the B-Scan images included in the training set and the boundary information, to obtain a joint loss value;
back-propagating the joint loss value through the deep neural network model and performing iterative training for a preset number of epochs, to obtain the trained deep neural network model.
As an alternative embodiment, in the first aspect of the present invention, the target feature is a retinal feature.
The second aspect of the present invention discloses an OCT image processing apparatus, the apparatus comprising:
The acquisition module is used for acquiring the B-Scan image corresponding to the target feature;
The first processing module is used for executing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
The second processing module is used for inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises a probability corresponding to each pixel point in a target area of the B-Scan image, the probability corresponding to each pixel point represents the likelihood that the pixel point belongs to the interlayer boundary between two adjacent layers included in the initial layering result, and the target area is an area that includes the target feature;
and the first determining module is used for determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
As an alternative embodiment, in a second aspect of the present invention, the first processing module includes:
The filtering sub-module is used for executing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
The function construction submodule is used for calculating positive gradients of the filtered image in the vertical direction of the image and constructing a first cost function according to the positive gradients;
The first determining submodule is used for determining a first minimum cost path from the left edge to the right edge of the filtered image according to a predetermined path algorithm and the first cost function, to obtain a first layering line;
The first determining submodule is further used for determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function, to obtain a second layering line;
The function construction submodule is further used for calculating the negative gradient of the filtered image in the vertical direction of the image and constructing a second cost function according to the negative gradient;
The second determining submodule is used for determining a search area, wherein the search area is the area below whichever of the first layering line and the second layering line lies lower in the image;
the first determining submodule is further used for determining a third minimum cost path of the search area from its left edge to its right edge according to the path algorithm and the second cost function, and performing a smoothing filter operation on the third minimum cost path to obtain a third layering line;
the second determining submodule is further used for determining the first layering line, the second layering line and the third layering line as initial layering results;
And, the first processing module further comprises:
And the marking sub-module is used for marking the first path as an unreachable path in the filtered image before the first determining sub-module determines a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain a second layering line.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further includes:
The second determining module is used for determining a target area comprising the target feature in the B-Scan image before the second processing module inputs the B-Scan image into a pre-trained deep neural network model to obtain an output result;
The second determining module determines a target area including the target feature in the B-Scan image specifically includes:
shifting upwards, by a first preset distance in the vertical direction of the image, whichever of the first layering line and the second layering line lies higher, to obtain a first boundary line;
Shifting the third layering line downwards by a second preset distance in the vertical direction of the image to obtain a second boundary line;
and determining the area below the first boundary line and above the second boundary line as a target area comprising the target feature in the B-Scan image.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further includes:
The judging module is used for: after the second processing module inputs the B-Scan image into the pre-trained deep neural network model to obtain the output result, and before the first determining module determines the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points, judging whether any target pixel point falling outside the target area exists among all the pixel points; and, when it is judged that no target pixel point falling outside the target area exists among all the pixel points, triggering the first determining module to execute the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
The third processing module is used for performing, when the judging module judges that target pixel points falling outside the target area exist among all the pixel points, a probability update operation on all the target pixel points falling outside the target area so as to update the probabilities corresponding to those target pixel points, and then triggering the first determining module to execute the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
In the second aspect of the present invention, the manner in which the judging module judges whether any target pixel point falling outside the target area exists among all the pixel points specifically includes:
For each column of pixel points in all the pixel points, judging whether target pixel points falling outside the target area exist in the column of pixel points;
And the third processing module performs a probability updating operation on all the target pixel points falling outside the target area, so as to update the probabilities corresponding to all the target pixel points falling outside the target area, wherein the method specifically comprises the following steps:
and for each column of pixel points among all the pixel points, if a target pixel point falling outside the target area exists in the column, multiplying the probability of each such target pixel point by the preset value corresponding to that pixel point to obtain a product result, and updating the probability corresponding to the target pixel point according to the product result.
In the second aspect of the present invention, the manner in which the first determining module determines the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points specifically includes:
for each column of pixel points in all the pixel points, carrying out normalization processing on the probability distribution of the column of pixel points to obtain normalized probability distribution of the column of pixel points;
for each column of pixel points among all the pixel points, performing a dot product between the normalized probability distribution of the column and the row-number distribution corresponding to the column, to obtain the interlayer distribution result corresponding to the column;
and determining interlayer boundary information corresponding to the target feature in the B-Scan image according to interlayer distribution results corresponding to each column of pixel points in all the pixel points.
As an alternative embodiment, in the second aspect of the present invention, the deep neural network model is trained by:
Acquiring a B-Scan image set including annotation information, wherein the annotation information corresponding to each B-Scan image in the set comprises label information corresponding to the target feature and boundary information corresponding to the target feature;
Dividing the B-Scan image set to obtain a training set and a testing set, wherein the training set is used for training a deep neural network model, and the testing set is used for verifying the reliability of the trained deep neural network model;
Performing target processing operations on all B-Scan images included in the training set to obtain processing results, wherein the target processing operations comprise at least one of up-and-down translation, left-right flipping, up-down flipping and contrast adjustment;
Inputting the processing result as input data into a predetermined deep neural network model to obtain an output result;
calculating a joint loss from the output result, the B-Scan images included in the training set and the boundary information, to obtain a joint loss value;
back-propagating the joint loss value through the deep neural network model and performing iterative training for a preset number of epochs, to obtain the trained deep neural network model.
As an alternative embodiment, in the second aspect of the present invention, the target feature is a retinal feature.
A third aspect of the present invention discloses another OCT image processing apparatus, including:
a memory storing executable program code;
a processor coupled to the memory;
an input interface and an output interface coupled to the processor;
The processor invokes the executable program code stored in the memory to perform the OCT image processing method disclosed in the first aspect of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
The embodiments of the invention provide an OCT image processing method and device. The method comprises: acquiring a B-Scan image corresponding to a target feature; performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result; inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, where the output result comprises a probability corresponding to each pixel point in a target area of the B-Scan image, the probability represents the likelihood that the pixel point belongs to the interlayer boundary between two adjacent layers included in the initial layering result, and the target area is the area that includes the target feature; and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points. Implementing the method thus allows a B-Scan image including the target feature to be acquired intelligently, which helps to improve the classification efficiency of B-Scan images; and the interlayer boundary information corresponding to the target feature in the B-Scan image can be determined intelligently by combining the deep neural network model with the initial layering result, improving both the image layering efficiency and the accuracy of the image layering result.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a processing method of an OCT image according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for processing OCT images according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an OCT image processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another OCT image processing device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another OCT image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The terms "first", "second" and the like in the description, in the claims and in the above-described figures are used to distinguish between different objects and not necessarily to describe a sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, product or device that comprises a list of steps or elements is not limited to the listed steps or elements, but may optionally include other steps or elements not listed or inherent to it.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses an OCT image processing method and device. By intelligently acquiring a B-Scan image that includes the target feature, the classification efficiency of B-Scan images is improved; and by combining the deep neural network model with the initial layering result, the interlayer boundary information corresponding to the target feature in the B-Scan image can be determined intelligently, improving both the image layering efficiency and the accuracy of the image layering result. These are described in detail below.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of an OCT image processing method according to an embodiment of the present invention. The method described in fig. 1 may be applied to the layering of retinal B-Scan images; the layering result it produces may be used in the compilation of medical teaching materials or as auxiliary data for retinal research, which the embodiment of the present invention does not limit. As shown in fig. 1, the OCT image processing method may include the following operations:
101. Acquire a B-Scan image corresponding to the target feature.
In the embodiment of the present invention, the target feature may include a retinal feature of a human eye (hereinafter abbreviated as a retinal feature), and the B-Scan image may be a B-Scan image including the retinal feature obtained by OCT. The B-Scan image may be acquired directly from a device equipped with OCT scanning, or retrieved from a system database in which B-Scan images including retinal features are stored; the embodiment of the present invention is not limited in this respect.
102. Perform image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result.
In the embodiment of the present invention, when the B-Scan image includes the above-mentioned retinal feature, the initial layering result obtained is the layering result corresponding to the retinal tissue layers. The preset image processing algorithm is an improvement on traditional B-Scan image layering processing (for example, a modified minimum-cost-path algorithm based on a gradient cost map).
In the OCT image processing method provided by the invention, since the retina contains a large number of tissue layers, the image layering performed on the retinal B-Scan image by the preset image processing algorithm does not delineate every tissue layer; only the ILM (inner limiting membrane) layer, the IS/OS (inner and outer segments of the photoreceptor cells) layer and the BM (Bruch's membrane) layer, whose boundaries are more distinct, are extracted from the retinal tissue layers. This improves the layering efficiency and the accuracy of the layering result for the retinal tissue layers of the B-Scan image while keeping the performance of the OCT image processing algorithm robust.
Thus, in the embodiment of the invention, reducing the number of retinal tissue layers to be layered in the B-Scan image reduces the amount of data to be computed when processing the image, thereby improving the layering efficiency of the B-Scan image and the accuracy of the layering result.
103. Input the B-Scan image into a pre-trained deep neural network model to obtain an output result.
In the embodiment of the invention, the output result comprises a probability corresponding to each pixel point in the target area of the B-Scan image; the probability corresponding to each pixel point represents the likelihood that the pixel point belongs to the interlayer boundary between two adjacent layers included in the initial layering result, and the target area is the area that includes the target feature.
Further, when the target feature is the above-mentioned retinal feature, the target area is the area that includes the retinal tissue layers. Assuming the initial layering result includes three tissue layers which, in the vertical direction of the image from top to bottom, are the ILM layer, the IS/OS layer and the BM layer, the pairs of adjacent layers are ILM and IS/OS, and IS/OS and BM. Here, "two adjacent layers" means two tissue layers that are adjacent in position among the layers obtained by the algorithm or by manual division, not two layers that are adjacent among all the tissue layers of the actual retina.
104. Determine the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
In the embodiment of the present invention, when the target feature is the above-mentioned retinal feature, the probability corresponding to each pixel point is the probability that the pixel point belongs to the interlayer boundary between two adjacent retinal layers. Determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points may specifically include the following steps:
for each column of pixel points among all the pixel points, normalizing the probability distribution of the column to obtain the normalized probability distribution of the column;
for each column of pixel points among all the pixel points, performing a dot product between the normalized probability distribution of the column and the row-number distribution corresponding to the column, to obtain the interlayer distribution result corresponding to the column;
and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the interlayer distribution results corresponding to each column of pixel points.
The function used to normalize the probability distribution of a column of pixel points may be a Softmax function.
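As a concrete illustration of this step, the following sketch assumes the network output for one interlayer boundary is an (H, W) probability map; it applies the column-wise Softmax normalization and the dot product with the row-number distribution described above, yielding one (sub-pixel) boundary row per column.

```python
# Column-wise Softmax followed by a dot product with the row numbers gives a
# sub-pixel boundary row per column. `prob_map` is assumed to be the (H, W)
# network output for one interlayer boundary.
import numpy as np

def boundary_rows(prob_map: np.ndarray) -> np.ndarray:
    shifted = prob_map - prob_map.max(axis=0, keepdims=True)  # numerical stability
    p = np.exp(shifted)
    p /= p.sum(axis=0, keepdims=True)             # normalized column distribution
    rows = np.arange(prob_map.shape[0])[:, None]  # row-number distribution
    return (p * rows).sum(axis=0)                 # dot product: expected row per column
```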
Therefore, by implementing the OCT image processing method described in fig. 1, image layering can be performed in a targeted manner on the acquired B-Scan image that includes the target feature; reducing the number of retinal tissue layers to be layered in the B-Scan image reduces the amount of data to be computed and improves the layering efficiency of the B-Scan image; and the interlayer boundary information corresponding to the target feature in the B-Scan image is determined in combination with the pre-trained deep neural network model, which further improves the accuracy of the layering result while improving the layering efficiency.
In an alternative embodiment, the image layering process is performed on the B-Scan image by using a preset image processing algorithm to obtain an initial layering result, which specifically may include the following steps:
performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
calculating positive gradients of the filtered image in the vertical direction of the image, and constructing to obtain a first cost function according to the positive gradients;
determining a first minimum cost path of the filtered image from the left edge to the right edge according to a predetermined path algorithm and a first cost function to obtain a first layering line;
determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain a second layering line;
Calculating the negative gradient of the filtered image in the vertical direction of the image, and constructing a second cost function according to the negative gradient;
Determining a search area, wherein the search area is the area below whichever of the first layering line and the second layering line lies lower in the image;
Determining a third minimum cost path of the search area from its left edge to its right edge according to the path algorithm and the second cost function, and performing a smoothing filter operation on the third minimum cost path to obtain a third layering line;
determining the first layering line, the second layering line and the third layering line as initial layering results;
Further, before determining a second minimum cost path of the filtered image from the left edge to the right edge according to the path algorithm and the first cost function, the method further comprises the following steps:
marking the first minimum cost path as an unreachable path in the filtered image.
In this alternative embodiment, the predetermined path algorithm may be a minimum-cost-path algorithm modified from the Dijkstra or Bellman-Ford algorithm, which the embodiment of the present invention does not limit. The function expression of the first cost function may be Cost1 = a·exp(-G) or Cost1 = a·(-G), and that of the second cost function may be Cost2 = a·exp(-G) or Cost2 = a·(-G), where G is the gradient value.
In this alternative embodiment, after the first layering line is obtained, the first path is marked as an unreachable path in the filtered image; when the minimum cost path is searched again by the predetermined path algorithm, the marked path cannot be reused, so a second layering line different from the first layering line is obtained, yielding two distinct layering lines.
In this alternative embodiment, the preset filtering function used to filter the B-Scan image may be a median filter function or a mean filter function; when the smoothing filter operation is performed on the third minimum cost path, median-filter and mean-filter smoothing may in practice be applied to the coordinate points included in the third minimum cost path.
Therefore, this alternative embodiment provides a minimum-cost-path algorithm that extracts the required first, second and third layering lines from the B-Scan image; by reducing the number of retinal tissue layers to be layered in the B-Scan image, it reduces the amount of data to be computed when processing the image, thereby improving the layering efficiency of the algorithm and the accuracy of the layering result.
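The following sketch illustrates the idea under simplifying assumptions: instead of a full Dijkstra/Bellman-Ford search it uses a dynamic program in which each path step advances one column to the right and moves at most one row, with Cost = a·exp(-G) computed on the vertical gradient. The marking of the first path as unreachable and the median-filter smoothing of the third line are shown in the usage lines; none of this is the patent's exact implementation.

```python
# Simplified minimum-cost layering-line search (assumed DP variant, not the
# patent's Dijkstra/Bellman-Ford implementation). Cost = a * exp(-G), so a
# strong gradient of the chosen sign makes a pixel cheap to traverse.
import numpy as np
from scipy.ndimage import median_filter

def min_cost_layer_line(image: np.ndarray, sign: float = 1.0, a: float = 1.0,
                        blocked: np.ndarray | None = None) -> np.ndarray:
    g = sign * np.gradient(image.astype(float), axis=0)  # vertical gradient (+/-)
    cost = a * np.exp(-g)
    if blocked is not None:
        cost = np.where(blocked, np.inf, cost)           # marked unreachable pixels
    h, w = cost.shape
    acc = np.full((h, w), np.inf)
    acc[:, 0] = cost[:, 0]
    back = np.zeros((h, w), dtype=int)
    for c in range(1, w):                                # left edge -> right edge
        for r in range(h):
            lo, hi = max(0, r - 1), min(h, r + 2)        # move up/straight/down
            prev = acc[lo:hi, c - 1]
            k = int(prev.argmin())
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    line = np.empty(w, dtype=int)                        # backtrack cheapest path
    line[-1] = int(acc[:, -1].argmin())
    for c in range(w - 1, 0, -1):
        line[c - 1] = back[line[c], c]
    return line

# Usage: first line on the positive gradient; mark it unreachable; search
# again for the second line; the third line uses the negative gradient and
# then receives a median-filter smoothing pass, per the embodiment above.
img = np.random.default_rng(0).random((64, 128))
line1 = min_cost_layer_line(img, sign=+1.0)
blocked = np.zeros(img.shape, dtype=bool)
blocked[line1, np.arange(img.shape[1])] = True
line2 = min_cost_layer_line(img, sign=+1.0, blocked=blocked)
line3 = median_filter(min_cost_layer_line(img, sign=-1.0), size=7)
```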
In another alternative embodiment, after inputting the B-Scan image into the pre-trained deep neural network model to obtain an output result and before determining interlayer boundary information corresponding to the target feature in the B-Scan image according to probabilities corresponding to all pixel points, the method may further include the following steps:
Judging whether target pixel points falling outside a target area exist in all the pixel points;
when it is judged that no target pixel point falling outside the target area exists among all the pixel points, triggering execution of the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
when it is judged that target pixel points falling outside the target area exist among all the pixel points, performing a probability update operation on all the target pixel points falling outside the target area so as to update their probabilities, and then triggering execution of the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
In this alternative embodiment, determining whether there is a target pixel point that falls outside the target area in all the pixel points includes:
judging, for each pixel point among all the pixel points, whether its coordinate value falls outside the coordinate interval of the target area.
Therefore, the optional embodiment can execute probability updating operation for all target pixel points falling outside the target area, which is beneficial to improving the accuracy of the determined interlayer boundary information when determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points.
In this optional embodiment, the determining whether the target pixel point falling outside the target area exists in all the pixel points may further include:
judging whether the probability corresponding to each pixel point among all the pixel points lies within a preset probability interval, where the preset interval is the predetermined range of probabilities for pixel points falling inside the target area (for example, the open interval (0.5, 1), excluding both endpoint values).
It can be seen that this alternative embodiment provides another way of judging whether a pixel point falls inside the target area: instead of analysing the coordinate values of each pixel point (coordinate data on the x-axis, the y-axis and possibly the z-axis), only the probability value of the pixel point is analysed. This reduces the amount of data to be processed, broadens the means of judging whether a pixel point falls inside the target area, and speeds up obtaining the judgment result.
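A one-function sketch of this variant, assuming the open interval (0.5, 1) mentioned above:

```python
# A pixel is treated as falling inside the target area iff its probability
# lies strictly inside the assumed open interval (0.5, 1); no coordinate
# comparison is needed.
import numpy as np

def outside_target_area(prob_map: np.ndarray,
                        lo: float = 0.5, hi: float = 1.0) -> np.ndarray:
    return ~((prob_map > lo) & (prob_map < hi))  # True where a pixel falls outside
```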
Example 2
Referring to fig. 2, fig. 2 is a flowchart of another OCT image processing method according to an embodiment of the present invention. The method described in fig. 2 may be applied to the layering of retinal B-Scan images; the layering result it produces may be used in the compilation of medical teaching materials or as auxiliary data for retinal research, which the embodiment of the present invention does not limit. As shown in fig. 2, the OCT image processing method may include the following operations:
201. Acquire a B-Scan image corresponding to the target feature.
202. Perform image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result.
203. Input the B-Scan image into a pre-trained deep neural network model to obtain an output result.
204. Determine the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
In the embodiment of the present invention, for further details of steps 201 to 204, refer to the detailed descriptions of steps 101 to 104 in the first embodiment; they are not repeated here.
205. Determine a target region including the target feature in the B-Scan image.
In the embodiment of the present invention, before the B-Scan image is input into the pre-trained deep neural network model to obtain the output result, determining the target region including the target feature in the B-Scan image may specifically include the following steps (a sketch follows the list):
shifting upwards, by a first preset distance in the vertical direction of the image, whichever of the first layering line and the second layering line lies higher, to obtain a first boundary line;
shifting the third layering line downwards by a second preset distance in the vertical direction of the image to obtain a second boundary line;
and determining the region below the first boundary line and above the second boundary line as the target region including the target feature in the B-Scan image.
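A sketch of this construction, where the layering lines are per-column row indices and the preset distances d1/d2 are assumed pixel offsets (the patent fixes no particular values):

```python
# Build the target-region mask: shift the upper of the first two layering
# lines up by d1 (first boundary line), shift the third line down by d2
# (second boundary line), and keep the band in between.
import numpy as np

def target_region_mask(line1: np.ndarray, line2: np.ndarray, line3: np.ndarray,
                       height: int, d1: int = 10, d2: int = 10) -> np.ndarray:
    top = np.minimum(line1, line2) - d1    # smaller row index = higher in the image
    bottom = line3 + d2
    rows = np.arange(height)[:, None]      # (H, 1) row indices
    return (rows >= top[None, :]) & (rows <= bottom[None, :])  # (H, W) mask
```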
Therefore, by implementing the OCT image processing method described in fig. 2, image layering can be performed on the acquired B-Scan image that includes the target feature, and reducing the number of retinal tissue layers to be layered reduces the amount of data to be computed, improving the layering efficiency of the B-Scan image. The interlayer boundary information corresponding to the target feature can be determined in combination with the pre-trained deep neural network model, further improving the accuracy of the layering result while improving the layering efficiency. In addition, the target region including the target feature can be determined intelligently, so that when image processing operations are executed on the target region, the amount of data the image processing algorithm must handle is reduced, improving the layering efficiency to a certain extent; at the same time, once the target region including the target feature is made explicit, the interference of redundant regions that do not include the target feature with the image layering is reduced, improving the accuracy of the image layering result.
In an alternative embodiment, determining whether there is a target pixel point that falls outside the target area in all the pixel points includes:
Judging whether target pixel points falling outside a target area exist in each column of pixel points in all pixel points;
And performing a probability update operation on all target pixels falling outside the target area to update probabilities corresponding to all target pixels falling outside the target area, including:
for each column of pixel points among all the pixel points, if a target pixel point falling outside the target area exists in the column, multiplying the probability of each such target pixel point by the preset value corresponding to that pixel point to obtain a product result, and updating the probability corresponding to the target pixel point according to the product result.
In this alternative embodiment, the preset value may be a fixed coefficient epsilon with a very small value (e.g. 0.0001), or an attenuation value that varies with position: the further the pixel point lies from the first boundary line or the second boundary line, the smaller the value of the coefficient epsilon. Alternatively, the line through the centre of the target region that is equidistant from the first boundary line and the second boundary line may serve as a reference line, with the coefficient decreasing the further the pixel point lies from that reference line.
It can be seen that this alternative embodiment processes the pixel points column by column rather than one by one, which improves the processing efficiency for the pixel points and thus, to a certain extent, the layering efficiency of the OCT image.
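The column-wise update can be sketched as follows. The fixed-coefficient case multiplies outside probabilities by a small epsilon; the position-varying case is an assumption-laden illustration in which epsilon decays with the row distance to the nearest in-region pixel of the same column (the exact decay form is not fixed by the patent).

```python
# Column-wise probability update for pixels outside the target region.
import numpy as np

def update_probabilities(prob_map: np.ndarray, region: np.ndarray,
                         eps: float = 1e-4, decay: bool = False) -> np.ndarray:
    out = prob_map.copy()
    outside = ~region
    if not decay:
        out[outside] *= eps                       # fixed small coefficient
        return out
    h, w = prob_map.shape
    for c in range(w):                            # process column by column
        inside_rows = np.flatnonzero(region[:, c])
        if inside_rows.size == 0:
            out[:, c] *= eps                      # whole column lies outside
            continue
        dist = np.abs(np.arange(h)[:, None] - inside_rows[None, :]).min(axis=1)
        col_outside = outside[:, c]
        out[col_outside, c] *= eps / (1.0 + dist[col_outside])  # decaying epsilon
    return out
```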
In another alternative embodiment, the deep neural network model is trained by:
Acquiring a B-Scan image set comprising labeling information, wherein the labeling information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to a target feature and demarcation information corresponding to the target feature;
Dividing the B-Scan image set to obtain a training set and a testing set, wherein the training set is used for training the deep neural network model, and the testing set is used for verifying the reliability of the trained deep neural network model;
Performing target processing operations on all B-Scan images included in the training set to obtain processing results, wherein the target processing operations comprise at least one of up-and-down translation, left-right flipping, up-down flipping and contrast adjustment (see the sketch after this list);
inputting the processing result as input data into a predetermined deep neural network model to obtain an output result;
calculating a joint loss from the output result, the B-Scan images included in the training set and the boundary information, to obtain a joint loss value;
back-propagating the joint loss value through the deep neural network model and performing iterative training for a preset number of epochs, to obtain the trained deep neural network model.
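As an illustration of the target processing operations listed above, the following sketch applies each augmentation with some probability. Note that in a real pipeline the boundary labels must be transformed consistently with the image; for brevity this sketch shows the image side only, which is an intentional simplification.

```python
# Illustrative data augmentation for B-Scan training images; probabilities
# and the shift/contrast ranges are assumptions, not values from the patent.
import numpy as np

def augment(b_scan: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    img = b_scan.copy()
    if rng.random() < 0.5:
        img = np.roll(img, int(rng.integers(-20, 21)), axis=0)  # up-and-down shift
    if rng.random() < 0.5:
        img = img[:, ::-1]                                      # left-right flip
    if rng.random() < 0.5:
        img = img[::-1, :]                                      # up-down flip
    if rng.random() < 0.5:
        img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)    # contrast adjustment
    return img.copy()                                           # contiguous copy
```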
In this alternative embodiment, analysing and calculating the joint loss to obtain the joint loss value includes:
calculating the label loss L_label_dice from first data (denoted M0) included in the output result and the annotated pixel-level labels, where the pixel-level labels are labels encoded in a preset way (one-hot);
calculating the cross-entropy loss L_label_ce from the first data (M0) and the pixel-level labels;
calculating the boundary cross-entropy loss L_bd_ce from second data (denoted B0) included in the output result and the boundary information;
calculating the smoothing loss L_bd_l1 from third data (denoted B2) included in the output result and the boundary information;
multiplying the label loss L_label_dice, the cross-entropy loss L_label_ce, the boundary cross-entropy loss L_bd_ce and the smoothing loss L_bd_l1 each by its coefficient, and summing all the products; the resulting value is the joint loss value.
In practical application, the joint loss is calculated as:
L = λ_label_dice · L_label_dice + λ_label_ce · L_label_ce + λ_bd_ce · L_bd_ce + λ_bd_l1 · L_bd_l1
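A sketch of this joint loss in PyTorch follows. The pairing of the heads M0/B0/B2 with their targets and the tensor shapes follow the description above but are otherwise assumptions; a standard soft Dice is used for L_label_dice, and the λ weights default to 1.

```python
# Joint loss sketch: Dice + pixel CE + boundary CE + smooth-L1, weighted sum.
import torch
import torch.nn.functional as F

def joint_loss(m0_logits, labels_onehot, b0_logits, boundary_mask,
               b2_rows, gt_rows, w=(1.0, 1.0, 1.0, 1.0)):
    # L_label_dice: soft Dice between softmaxed M0 and the one-hot labels.
    probs = torch.softmax(m0_logits, dim=1)                    # (N, C, H, W)
    inter = (probs * labels_onehot).sum(dim=(2, 3))
    denom = (probs + labels_onehot).sum(dim=(2, 3)).clamp(min=1e-6)
    l_dice = 1.0 - (2.0 * inter / denom).mean()
    # L_label_ce: pixel-level cross entropy against the class indices.
    l_ce = F.cross_entropy(m0_logits, labels_onehot.argmax(dim=1))
    # L_bd_ce: boundary cross entropy between B0 and the boundary mask.
    l_bd_ce = F.binary_cross_entropy_with_logits(b0_logits, boundary_mask)
    # L_bd_l1: smooth-L1 between predicted (B2) and ground-truth boundary rows.
    l_bd_l1 = F.smooth_l1_loss(b2_rows, gt_rows)
    return w[0] * l_dice + w[1] * l_ce + w[2] * l_bd_ce + w[3] * l_bd_l1
```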
It can be seen that, with the trained deep neural network model, the related processing operations (including normalization and the dot product) are performed on the probability distribution of each column of pixel points to determine the interlayer boundary information corresponding to the target feature in the B-Scan image, thereby improving the layering efficiency of the OCT image and the accuracy of the layering result.
Example 3
Referring to fig. 3, fig. 3 is a schematic structural diagram of an OCT image processing apparatus according to an embodiment of the present invention. The OCT image processing apparatus may be an OCT image processing terminal, device, system or server, where the server may be a local server, a remote server or a cloud server (also called a cloud-end server); when the server is not a cloud server, it may be communicatively connected to a cloud server. As shown in fig. 3, the OCT image processing apparatus may include an acquisition module 301, a first processing module 302, a second processing module 303 and a first determination module 304, wherein:
The acquiring module 301 is configured to acquire a B-Scan image corresponding to the target feature.
The first processing module 302 is configured to perform image layering processing on the B-Scan image acquired by the acquiring module 301 through a preset image processing algorithm, so as to obtain an initial layering result.
The second processing module 303 is configured to input the B-Scan image acquired by the acquiring module 301 into a pre-trained deep neural network model to obtain an output result, wherein the output result includes a probability corresponding to each pixel point in a target area of the B-Scan image, the probability corresponding to each pixel point represents the likelihood that the pixel point belongs to the interlayer boundary between two adjacent layers included in the initial layering result, and the target area is an area that includes the target feature.
The first determining module 304 is configured to determine interlayer boundary information corresponding to the target feature in the B-Scan image acquired by the acquiring module 301 according to the probabilities corresponding to all the pixel points acquired by the second processing module 303.
Therefore, the processing device for implementing the OCT image described in fig. 3 can perform image layering processing on the acquired B-Scan image including the target feature in a targeted manner, and by reducing the layering number of the tissue layers of the retina in the B-Scan image, the amount of data required to be calculated when processing the B-Scan image is reduced, and the layering efficiency of the B-Scan image is improved; and the interlayer boundary information corresponding to the target features in the B-Scan image can be determined by combining the pre-trained deep neural network model, so that the layering efficiency is improved, and meanwhile, the accuracy of the layering result is further improved.
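As a purely illustrative sketch of this module structure (the class and method names below are assumptions, not the patent's API), the apparatus can be pictured as a thin composition of the three processing stages over an acquired B-Scan image:

    class OCTImageProcessor:
        # Illustrative composition of the modules of fig. 3; names assumed.

        def __init__(self, layering_algorithm, model, boundary_from_probs):
            self.layering_algorithm = layering_algorithm    # first processing module 302
            self.model = model                              # second processing module 303
            self.boundary_from_probs = boundary_from_probs  # first determining module 304

        def process(self, b_scan):
            # b_scan is supplied by the acquisition module 301
            initial_layering = self.layering_algorithm(b_scan)
            probabilities = self.model(b_scan)
            return self.boundary_from_probs(probabilities, initial_layering)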
In an alternative embodiment, as shown in fig. 4, the first processing module 302 may include a filtering submodule 3021, a function construction submodule 3022, a first determining submodule 3023, and a second determining submodule 3024, where:
The filtering submodule 3021 is configured to perform filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image.
The function construction submodule 3022 is configured to calculate a positive gradient of the filtered image obtained by the filtering submodule 3021 in the vertical direction of the image, and to construct a first cost function according to the positive gradient.
The first determining submodule 3023 is configured to determine a first minimum cost path from the left edge to the right edge of the filtered image obtained by the filtering submodule 3021 according to a predetermined path algorithm and the first cost function obtained by the function construction submodule 3022, so as to obtain a first layering line.
The first determining submodule 3023 is further configured to determine a second minimum cost path from the left edge to the right edge of the filtered image according to the predetermined path algorithm and the first cost function, so as to obtain a second layering line.
The function construction submodule 3022 is further configured to calculate a negative gradient of the filtered image in the vertical direction of the image, and to construct a second cost function according to the negative gradient.
The second determining submodule 3024 is configured to determine a search area, where the search area is the area below whichever of the first layering line and the second layering line obtained by the first determining submodule 3023 is positioned lower.
The first determining submodule 3023 is further configured to determine a third minimum cost path from the left edge to the right edge of the search area determined by the second determining submodule 3024 according to the predetermined path algorithm and the second cost function, and to perform a smoothing filtering operation on the third minimum cost path to obtain a third layering line.
The second determining submodule 3024 is further configured to determine the first layering line, the second layering line and the third layering line as the initial layering result.
Further, the first processing module 302 may further include a marking sub-module 3025, wherein:
The marking submodule 3025 is configured to mark the first minimum cost path as an unreachable path in the filtered image before the first determining submodule 3023 determines the second minimum cost path from the left edge to the right edge of the filtered image according to the predetermined path algorithm and the first cost function to obtain the second layering line.
Therefore, this alternative embodiment provides a minimum cost path algorithm that can extract the required first layering line, second layering line and third layering line from the B-Scan image, and, by reducing the number of retinal tissue layers to be layered in the B-Scan image, reduces the amount of data to be computed when processing the image, thereby improving both the layering efficiency of the image layering algorithm and the accuracy of the layering result.
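The minimum cost path search itself can be realized as a simple dynamic program over a gradient-based cost image. The sketch below is one plausible reading under stated assumptions: Gaussian filtering stands in for the preset filtering function, the cost is taken low where the chosen gradient is strong, and each path step moves one column right and at most one row up or down; none of these specifics are fixed by the patent.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def layering_line(image, positive_gradient=True, forbidden=None):
        # Filter the B-Scan image, then find the minimum cost left-to-right path.
        filtered = gaussian_filter(image.astype(float), sigma=2.0)

        grad = np.gradient(filtered, axis=0)      # vertical gradient of the image
        if not positive_gradient:
            grad = -grad                          # negative gradient for the second cost function
        cost = 1.0 - (grad - grad.min()) / (np.ptp(grad) + 1e-9)
        if forbidden is not None:
            cost[forbidden] = np.inf              # e.g. the first path marked unreachable

        rows, cols = cost.shape
        acc = np.full((rows, cols), np.inf)
        acc[:, 0] = cost[:, 0]
        back = np.zeros((rows, cols), dtype=int)

        # Dynamic programming, column by column
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(0, r - 1), min(rows, r + 2)
                prev = acc[lo:hi, c - 1]
                k = int(np.argmin(prev))
                acc[r, c] = cost[r, c] + prev[k]
                back[r, c] = lo + k

        # Trace the cheapest path back from the right edge
        path = np.zeros(cols, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for c in range(cols - 1, 0, -1):
            path[c - 1] = back[path[c], c]
        return path                               # row index of the layering line per column

A second call with the forbidden argument set to a mask covering the first path reproduces the marking submodule's behaviour: the first minimum cost path is excluded before the second layering line is searched.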
In another alternative embodiment, as shown in fig. 4, the OCT image processing device further includes a second determination module 305, where:
the second determining module 305 is configured to determine a target area including a target feature in the B-Scan image before the second processing module 303 inputs the acquired B-Scan image into the pre-trained deep neural network model to obtain an output result.
The manner in which the second determining module 305 determines the target area including the target feature in the B-Scan image specifically includes:
Shifting whichever of the first layering line and the second layering line is positioned higher upwards by a first preset distance in the vertical direction of the image to obtain a first boundary line;
Shifting the third layering line downwards by a second preset distance in the vertical direction of the image to obtain a second boundary line;
The region below the first boundary line and above the second boundary line is determined as a target region in the B-Scan image that includes the target feature.
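For illustration, the boundary lines and the resulting region can be computed as below, where each layering line is given as the row index per column; the preset distances d_up and d_down are placeholders, since the patent leaves their values open.

    import numpy as np

    def target_region_mask(shape, first_line, second_line, third_line,
                           d_up=20, d_down=20):
        # Boolean mask of the target area between the two boundary lines.
        rows, cols = shape
        # The higher of the first two layering lines, shifted up by d_up
        upper = np.minimum(first_line, second_line) - d_up
        # The third layering line, shifted down by d_down
        lower = third_line + d_down
        r = np.arange(rows)[:, None]
        return ((r >= np.clip(upper, 0, rows - 1)) &
                (r <= np.clip(lower, 0, rows - 1)))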
Therefore, this optional embodiment can intelligently determine the target area including the target feature, which helps to reduce the amount of data the image processing algorithm must handle when executing image processing operations on the target area, improving the layering efficiency of the image to a certain extent; meanwhile, once the target area including the target feature is explicitly determined, the interference of redundant areas that do not include the target feature on the image layering processing is reduced, improving the accuracy of the image layering result.
In yet another alternative embodiment, as shown in fig. 4, the OCT image processing device further includes a determining module 306 and a third processing module 307, where:
The judging module 306 is configured to judge, after the second processing module 303 inputs the B-Scan image into the pre-trained deep neural network model to obtain the output result and before the first determining module 304 determines the interlayer boundary information corresponding to the target feature in the B-Scan image, whether target pixel points falling outside the target area exist among all the pixel points according to the probabilities corresponding to all the pixel points, and to trigger the first determining module 304 to execute the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points when it is judged that no target pixel points falling outside the target area exist.
The third processing module 307 is configured to, when the judging module 306 judges that target pixel points falling outside the target area exist among all the pixel points, perform a probability updating operation on all the target pixel points falling outside the target area so as to update the probabilities corresponding to those target pixel points, and then trigger the first determining module 304 to execute the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
Therefore, the optional embodiment can execute probability updating operation for all target pixel points falling outside the target area, which is beneficial to improving the accuracy of the determined interlayer boundary information when determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points.
In this alternative embodiment, the manner in which the judging module 306 judges whether target pixel points falling outside the target area exist among all the pixel points specifically includes:
For each column of pixel points in all pixel points, judging whether target pixel points falling outside a target area exist in the column of pixel points.
The manner in which the third processing module 307 performs the probability updating operation on all the target pixel points falling outside the target area, so as to update the probabilities corresponding to those target pixel points, specifically includes:
For each column of pixel points in all the pixel points, if target pixel points falling outside the target area exist in the column, multiplying the probability corresponding to each such target pixel point by the preset value corresponding to that pixel point to obtain a product result, and updating the probability corresponding to the target pixel point according to the product result.
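A vectorized form of this column-wise update might look as follows; the scalar damping factor stands in for the per-pixel preset value, whose magnitude the patent does not fix.

    import numpy as np

    def update_out_of_region(probs, region_mask, preset=0.1):
        # Damp the probabilities of target pixel points outside the target area.
        updated = probs.copy()
        outside = ~region_mask        # True where a pixel falls outside the area
        updated[outside] *= preset    # the product result replaces the old probability
        return updated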
This optional embodiment thus processes the pixel points in column units rather than one by one, which improves the processing efficiency for the pixel points and thereby improves the layering efficiency of the image to a certain extent; in addition, the probability updating operation reduces the chance that the probabilities corresponding to all the pixel points include abnormal probabilities when the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image is subsequently executed.
In yet another alternative embodiment, the manner in which the first determining module 304 determines the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points specifically includes:
for each column of pixel points in all pixel points, carrying out normalization processing on the probability distribution of the column of pixel points to obtain normalized probability distribution of the column of pixel points;
for each column of pixel points in all pixel points, carrying out a dot product operation on the normalized probability distribution of the column of pixel points and the row-number distribution corresponding to the column of pixel points to obtain an interlayer distribution result corresponding to the column of pixel points;
And determining interlayer boundary information corresponding to the target feature in the B-Scan image according to interlayer distribution results corresponding to each column of pixel points in all the pixel points.
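In code, this column-wise normalization followed by a dot product with the row-number distribution amounts to a soft argmax over each column; a minimal numpy sketch:

    import numpy as np

    def interlayer_boundary(probs):
        # probs: (rows, cols) array of per-pixel boundary probabilities.
        normalized = probs / np.maximum(probs.sum(axis=0, keepdims=True), 1e-9)
        row_numbers = np.arange(probs.shape[0])[:, None]  # row-number distribution
        return (normalized * row_numbers).sum(axis=0)     # boundary row per column

Because the result is an expectation over row numbers, the boundary estimate is not restricted to whole pixel rows.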
The deep neural network model is trained in the following manner:
Acquiring a B-Scan image set comprising annotation information, wherein the annotation information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to the target feature and boundary information corresponding to the target feature;
Dividing the B-Scan image set to obtain a training set and a test set, wherein the training set is used for training the deep neural network model and the test set is used for verifying the reliability of the trained deep neural network model;
Performing target processing operations on all the B-Scan images included in the training set to obtain processing results, wherein the target processing operations comprise at least one of up-down shift processing, left-right flipping processing, up-down flipping processing and contrast adjustment processing (see the sketch after this list);
Inputting the processing results as input data into a predetermined deep neural network model to obtain an output result;
Calculating the joint loss according to the output result, the B-Scan images included in the training set and the boundary information, to obtain a joint loss value;
Back-propagating the joint loss value in the deep neural network model and performing iterative training for a preset number of epochs to obtain the trained deep neural network model;
Verifying the reliability of the trained deep neural network model with the test set.
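Put together, the training procedure might be sketched as follows; the augmentation ranges, the epoch count, and the assumption that the network returns the triple (M0, B0, B2) are illustrative, and joint_loss refers to the sketch given with the joint-loss formula above.

    import random
    import torch

    def augment(b_scan):
        # Target processing operations: shift, flips, contrast adjustment.
        if random.random() < 0.5:
            b_scan = torch.roll(b_scan, random.randint(-10, 10), dims=-2)  # up-down shift
        if random.random() < 0.5:
            b_scan = torch.flip(b_scan, dims=[-1])                         # left-right flip
        if random.random() < 0.5:
            b_scan = torch.flip(b_scan, dims=[-2])                         # up-down flip
        if random.random() < 0.5:
            b_scan = b_scan * random.uniform(0.8, 1.2)                     # contrast adjust
        return b_scan

    def train(model, train_loader, optimizer, epochs=100):
        # Iterative training: back-propagate the joint loss each step.
        for _ in range(epochs):
            for b_scan, onehot_labels, boundary in train_loader:
                m0, b0, b2 = model(augment(b_scan))
                loss = joint_loss(m0, b0, b2, onehot_labels, boundary)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()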
Therefore, through the deep neural network model obtained by training, and by performing the related processing operations (including normalization processing and dot product operation) on the probability distribution corresponding to each column of pixel points in column units, the interlayer boundary information corresponding to the target feature in the B-Scan image is determined, achieving the purposes of improving the layering efficiency of the OCT image and improving the accuracy of the layering result.
Example IV
Referring to fig. 5, fig. 5 is a schematic structural diagram of another OCT image processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the OCT image processing apparatus includes:
A memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
further, an input interface 403 and an output interface 404 coupled to the processor 402 may be included;
wherein the processor 402 invokes the executable program code stored in the memory 401 to execute the steps of the OCT image processing method described in the first or second embodiment of the present invention.
Example V
An embodiment of the present invention discloses a computer program product, which comprises a non-transitory computer storage medium storing a computer program, and the computer program is operable to cause a computer to execute the steps of the OCT image processing method described in the first or second embodiment.
The apparatus embodiments described above are merely illustrative, where the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied, essentially or in the part contributing to the prior art, in the form of a software product, which may be stored in a computer storage medium including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disc memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the OCT image processing method and apparatus disclosed above are only preferred embodiments of the present invention and are intended only to illustrate the technical solutions of the invention, not to limit them; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the various embodiments can still be modified, or some of their technical features can be replaced equivalently, and such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (9)

1. A method of processing an OCT image, the method comprising:
Acquiring a B-Scan image corresponding to the target feature;
Performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
Inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises a probability corresponding to each pixel point of a target area of the B-Scan image, the probability corresponding to each pixel point is used for representing the possibility that the pixel point belongs to an interlayer boundary between two adjacent layers included in the initial layering result, and the target area is an area comprising the target feature;
Determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
The image layering processing is executed on the B-Scan image through a preset image processing algorithm to obtain an initial layering result, which comprises the following steps:
performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
calculating a positive gradient of the filtered image in the vertical direction of the image, and constructing a first cost function according to the positive gradient;
determining a first minimum cost path from the left edge to the right edge of the filtered image according to a predetermined path algorithm and the first cost function, and obtaining a first layering line;
Determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain a second layering line;
calculating the negative gradient of the filtered image in the vertical direction of the image, and constructing a second cost function according to the negative gradient;
determining a search area, wherein the search area is the area below whichever of the first layering line and the second layering line is positioned lower;
determining a third minimum cost path from the left edge of the area to the right edge of the area of the search area according to the path algorithm and the second cost function, and executing smooth filtering operation on the third minimum cost path to obtain a third layering line;
determining the first layering line, the second layering line and the third layering line as initial layering results;
wherein before determining the second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain the second layering line, the method further includes:
Marking the first least costly path as an unreachable path in the filtered image.
2. The method of claim 1, wherein before inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, the method further comprises:
determining a target area comprising the target feature in the B-Scan image;
Wherein the determining the target area including the target feature in the B-Scan image includes:
shifting whichever of the first layering line and the second layering line is positioned higher upwards by a first preset distance in the vertical direction of the image to obtain a first boundary line;
Shifting the third layering line downwards by a second preset distance in the vertical direction of the image to obtain a second boundary line;
and determining the area below the first boundary line and above the second boundary line as a target area comprising the target feature in the B-Scan image.
3. The method according to claim 2, wherein after the B-Scan image is input into a pre-trained deep neural network model to obtain an output result, before determining interlayer boundary information corresponding to the target feature in the B-Scan image according to probabilities corresponding to all the pixels, the method further comprises:
judging whether target pixel points falling outside the target area exist in all the pixel points or not;
When judging that the target pixel points falling outside the target area do not exist in all the pixel points, triggering and executing the operation of determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
When judging that the target pixel points falling outside the target area exist in all the pixel points, executing probability updating operation on all the target pixel points falling outside the target area so as to update the probabilities corresponding to all the target pixel points, and triggering and executing the operation of determining interlayer boundary information corresponding to the target features in the B-Scan image according to the probabilities corresponding to all the pixel points.
4. The OCT image processing method of claim 3, wherein determining whether there is a target pixel that falls outside the target region among all the pixels comprises:
For each column of pixel points in all the pixel points, judging whether target pixel points falling outside the target area exist in the column of pixel points;
And performing a probability update operation on all the target pixel points falling outside the target area to update the probabilities corresponding to all the target pixel points falling outside the target area, including:
And for each column of pixel points in all the pixel points, if a target pixel point falling outside the target area exists in the column of pixel points, multiplying each target pixel point falling outside the target area in the column of pixel points by a preset value corresponding to the target pixel point to obtain a product result, and updating the probability corresponding to the target pixel point according to the product result.
5. The method according to claim 3 or 4, wherein determining interlayer boundary information corresponding to the target feature in the B-Scan image according to probabilities corresponding to all the pixel points comprises:
for each column of pixel points in all the pixel points, carrying out normalization processing on the probability distribution of the column of pixel points to obtain normalized probability distribution of the column of pixel points;
For each column of pixel points in all the pixel points, carrying out a dot product operation on the normalized probability distribution of the column of pixel points and the row-number distribution corresponding to the column of pixel points to obtain an interlayer distribution result corresponding to the column of pixel points;
and determining interlayer boundary information corresponding to the target feature in the B-Scan image according to interlayer distribution results corresponding to each column of pixel points in all the pixel points.
6. The method of processing OCT images according to any one of claims 1 to 4, wherein the deep neural network model is trained by:
Acquiring a B-Scan image set comprising annotation information, wherein the annotation information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to the target feature and boundary information corresponding to the target feature;
Dividing the B-Scan image set to obtain a training set and a testing set, wherein the training set is used for training a deep neural network model, and the testing set is used for verifying the reliability of the trained deep neural network model;
Performing target processing operations on all B-Scan images included in the training set to obtain processing results, wherein the target processing operations comprise at least one of up-down movement processing, left-right overturning processing, up-down overturning processing and contrast adjustment processing;
Inputting the processing result as input data into a predetermined deep neural network model to obtain an output result;
According to the output result, the B-Scan images included in the training set and the boundary information, analyzing and calculating the joint loss to obtain a joint loss value;
Back-propagating the joint loss value in the deep neural network model and performing iterative training for a preset number of epochs to obtain the trained deep neural network model;
The test set is used for verifying the reliability of the trained deep neural network model.
7. The method of processing OCT images of any one of claims 1-4, wherein the target feature is a retinal feature.
8. An OCT image processing apparatus, the apparatus comprising:
The acquisition module is used for acquiring the B-Scan image corresponding to the target feature;
The first processing module is used for executing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
The second processing module is used for inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises a probability corresponding to each pixel point of a target area of the B-Scan image, the probability corresponding to each pixel point is used for representing the possibility that the pixel point belongs to an interlayer boundary between two adjacent layers included in the initial layering result, and the target area is an area comprising the target feature;
the first determining module is used for determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
The first processing module includes:
The filtering sub-module is used for executing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
The function construction submodule is used for calculating positive gradients of the filtered image in the vertical direction of the image and constructing a first cost function according to the positive gradients;
The first determining submodule is used for determining a first minimum cost path from the left edge to the right edge of the filtered image according to a predetermined path algorithm and the first cost function to obtain a first layering line;
The first determining submodule is further used for determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain a second layering line;
The function construction submodule is further used for calculating the negative gradient of the filtered image in the vertical direction of the image and constructing a second cost function according to the negative gradient;
The second determining submodule is used for determining a search area, wherein the search area is the area below whichever of the first layering line and the second layering line is positioned lower;
the first determining submodule is further used for determining a third minimum cost path from the left edge of the area to the right edge of the area of the search area according to the path algorithm and the second cost function, and executing smooth filtering operation on the third minimum cost path to obtain a third layering line;
the second determining submodule is further used for determining the first layering line, the second layering line and the third layering line as initial layering results;
And, the first processing module further comprises:
and the marking sub-module is used for marking the first minimum cost path as an unreachable path in the filtered image before the first determination sub-module determines the second minimum cost path from the left edge to the right edge of the filtered image to obtain a second layering line according to the path algorithm and the first cost function.
9. An OCT image processing apparatus, the apparatus comprising:
a memory storing executable program code;
a processor coupled to the memory;
an input interface coupled to the processor and an output interface;
The processor invokes the executable program code stored in the memory to perform the OCT image processing method of any one of claims 1-7.
CN202111435331.4A 2021-11-29 2021-11-29 OCT image processing method and device Active CN114092464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111435331.4A CN114092464B (en) 2021-11-29 2021-11-29 OCT image processing method and device

Publications (2)

Publication Number Publication Date
CN114092464A CN114092464A (en) 2022-02-25
CN114092464B true CN114092464B (en) 2024-06-07

Family

ID=80305758

Country Status (1)

Country Link
CN (1) CN114092464B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105374028A (en) * 2015-10-12 2016-03-02 中国科学院上海光学精密机械研究所 Optical coherence tomography retina image layering method
CN110390650A (en) * 2019-07-23 2019-10-29 中南大学 OCT image denoising method based on intensive connection and generation confrontation network
CN111462160A (en) * 2019-01-18 2020-07-28 北京京东尚科信息技术有限公司 Image processing method, device and storage medium
CN112330638A (en) * 2020-11-09 2021-02-05 苏州大学 Horizontal registration and image enhancement method for retina OCT (optical coherence tomography) image
CN112700390A (en) * 2021-01-14 2021-04-23 汕头大学 Cataract OCT image repairing method and system based on machine learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017070109A1 (en) * 2015-10-19 2017-04-27 The Charles Stark Draper Laboratory Inc. System and method for the segmentation of optical coherence tomography slices

Similar Documents

Publication Publication Date Title
US11989877B2 (en) Method and system for analysing images of a retina
CN109886933B (en) Medical image recognition method and device and storage medium
CN110097968B (en) Baby brain age prediction method and system based on resting state functional magnetic resonance image
JP6842481B2 (en) 3D quantitative analysis of the retinal layer using deep learning
CN111612756B (en) Coronary artery specificity calcification detection method and device
CN111862020B (en) Method and device for predicting physiological age of anterior ocular segment, server and storage medium
CN109935321A (en) Patients with depression based on function nmr image data switchs to the risk forecast model of bipolar disorder
CN111179372A (en) Image attenuation correction method, device, computer equipment and storage medium
CN113298800B (en) CT angiography CTA source image processing method, device and equipment
CN113920109A (en) Medical image recognition model training method, recognition method, device and equipment
JP2019503214A (en) Fast automatic segmentation of hierarchical images by heuristic graph search
CN117392746A (en) Rehabilitation training evaluation assisting method, device, computer equipment and storage medium
CN110175983A (en) Eyeground lesion screening method, device, computer equipment and storage medium
CN113658165A (en) Cup-to-tray ratio determining method, device, equipment and storage medium
CN116312986A (en) Three-dimensional medical image labeling method and device, electronic equipment and readable storage medium
CN117409002A (en) Visual identification detection system for wounds and detection method thereof
CN114092464B (en) OCT image processing method and device
CN114359219A (en) OCT image layering and focus semantic segmentation method, device and storage medium
WO2022096867A1 (en) Image processing of intravascular ultrasound images
CN116823851B (en) Feature reconstruction-based unsupervised domain self-adaptive OCT image segmentation method and system
CN112750110A (en) Evaluation system for evaluating lung lesion based on neural network and related products
CN116934686A (en) OCT (optical coherence tomography) image detection method and device based on multi-direction image fusion
JP3647970B2 (en) Region extraction device
CN109978861B (en) Polio detection method, apparatus, device and computer readable storage medium
CN118298004B (en) Heart function assessment method and system based on three-dimensional echocardiography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant