CN113344893A - High-precision fundus arteriovenous identification method, device, medium and equipment - Google Patents

High-precision fundus arteriovenous identification method, device, medium and equipment

Info

Publication number
CN113344893A
Authority
CN
China
Prior art keywords
fundus
image
vein
artery
vein image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110695188.6A
Other languages
Chinese (zh)
Inventor
凌赛广
董洲
柯鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yiwei Science And Technology Beijing Co ltd
Original Assignee
Yiwei Science And Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yiwei Science And Technology Beijing Co ltd filed Critical Yiwei Science And Technology Beijing Co ltd
Priority to CN202110695188.6A priority Critical patent/CN113344893A/en
Publication of CN113344893A publication Critical patent/CN113344893A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a method, a device, a medium and equipment for high-precision fundus arteriovenous identification, wherein the method comprises the following steps: extracting blood vessels from a fundus image to obtain a blood vessel segmentation map; extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map by a computer vision method; extracting a second fundus artery image and a second fundus vein image from the vessel segmentation map by a deep learning model; and obtaining a fundus artery image from the first and second fundus artery images, and a fundus vein image from the first and second fundus vein images. The method remains robust while identifying the arteries and veins with high precision.

Description

High-precision fundus arteriovenous identification method, device, medium and equipment
Technical Field
The invention relates to the field of fundus disease screening, in particular to a high-precision fundus arteriovenous identification method, device, medium and equipment.
Background
Fundus blood vessels are divided into arteries and veins. Identifying them is the basis not only for judging fundus arteriosclerosis but also for judging retinopathy associated with chronic diseases and cardiovascular and cerebrovascular diseases (such as hypertensive retinopathy), and it can reflect the structural damage these diseases cause to the body. Identifying the fundus arteries and veins and obtaining the corresponding arteriovenous indexes is therefore very important for analyzing fundus diseases, systemic chronic diseases, and cardiovascular and cerebrovascular diseases.
In the process of implementing the invention, the inventors found at least the following problem in the prior art: existing techniques identify fundus arteries and veins with limited precision.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a method, an apparatus, a medium, and a device for high-precision fundus arteriovenous identification, so as to realize high-precision arteriovenous identification.
To achieve the above object, in a first aspect, an embodiment of the present invention provides a method for high-precision fundus arteriovenous identification, including:
extracting blood vessels from the fundus image to obtain a blood vessel segmentation map;
extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method;
extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation map according to a deep learning model;
acquiring a fundus artery image according to the first fundus artery image and the second fundus artery image, and acquiring a fundus vein image according to the first fundus vein image and the second fundus vein image.
In some possible embodiments, the extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method may include:
extracting a first fundus artery image and a first fundus vein image from the blood vessel segmentation map according to the color features and morphological features of the arteries and veins in the blood vessel segmentation map, wherein the first fundus artery image comprises a main artery and branch arteries, and the first fundus vein image comprises a main vein and branch veins.
In some possible embodiments, the method may further comprise: correcting the attributes of the branch arteries and branch veins according to the topological characteristics of the main artery and the main vein; wherein the topological characteristics comprise: a branch connected to the main artery is a branch artery, a branch connected to the main vein is a branch vein, and the main artery and the main vein may cross but are not connected.
In some possible embodiments, the obtaining a fundus artery image according to the first fundus artery image and the second fundus artery image, and obtaining a fundus vein image according to the first fundus vein image and the second fundus vein image may specifically include:
weighting or intersecting the first fundus artery image and the second fundus artery image to obtain a fundus artery image;
and weighting or intersecting the first fundus vein image and the second fundus vein image to obtain a fundus vein image.
In some possible embodiments, the vessel segmentation map includes a first region and a second region centered on the optic disc, the first region being closer to the optic disc than the second region;
in the first region, the first fundus artery image corresponds to a weight lower than that of the second fundus artery image, and in the second region, the first fundus artery image corresponds to a weight higher than that of the second fundus artery image;
in the first region, the first fundus vein image corresponds to a weight lower than that of the second fundus vein image, and in the second region, the first fundus vein image corresponds to a weight higher than that of the second fundus vein image.
In some possible embodiments, the vessel segmentation map comprises N regions centered on the optic disc, where N is an integer greater than 2,
in the area close to the optic disc, the first fundus artery image corresponds to a weight lower than that of the second fundus artery image, and in the area far from the optic disc, the first fundus artery image corresponds to a weight higher than that of the second fundus artery image;
in the area close to the optic disc, the first fundus vein image corresponds to a weight lower than that of the second fundus vein image, and in the area far from the optic disc, the first fundus vein image corresponds to a weight higher than that of the second fundus vein image.
In some possible embodiments, intersecting the first fundus artery image and the second fundus artery image to obtain a fundus artery image specifically comprises: intersecting the main artery and branch arteries identified in the first fundus artery image with the main artery identified in the second fundus artery image to obtain the fundus artery image;
intersecting the first fundus vein image and the second fundus vein image to obtain a fundus vein image specifically comprises: intersecting the main vein and branch veins identified in the first fundus vein image with the main vein identified in the second fundus vein image to obtain the fundus vein image.
In a second aspect, an embodiment of the present invention provides a device for high-precision fundus arteriovenous identification, including:
the blood vessel segmentation module is used for extracting blood vessels from the fundus image to obtain a blood vessel segmentation image;
the first arteriovenous extraction module is used for extracting a first fundus artery image and a first fundus vein image from the blood vessel segmentation map according to a computer vision method;
the second arteriovenous extraction module is used for extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation image according to the deep learning model;
and the combined processing module is used for obtaining a fundus artery image according to the first fundus artery image and the second fundus artery image and obtaining a fundus vein image according to the first fundus vein image and the second fundus vein image.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements any one of the methods for high-precision fundus arteriovenous identification described above.
In a fourth aspect, an embodiment of the present invention provides an apparatus for high-precision fundus arteriovenous identification, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any one of the methods of high precision fundus arteriovenous identification described above.
In a fifth aspect, an embodiment of the present invention further provides a system for high-precision fundus arteriovenous identification, including:
the fundus camera is used for acquiring fundus images and transmitting the acquired fundus images to the cloud server;
a cloud server configured to: extracting blood vessels from the fundus image to obtain a blood vessel segmentation map; extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method; extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation map according to a deep learning model; acquiring a fundus artery image according to the first fundus artery image and the second fundus artery image, and acquiring a fundus vein image according to the first fundus vein image and the second fundus vein image.
The technical scheme has the following beneficial effects:
the embodiment of the invention obtains a blood vessel segmentation chart by extracting blood vessels from the fundus image; extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method; extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation map according to a deep learning model; acquiring a fundus artery image according to the first fundus artery image and the second fundus artery image, and acquiring a fundus vein image according to the first fundus vein image and the second fundus vein image; therefore, the accuracy of arteriovenous identification can be improved by jointly using a computer vision method and a deep learning model algorithm.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a high-precision fundus arteriovenous identification method according to an embodiment of the present invention;
FIG. 2 is a flow chart of identifying arteriovenous based on computer vision method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a correction of fundus arteriovenous extracted according to a computer vision method in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of the determination of the combined processing weights for the fundus image in regions centered on the optic disc according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of the intersection of arteriovenous images extracted by a computer vision method and by a deep learning model in an embodiment of the present invention;
FIG. 6 is a functional block diagram of a high precision fundus arteriovenous identification device of an embodiment of the present invention;
FIG. 7 is a functional block diagram of a computer-readable storage medium of an embodiment of the present invention;
fig. 8 is a functional block diagram of an apparatus for high-precision fundus arteriovenous identification according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The fundus blood vessels are divided into arteries and veins. However, because fundus vessels are so fine, identifying arteries and veins, and branch arteries and veins in particular, has long been a pain point in the industry, and achieving a robust extraction effect while extracting the fundus vessels and identifying the arteries and veins with high precision has always been the greatest difficulty in fundus arteriovenous extraction.
Example one
Fig. 1 is a flow chart of a method for high-precision fundus arteriovenous identification according to an embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:
s110: blood vessels are extracted from the fundus image, and a blood vessel segmentation map is obtained.
S120: a first fundus artery image and a first fundus vein image are extracted from the vessel segmentation map according to a computer vision method.
Specifically, computer vision (CV) technology is used to segment the fundus image and obtain an artery and vein segmentation result. Computer vision uses a camera and a computer, in place of human eyes, to identify, track and measure a target, and further processes the image so that it is better suited to human observation or to transmission to an instrument for detection. The computer vision method in this embodiment refers to a processing method that does not rely on machine learning or deep learning. As an example, the main artery, the main vein, the branch arteries and the branch veins are identified according to common characteristics: arteries and veins appear in pairs, veins have a larger caliber, arteries have a smaller caliber, arteries appear lighter in color, veins appear redder, and so on. Then the attributes of the branch arteries and branch veins are corrected according to the topological characteristics of the main artery and the main vein, which include: a branch connected to the main artery is a branch artery, a branch connected to the main vein is a branch vein, and the main artery and the main vein may or may not cross but are never connected.
In some embodiments, extracting the first fundus artery image and the first fundus vein image from the vessel segmentation map according to the computer vision method in step S120, as shown in fig. 2, may include:
S121: extracting a first fundus artery image and a first fundus vein image from the blood vessel segmentation map according to the color features and morphological features of the arteries and veins in the blood vessel segmentation map, wherein the first fundus artery image comprises a main artery and branch arteries, and the first fundus vein image comprises a main vein and branch veins. Specifically, the main artery, the main vein, the branch arteries and the branch veins are extracted from the blood vessel segmentation map according to common characteristics: arteries and veins appear in pairs, veins have a larger caliber, arteries have a smaller caliber, arteries appear lighter in color, veins appear redder, and so on.
In some embodiments, the method further comprises step S122: correcting the attributes of the branch arteries and branch veins according to the topological characteristics of the main artery and the main vein. Correcting here means modifying: for example, if the trunk vessel connected to a branch is an artery but the branch has been classified as a vein, the branch attribute is considered wrong and is modified to match the attribute of the trunk. Fig. 3 illustrates this correction for a fundus arteriovenous image extracted by the computer vision method: Fig. 3(a) is the image as extracted, and Fig. 3(b) is the corrected image. The arteriovenous attributes of branch vessels are identified and corrected using the facts that a branch connected to the main artery is a branch artery, a branch connected to the main vein is a branch vein, and arteries and veins may cross but are not connected. Branch veins extending from the main vein include, but are not limited to, secondary and tertiary branches. A vessel is either arterial or venous. In this embodiment, when the attributes of connected vessel segments are inconsistent, in particular for branches, the branch connected to the main vein is determined to be a branch vein and the branch connected to the main artery is determined to be a branch artery. A rule-based sketch of steps S121 and S122 is given below.
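The following Python sketch illustrates S121 and S122 under strong simplifying assumptions: the vessel segmentation map is a binary mask aligned with the RGB fundus image, each connected component is treated as a single segment, and the thresholds, caliber proxy and function names are hypothetical and would need tuning on real data.

```python
import numpy as np
from skimage import measure, morphology

def classify_segments_by_rules(fundus_rgb, vessel_mask,
                               brightness_thresh=1.05, caliber_thresh=4.0):
    """Step S121 sketch: label vessel segments as artery or vein from
    color and caliber (arteries brighter and narrower, veins darker and
    wider). Illustrative only; thresholds are hypothetical."""
    green = fundus_rgb[..., 1].astype(float)
    labels = measure.label(np.asarray(vessel_mask, dtype=bool), connectivity=2)
    background = np.median(green[labels == 0]) + 1e-6

    artery = np.zeros(labels.shape, dtype=bool)
    vein = np.zeros(labels.shape, dtype=bool)
    for region in measure.regionprops(labels, intensity_image=green):
        seg = labels == region.label
        skel = morphology.skeletonize(seg)
        caliber = seg.sum() / max(skel.sum(), 1)        # area / centerline length
        brightness = region.mean_intensity / background
        # brighter and narrower -> artery; darker and wider -> vein
        if brightness > brightness_thresh and caliber < caliber_thresh:
            artery |= seg
        else:
            vein |= seg
    return artery, vein

def correct_branch_attributes(artery, vein, main_artery, main_vein):
    """Step S122 sketch: a connected branch inherits the attribute of the
    trunk it touches (main_artery / main_vein are trunk masks)."""
    labels = measure.label(artery | vein, connectivity=2)
    for lab in range(1, labels.max() + 1):
        seg = labels == lab
        if (seg & main_artery).any():
            artery, vein = artery | seg, vein & ~seg
        elif (seg & main_vein).any():
            vein, artery = vein | seg, artery & ~seg
    return artery, vein
```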
S130: a second fundus artery image and a second fundus vein image are extracted from the blood vessel segmentation map according to the deep learning model.
In some possible implementations, the deep learning model may include one or more of the following: ResNet (Residual Network), VGGNet (Visual Geometry Group network), FCN (Fully Convolutional Network), U-Net, DeepLab V3+, Faster R-CNN (Faster Region-based Convolutional Neural Network), and SSD (Single Shot MultiBox Detector).
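An inference sketch for step S130, assuming a trained two-channel segmentation network (for example a U-Net or DeepLab V3+ variant) whose output channels are artery and vein probabilities; the preprocessing here is a bare rescale to [0, 1], whereas a real pipeline would add resizing and normalization with the training statistics:

```python
import numpy as np
import torch

def extract_av_deep(model, image_rgb, device="cpu", thresh=0.5):
    """Run a trained artery/vein segmentation network on an RGB image
    (H, W, 3) and return two binary masks (artery, vein)."""
    model = model.to(device).eval()
    x = torch.from_numpy(image_rgb.astype(np.float32) / 255.0)
    x = x.permute(2, 0, 1).unsqueeze(0).to(device)     # (1, 3, H, W)
    with torch.no_grad():
        probs = torch.sigmoid(model(x))                # (1, 2, H, W)
    artery = (probs[0, 0] > thresh).cpu().numpy()
    vein = (probs[0, 1] > thresh).cpu().numpy()
    return artery, vein
```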
S140: a fundus artery image is obtained from the first fundus artery image and the second fundus artery image, and a fundus vein image is obtained from the first fundus vein image and the second fundus vein image. In some embodiments, step S140 may specifically include:
weighting or intersecting the first fundus artery image and the second fundus artery image to obtain a fundus artery image;
and weighting or intersecting the first fundus vein image and the second fundus vein image to obtain a fundus vein image.
In some embodiments, the vessel segmentation map includes a first region and a second region centered on the optic disc, the first region being closer to the optic disc than the second region and showing more pronounced arteriovenous attribute features.
Because the computer vision technique recognizes subtle features such as vessel color and shape better, while the deep learning algorithm is less sensitive to subtle features but recognizes the main vessels well, the method comprises the following:
in the first region, the deep learning algorithm is mainly used for performing arteriovenous identification, namely the weight corresponding to the first fundus artery image is lower than the weight corresponding to the second fundus artery image, and in the second region, the computer vision technology is mainly used for performing arteriovenous identification, namely the weight corresponding to the first fundus artery image is higher than the weight corresponding to the second fundus artery image;
in some possible embodiments, the weight occupied by the deep learning algorithm and the weight occupied by the computer vision may be determined according to specific situations, and the weight occupied by the arteriovenous image identified by the method is higher.
In the weighting process of the embodiment of the invention, arteriovenous identification close to the optic disc takes the deep learning result as the main reference, so the deep learning algorithm carries the larger weight, while identification far from the optic disc takes the result extracted by the computer vision technique as the main reference, so the computer vision technique carries the larger weight. The vessel attributes can then be further corrected to ensure that attributes remain consistent along a continuous vessel.
In some embodiments, the vessel segmentation map includes a plurality of regions, such as circular or rectangular regions, defined from inside to outside with the optic disc as the center; Fig. 4 schematically depicts a first, second, third and fourth region, and in practice there may be more or fewer fundus regions. Different weight values are set for the different regions of the first fundus arteriovenous image, for example weights increasing from inside to outside, W1 < W2 < W3 < W4. Correspondingly, the same regions are defined on the second fundus arteriovenous image obtained by the deep-learning-based arteriovenous feature detection model, and different joint-processing weights are preset for them, for example weights decreasing from inside to outside, W1' > W2' > W3' > W4', where W1 + W1' = 1, W2 + W2' = 1, W3 + W3' = 1, and W4 + W4' = 1. As an example:
W1 = 50% < W2 = 67% < W3 = 84% < W4 = 100%, and W1' = 50% > W2' = 33% > W3' = 16% > W4' = 0%. Performing weighted summation and further screening on the first fundus arteriovenous image and the second fundus arteriovenous image to obtain the final arteriovenous image may specifically include: in each fundus region, computing the weighted sum of the pixel values of the first and second fundus arteriovenous images, screening that weighted sum, and determining the final arteriovenous image for that region. Repeating this process over the plurality of regions yields the final complete, high-precision arteriovenous image.
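A sketch of this region-weighted fusion, assuming both inputs are masks (or probability maps) of the same size and that the optic-disc center and region radii are already known; the weight values mirror the W1–W4 example above but are otherwise arbitrary:

```python
import numpy as np

def cv_weight_map(shape, disc_center, radii, cv_weights):
    """Per-pixel weight for the computer-vision result; the deep-learning
    result gets (1 - weight). `radii` are the N-1 region boundaries
    (distances from the optic-disc center) and `cv_weights` the N region
    weights, e.g. [0.50, 0.67, 0.84, 1.00]."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dist = np.hypot(yy - disc_center[0], xx - disc_center[1])
    w = np.full(shape, cv_weights[-1], dtype=float)     # outermost region
    for r, wk in zip(radii[::-1], cv_weights[-2::-1]):  # fill inwards
        w[dist <= r] = wk
    return w

def fuse_weighted(cv_mask, dl_mask, w_cv, thresh=0.5):
    """Weighted sum of the two artery (or vein) images, then screening by
    a simple threshold to obtain the final binary mask."""
    fused = w_cv * cv_mask.astype(float) + (1.0 - w_cv) * dl_mask.astype(float)
    return fused > thresh
```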
In yet another embodiment, the second fundus image obtained by the deep-learning-based arteriovenous feature detection model is intersected with the first fundus image extracted by computer vision. Referring to fig. 5, Fig. 5(a) is the first fundus arteriovenous image identified by computer vision, Fig. 5(b) is the second fundus arteriovenous image identified by the deep learning algorithm, and Fig. 5(c) is the intersection of the two. The computer vision result in Fig. 5(a) includes a large number of tiny or capillary vessels in the peripheral region and reaches sub-pixel precision, so tiny vessels are identified accurately, while the deep learning result in Fig. 5(b) accurately identifies the main arteries and veins and other larger vessels in the fundus image; intersecting the two therefore yields the intersection fundus arteriovenous image shown in Fig. 5(c).
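The intersection variant of Fig. 5 reduces to a per-pixel logical AND of the two masks; a hedged sketch (applied once for arteries and once for veins):

```python
import numpy as np

def fuse_by_intersection(cv_mask, dl_mask):
    """Intersection fusion: keep only pixels that both the computer-vision
    result and the deep-learning result label as the given class."""
    return np.logical_and(np.asarray(cv_mask, bool), np.asarray(dl_mask, bool))
```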
In the weighting or intersection processing of the embodiment of the invention, arteriovenous identification close to the optic disc takes the deep learning result as the main reference, with the deep learning algorithm carrying the larger weight, while far from the optic disc the result extracted by the computer vision method is the main reference and carries the larger weight. The vessel attributes can then be further corrected to keep attributes consistent along continuous vessels.
In a preferred embodiment, the above method comprises the following steps:
(1) Image preprocessing, specifically denoising, normalization and enhancement (a preprocessing sketch is given after this list). Denoising mainly removes noise introduced during capture and imaging and reduces its interference with the vessel features. Normalization mainly compensates for differences in exposure, unifies the color and brightness of the fundus images, and brings different images into a common gray-value range, improving the generalization of the algorithm over large volumes of images and making productization of the technique possible. Enhancement mainly enlarges the difference between the features of interest and the background so that the image features are more prominent and threshold segmentation and extraction are easier.
(2) And a fundus blood vessel extraction step, wherein a blood vessel segmentation map is obtained from the fundus image by using a computer vision technology and/or a deep learning algorithm.
(3) Arteriovenous identification method 1: a preliminary extraction is obtained from color characteristics (arteries are generally brighter), morphological characteristics (the main artery and the main vein generally appear in pairs, and arteries are relatively narrower), topological characteristics (for branch vessels, a branch connected to an artery is an artery and a branch connected to a vein is a vein), and the like.
(4) Arteriovenous identification method 2: arteriovenous identification is carried out using deep learning to obtain an arteriovenous identification image.
(5) Fusion of the two images by weighting or intersection, where the deep learning result carries the larger weight for arteriovenous identification close to the optic disc, and the computer vision result carries the larger weight for arteriovenous branches far from the optic disc.
(6) Correction of the artery and vein attributes of branch vessels: the attribute of a branch is modified according to the attribute of the main artery or vein it is connected to.
(7) Measurement of the arteriovenous indexes.
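Referring back to step (1), a minimal preprocessing sketch using OpenCV; the denoising parameters and CLAHE settings are illustrative defaults, not values from the patent:

```python
import cv2

def preprocess_fundus(img_bgr):
    """Preprocessing sketch for step (1): denoising, normalization,
    enhancement. Parameter values are illustrative only."""
    # Denoising: suppress acquisition noise while keeping vessel edges.
    den = cv2.fastNlMeansDenoisingColored(img_bgr, None, 5, 5, 7, 21)
    # Normalization: bring different images into a common gray-value range.
    norm = cv2.normalize(den, None, 0, 255, cv2.NORM_MINMAX)
    # Enhancement: CLAHE on the lightness channel to boost vessel contrast.
    lab = cv2.cvtColor(norm, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```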
While identifying the arteries and veins with high precision, the embodiment of the invention achieves sub-pixel vessel extraction precision and strong robustness. On the basis of the extracted fundus vessels, the embodiment distinguishes the arteriovenous attributes of the vessels and further calculates arterial vessel indexes (caliber, tortuosity, fractal dimension and the like), venous vessel indexes (caliber, tortuosity, fractal dimension and the like) and the arteriovenous ratio, laying a foundation for subsequent fundus disease analysis. The embodiment can also improve the identification precision of branch arteries and veins and other small vessels.
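As a rough illustration of the indexes named above, the following sketch computes whole-image approximations from the final artery and vein masks; clinical measurements are made per vessel segment in standardized zones around the optic disc, so the formulas and helper names here are simplified assumptions:

```python
import numpy as np
from skimage import morphology

def arteriovenous_indexes(artery_mask, vein_mask):
    """Whole-image approximations: mean caliber (area / centerline length),
    a crude tortuosity (centerline length / bounding-box chord) and the
    artery-to-vein ratio (AVR)."""
    def caliber(mask):
        m = np.asarray(mask, bool)
        skel = morphology.skeletonize(m)
        return m.sum() / max(skel.sum(), 1)

    def tortuosity(mask):
        skel = morphology.skeletonize(np.asarray(mask, bool))
        ys, xs = np.nonzero(skel)
        if ys.size < 2:
            return 1.0
        chord = np.hypot(ys.max() - ys.min(), xs.max() - xs.min())
        return skel.sum() / max(chord, 1.0)

    a_cal, v_cal = caliber(artery_mask), caliber(vein_mask)
    return {"artery_caliber": a_cal, "vein_caliber": v_cal,
            "avr": a_cal / max(v_cal, 1e-6),
            "artery_tortuosity": tortuosity(artery_mask),
            "vein_tortuosity": tortuosity(vein_mask)}
```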
Example two
Fig. 6 is a functional block diagram of a high-precision fundus arteriovenous identification device according to an embodiment of the present invention. As shown in fig. 6, the apparatus 600 includes:
a blood vessel segmentation module 610, configured to extract blood vessels from the fundus image, and obtain a blood vessel segmentation map;
a first arteriovenous extraction module 620, configured to extract a first fundus artery image and a first fundus vein image from the blood vessel segmentation map according to a computer vision method;
a second arteriovenous extraction module 630, configured to extract a second fundus artery image and a second fundus vein image from the blood vessel segmentation map according to a deep learning model;
a joint processing module 640, configured to obtain a fundus artery image according to the first fundus artery image and the second fundus artery image, and obtain a fundus vein image according to the first fundus vein image and the second fundus vein image.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
EXAMPLE III
Fig. 7 is a functional block diagram of a computer-readable storage medium according to an embodiment of the present invention. As shown in fig. 7, a computer program 710 is stored in the computer readable storage medium 700, and when executed by the processor, the computer program 710 implements:
extracting blood vessels from the fundus image to obtain a blood vessel segmentation map;
extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method;
extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation map according to a deep learning model;
acquiring a fundus artery image according to the first fundus artery image and the second fundus artery image, and acquiring a fundus vein image according to the first fundus vein image and the second fundus vein image.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. Of course, there are other ways of storing media that can be read, such as quantum memory, graphene memory, and so forth. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
Example four
Fig. 8 is a functional block diagram of an apparatus for high-precision fundus arteriovenous identification according to an embodiment of the present invention. As shown in fig. 8, the apparatus comprises one or more processors 801, a communication interface 802, a memory 803 and a communication bus 804, wherein the processors 801, the communication interface 802 and the memory 803 communicate with one another through the communication bus 804.
A memory 803 for storing a computer program;
the processor 801 is configured to, when executing the program stored in the memory 803, implement:
extracting blood vessels from the fundus image to obtain a blood vessel segmentation map;
extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method;
extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation map according to a deep learning model;
acquiring a fundus artery image according to the first fundus artery image and the second fundus artery image, and acquiring a fundus vein image according to the first fundus vein image and the second fundus vein image.
In some possible embodiments, the processing performed by the processor 801 of extracting the first fundus artery image and the first fundus vein image from the vessel segmentation map according to the computer vision method may include:
extracting a first fundus artery image and a first fundus vein image from the blood vessel segmentation map according to the color features and morphological features of the arteries and veins in the blood vessel segmentation map, wherein the first fundus artery image comprises a main artery and branch arteries, and the first fundus vein image comprises a main vein and branch veins.
In some possible embodiments, the processor 801 performs a process in which the attributes of the branch arteries and the branch veins are corrected according to the topological features of the main artery and the main vein; wherein the topological features comprise: a branch connected to the main artery is a branch artery, a branch connected to the main vein is a branch vein, and the main artery and the main vein may or may not cross but are never connected.
In some possible embodiments, the processing performed by the processor 801, obtaining a fundus artery image according to the first fundus artery image and the second fundus artery image, and obtaining a fundus vein image according to the first fundus vein image and the second fundus vein image may specifically include:
weighting or intersecting the first fundus artery image and the second fundus artery image to obtain a fundus artery image;
and weighting or intersecting the first fundus vein image and the second fundus vein image to obtain a fundus vein image.
In some possible embodiments, the processor 801 executes a process in which the vessel segmentation map includes a first region and a second region centered on the optic disc, and the first region is closer to the optic disc than the second region;
in the first region, the first fundus artery image corresponds to a weight lower than that of the second fundus artery image, and in the second region, the first fundus artery image corresponds to a weight higher than that of the second fundus artery image;
in the first region, the first fundus vein image corresponds to a weight lower than that of the second fundus vein image, and in the second region, the first fundus vein image corresponds to a weight higher than that of the second fundus vein image.
In some possible embodiments, the processor 801 performs a process in which the vessel segmentation map includes N regions centered on the optic disc, where N is an integer greater than 2,
in the area close to the optic disc, the first fundus artery image corresponds to a weight lower than that of the second fundus artery image, and in the area far from the optic disc, the first fundus artery image corresponds to a weight higher than that of the second fundus artery image;
in the area close to the optic disc, the first fundus vein image corresponds to a weight lower than that of the second fundus vein image, and in the area far from the optic disc, the first fundus vein image corresponds to a weight higher than that of the second fundus vein image.
In some possible embodiments, the processing performed by the processor 801 to intersect the first fundus artery image and the second fundus artery image to obtain a fundus artery image specifically includes: intersecting the main artery and branch arteries identified in the first fundus artery image with the main artery identified in the second fundus artery image to obtain the fundus artery image;
intersecting the first fundus vein image and the second fundus vein image to obtain a fundus vein image specifically includes: intersecting the main vein and branch veins identified in the first fundus vein image with the main vein identified in the second fundus vein image to obtain the fundus vein image.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the electronic equipment and other equipment.
The bus 804 includes hardware, software, or both to couple the above-described components to one another. For example, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a Hyper Transport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an infiniband interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a video electronics standards association local (VLB) bus, or other suitable bus or a combination of two or more of these. A bus may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The memory 803 may include mass storage for data or instructions. By way of example, and not limitation, the memory 803 may include a Hard Disk Drive (HDD), a floppy Disk Drive, flash memory, an optical Disk, a magneto-optical Disk, a tape, or a Universal Serial Bus (USB) Drive or a combination of two or more of these. The memory 803 may include removable or non-removable (or fixed) media, where appropriate. In a particular embodiment, the memory 803 is a non-volatile solid-state memory. In certain embodiments, memory 803 comprises Read Only Memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory or a combination of two or more of these.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although the present application provides method steps as described in an embodiment or flowchart, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or end product executes, it may execute sequentially or in parallel (e.g., parallel processors or multi-threaded environments, or even distributed data processing environments) according to the method shown in the embodiment or the figures.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device, the electronic device and the readable storage medium embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for high-precision fundus arteriovenous identification is characterized by comprising the following steps:
extracting blood vessels from the fundus image to obtain a blood vessel segmentation map;
extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method;
extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation map according to a deep learning model;
acquiring a fundus artery image according to the first fundus artery image and the second fundus artery image, and acquiring a fundus vein image according to the first fundus vein image and the second fundus vein image.
2. The method of claim 1, wherein said extracting a first fundus artery image and a first fundus vein image from the vessel segmentation map according to a computer vision method comprises:
extracting a first fundus artery image and a first fundus vein image from the blood vessel segmentation map according to the color features and morphological features of the arteries and veins in the blood vessel segmentation map, wherein the first fundus artery image comprises a main artery and branch arteries, and the first fundus vein image comprises a main vein and branch veins.
3. The method of claim 2, further comprising: correcting the attributes of the branch arteries and branch veins according to the topological characteristics of the main artery and the main vein; wherein the topological characteristics comprise: a branch connected to the main artery is a branch artery, a branch connected to the main vein is a branch vein, and the main artery and the main vein may cross but are not connected.
4. The method according to claim 1, wherein obtaining a fundus artery image from the first fundus artery image and the second fundus artery image and obtaining a fundus vein image from the first fundus vein image and the second fundus vein image specifically comprises:
weighting or intersecting the first fundus artery image and the second fundus artery image to obtain a fundus artery image;
and weighting or intersecting the first fundus vein image and the second fundus vein image to obtain a fundus vein image.
5. The method of claim 4, wherein the vessel segmentation map includes a first region and a second region centered on the optic disc, the first region being closer to the optic disc than the second region;
in the first region, the first fundus artery image corresponds to a weight lower than that of the second fundus artery image, and in the second region, the first fundus artery image corresponds to a weight higher than that of the second fundus artery image;
in the first region, the first fundus vein image corresponds to a weight lower than that of the second fundus vein image, and in the second region, the first fundus vein image corresponds to a weight higher than that of the second fundus vein image.
6. The method of claim 5, wherein the vessel segmentation map includes N regions centered on the optic disc, where N is an integer greater than 2,
in the area close to the optic disc, the first fundus artery image corresponds to a weight lower than that of the second fundus artery image, and in the area far from the optic disc, the first fundus artery image corresponds to a weight higher than that of the second fundus artery image;
in the area close to the optic disc, the first fundus vein image corresponds to a weight lower than that of the second fundus vein image, and in the area far from the optic disc, the first fundus vein image corresponds to a weight higher than that of the second fundus vein image.
7. The method of claim 4,
intersecting the first fundus artery image and the second fundus artery image to obtain a fundus artery image, specifically comprising: intersecting the main artery and branch arteries identified in the first fundus artery image with the main artery identified in the second fundus artery image to obtain the fundus artery image;
intersecting the first fundus vein image and the second fundus vein image to obtain a fundus vein image, specifically comprising: intersecting the main vein and branch veins identified in the first fundus vein image with the main vein identified in the second fundus vein image to obtain the fundus vein image.
8. A device for high-precision fundus arteriovenous identification, characterized by comprising:
the blood vessel segmentation module is used for extracting blood vessels from the fundus image to obtain a blood vessel segmentation image;
the first arteriovenous extraction module is used for extracting a first fundus artery image and a first fundus vein image from the blood vessel segmentation image according to a computer vision method;
the second arteriovenous extraction module is used for extracting a second fundus artery image and a second fundus vein image from the blood vessel segmentation image according to the deep learning model;
and the combined processing module is used for obtaining a fundus artery image according to the first fundus artery image and the second fundus artery image and obtaining a fundus vein image according to the first fundus vein image and the second fundus vein image.
9. A computer-readable storage medium on which a computer program is stored, which program, when executed by a processor, implements a method of high accuracy fundus arteriovenous identification as claimed in any one of claims 1 to 7.
10. An apparatus for high-precision fundus arteriovenous identification, characterized in that it comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of high accuracy fundus arteriovenous identification of any of claims 1-7.
CN202110695188.6A 2021-06-23 2021-06-23 High-precision fundus arteriovenous identification method, device, medium and equipment Pending CN113344893A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110695188.6A CN113344893A (en) 2021-06-23 2021-06-23 High-precision fundus arteriovenous identification method, device, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110695188.6A CN113344893A (en) 2021-06-23 2021-06-23 High-precision fundus arteriovenous identification method, device, medium and equipment

Publications (1)

Publication Number Publication Date
CN113344893A true CN113344893A (en) 2021-09-03

Family

ID=77477679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110695188.6A Pending CN113344893A (en) 2021-06-23 2021-06-23 High-precision fundus arteriovenous identification method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN113344893A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115511883A (en) * 2022-11-10 2022-12-23 北京鹰瞳科技发展股份有限公司 Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001145604A (en) * 1999-11-19 2001-05-29 Nippon Telegr & Teleph Corp <Ntt> Method for identifying artery and vein of fundus oculi image
CN109166124A (en) * 2018-11-20 2019-01-08 中南大学 A kind of retinal vascular morphologies quantization method based on connected region
CN110443813A (en) * 2019-07-29 2019-11-12 腾讯医疗健康(深圳)有限公司 Blood vessel, the dividing method of eye fundus image, device, equipment and readable storage medium storing program for executing
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN111797901A (en) * 2020-06-09 2020-10-20 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Retinal artery and vein classification method and device based on topological structure estimation
CN111932535A (en) * 2020-09-24 2020-11-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
US20200394789A1 (en) * 2019-06-12 2020-12-17 Carl Zeiss Meditec Inc Oct-based retinal artery/vein classification
US20210022606A1 (en) * 2018-04-18 2021-01-28 Nikon Corporation Image processing method, program, image processing device, and ophthalmic system
CN112734774A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 High-precision fundus blood vessel extraction method, device, medium, equipment and system
CN112716446A (en) * 2020-12-28 2021-04-30 深圳硅基智能科技有限公司 Method and system for measuring pathological change characteristics of hypertensive retinopathy

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001145604A (en) * 1999-11-19 2001-05-29 Nippon Telegr & Teleph Corp <Ntt> Method for identifying artery and vein of fundus oculi image
US20210022606A1 (en) * 2018-04-18 2021-01-28 Nikon Corporation Image processing method, program, image processing device, and ophthalmic system
CN109166124A (en) * 2018-11-20 2019-01-08 中南大学 A kind of retinal vascular morphologies quantization method based on connected region
US20200394789A1 (en) * 2019-06-12 2020-12-17 Carl Zeiss Meditec Inc Oct-based retinal artery/vein classification
CN110443813A (en) * 2019-07-29 2019-11-12 腾讯医疗健康(深圳)有限公司 Blood vessel, the dividing method of eye fundus image, device, equipment and readable storage medium storing program for executing
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN111797901A (en) * 2020-06-09 2020-10-20 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Retinal artery and vein classification method and device based on topological structure estimation
CN111932535A (en) * 2020-09-24 2020-11-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN112716446A (en) * 2020-12-28 2021-04-30 深圳硅基智能科技有限公司 Method and system for measuring pathological change characteristics of hypertensive retinopathy
CN112734774A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 High-precision fundus blood vessel extraction method, device, medium, equipment and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SOURYA SENGUPTA等: ""Application of Deep Learning in Fundus Image Processing for Ophthalmic Diagnosis -- A Review"", Retrieved from the Internet <URL:https://arxiv.org/pdf/1812.07101.pdf> *
YANG YI: "Research on Retinal Vessel Segmentation and Arteriovenous Classification Methods" (视网膜血管分割与动静脉分类方法研究), China Master's Theses Full-text Database, Information Science and Technology, 15 February 2017 (2017-02-15), page 3 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115511883A (en) * 2022-11-10 2022-12-23 北京鹰瞳科技发展股份有限公司 Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel

Similar Documents

Publication Publication Date Title
CN111340789A (en) Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN110889826B (en) Eye OCT image focus region segmentation method, device and terminal equipment
CN111681242B (en) Retinal vessel arteriovenous distinguishing method, device and equipment
CN112734785B (en) Method, device, medium and equipment for determining sub-pixel level fundus blood vessel boundary
CN104182974B (en) A speeded up method of executing image matching based on feature points
CN112734828B (en) Method, device, medium and equipment for determining center line of fundus blood vessel
CN112734774B (en) High-precision fundus blood vessel extraction method, device, medium, equipment and system
CN113470102B (en) Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN112037287B (en) Camera calibration method, electronic equipment and storage medium
CN113344893A (en) High-precision fundus arteriovenous identification method, device, medium and equipment
CN113450329B (en) Microcirculation image blood vessel branch erythrocyte flow rate calculation method and system
CN111681276A (en) Method and device for determining ratio of arteriovenous diameter in fundus image and electronic equipment
CN112734773B (en) Sub-pixel-level fundus blood vessel segmentation method, device, medium and equipment
CN112560839A (en) Automatic identification method and system for reading of pointer instrument
CN114387219A (en) Method, device, medium and equipment for detecting arteriovenous cross compression characteristics of eyeground
CN110969617A (en) Method, device and equipment for identifying image of optic cup and optic disk and storage medium
CN115100178A (en) Method, device, medium and equipment for evaluating morphological characteristics of fundus blood vessels
CN114387209A (en) Method, apparatus, medium, and device for fundus structural feature determination
CN115829980A (en) Image recognition method, device, equipment and storage medium for fundus picture
CN115546185A (en) Blood vessel image contour extraction method, device, equipment and storage medium
CN115511883A (en) Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel
CN115830686A (en) Biological recognition method, system, device and storage medium based on feature fusion
CN114387218A (en) Vision-calculation-based identification method, device, medium, and apparatus for characteristics of fundus oculi
CN112734701A (en) Fundus focus detection method, fundus focus detection device and terminal equipment
CN113344895A (en) High-precision fundus blood vessel diameter measuring method, device, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination