CN109993765B - Method for detecting retinal vein cross compression angle - Google Patents

Method for detecting retinal vein cross compression angle

Info

Publication number
CN109993765B
Authority
CN
China
Prior art keywords
edge
points
point
value
double
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910281136.7A
Other languages
Chinese (zh)
Other versions
CN109993765A (en)
Inventor
赵晓芳
刘铠瑜
林盛鑫
李碧富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan No1 Middle School
Dongguan University of Technology
Original Assignee
Dongguan No1 Middle School
Dongguan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan No1 Middle School and Dongguan University of Technology
Priority to CN201910281136.7A
Publication of CN109993765A
Application granted
Publication of CN109993765B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14 Arrangements specially adapted for eye photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting the retinal vein cross compression angle. The method comprises image preprocessing; edge corner detection and screening, in which the corners of the binarized image are detected and screened and the two extracted corner points are used as one point of each of two straight lines; double-edge separation, in which the blood vessel edge information of the binarized image is extracted and the two edges of the vessel wall are separated according to the corner positions to obtain independent single edges; curve fitting of each separated edge and curvature calculation from the fitted curve equation to obtain the curvature maximum point; and angle calculation. The invention measures the angle with a small error without relying heavily on the individual skill of an ophthalmologist, and eliminates inter-observer differences.

Description

Method for detecting retinal vein cross compression angle
Technical Field
The invention relates to a detection method, in particular to a method for detecting a retinal vein cross compression angle.
Background
Retinal vein occlusion is caused by retinal inflammation, retinal hypoperfusion, hypertension, arteriosclerosis and the like. Medically, the narrowing of veins caused by arteriovenous crossing compression on the fundus retina is considered to be related to branch retinal vein occlusion, so the degree of retinal vein narrowing at an arteriovenous crossing is particularly important for diagnosing branch retinal vein occlusion. The degree of narrowing of the vein can be characterized by an angle.
Detection of the retinal vein cross compression angle is mainly performed for branch retinal vein occlusion by calculating an angle, and this angle-based approach adapts well to rotation, scaling and translation. In current practice, however, the process relies on manual observation and manual marking and calculation by an ophthalmologist; it is not only inefficient but also highly subjective and strictly dependent on the ophthalmologist's skill. Manual measurement errors reach about ten degrees, and when a large amount of image data must be processed the ophthalmologist spends considerable time and effort, the patient's treatment is delayed, and visual fatigue may lead to misreading of the data and an incorrect judgment of the patient's condition.
Disclosure of Invention
The object of the invention is to provide a method for detecting the retinal vein cross compression angle that enables automatic calculation and detection of the angle.
In order to solve the above technical problem, the invention adopts the following technical solution:
a method for detecting a retinal vein cross compression angle is characterized by comprising the following steps:
Step one: image preprocessing, namely filtering, denoising, enhancing and binarizing the image;
Step two: edge corner detection and screening, namely detecting and screening the corners of the binarized image, providing the initial separation points for the subsequent double-edge separation, and taking the two extracted corner points as one point of each of the two straight lines;
Step three: double-edge separation, namely extracting the blood vessel edge information of the binarized image and separating the two edges of the vessel wall according to the corner positions to obtain independent single edges;
Step four: edge fitting and curvature extreme point acquisition, namely fitting a curve to each separated single edge and calculating the curvature from the fitted curve equation to obtain the curvature maximum point;
Step five: angle calculation, namely forming two straight lines from the corner point and the curvature extreme point corresponding to each edge, and then calculating the forward included angle between the two lines along the blood flow direction.
Further, step one specifically comprises filtering, denoising and enhancing the input image, performing a dilation operation to connect fine gaps, interactively acquiring the blood vessel to be detected, performing binarization, extracting the main part by taking the largest connected component, and rejecting the remaining redundant edge information.
Further, step two specifically comprises performing corner detection on the ROI with a CPDA-based corner detection technique, classifying the identified corners into five types, and then screening out the required bifurcation points by applying a rectangular filter to the corners within the region.
Further, step three specifically comprises:
3.1 extracting the two corner points: the part to be detected is selected by a manual bounding box, two suitable corner points are selected within it, and the two selected corner points are called the seed points for edge growth;
3.2 first-level removal of invalid edge points: the extreme x and y values of the two corner points are selected as the edge point screening condition, namely
[screening condition given in the original as a formula image]
where x and y are the x, y coordinates satisfying the condition, x_corner and y_corner are the x, y coordinates of the two corner points, and x_i, x_j, y_i, y_j are the x, y coordinates of the valid edge points; the edge points in the regularly cropped image are divided into a valid part and an invalid part according to their validity, the valid part forming the edge points that are finally fitted, while the invalid part does not participate in edge formation and consists only of useless points introduced by the regular cropping used when selecting the corner points; the x and y bounds obtained above are applied to all edge points, and the edge points that do not satisfy the condition are removed;
3.3 distance judgment: taking the y value obtained above as the starting point of a line scan, moving from the edge where this y value lies to the maximum y value of the valid edge points, i.e.
y → y_max(i,j)
and judging the distance from each edge point to each of the edge growing points, i.e. the two corner points,
[distance formulas A and B given in the original as a formula image]
D = min(A, B)
where A is the distance from the first edge growing point to the edge point to be identified, B is the distance from the second edge growing point to the edge point to be identified, and D is the maximum distance tolerance allowed for a point to join an edge; when neither A nor B satisfies the tolerance, the point is regarded as a noise point and is removed; when exactly one of A and B satisfies it, the point is regarded as a valid edge point and is assigned to that edge; when both A and B satisfy it, the edge at the minimum distance is selected; owing to the two edge seed points, the edge growing points follow y → y_max(i,j) and grow into two independent edges;
3.4 second-level removal of invalid edge points: after the two edges are obtained, invalid edge points still remain; they are removed again using the method of the first-level removal, cleaning up the edge points.
Further, step four specifically comprises:
4.1 edge point preprocessing: among the obtained edge points some share the same x or y value, and these must be screened before the equation fitting; the screening modes are maximum screening, minimum screening and mean screening, the retained value being chosen in the direction away from the other edge; when the x values of a large proportion of the edge points coincide, the y value is selected as the independent variable for fitting instead;
4.2 edge fitting: after edge points without repeated x or y values are obtained, the data are fitted with the quintic polynomial
y = p1*x^5 + p2*x^4 + p3*x^3 + p4*x^2 + p5*x + p6
or
x = p1*y^5 + p2*y^4 + p3*y^3 + p4*y^2 + p5*y + p6
to obtain the fitting equation;
4.3 solving the curve curvature and its extreme values: according to the curvature formula
K = |y''| / (1 + y'^2)^(3/2)
the curvature of the fitted equation is solved over the portion covered by the edge points, and the curvature extreme values are then located; because the data break off at the head and tail of the edge, extreme values are often captured there, but these extreme points are discarded so that the true extreme point is not affected by the artificial cropping.
Further, step five specifically comprises: forming two straight lines from the obtained curvature extreme points and the two corner points, calculating the tangent of the forward included angle according to the formula
tan θ = |(k1 - k2) / (1 + k1*k2)|
where k1 and k2 are the slopes of the two lines, and then obtaining the angle value by the arctangent.
Compared with the prior art, the invention has the following advantages and effects: the image is processed immediately after it is acquired, which reduces the time spent on subjective intervention and diagnostic judgment by doctors, allows the patient's condition to be judged effectively and treated accordingly on site from the obtained data, and reduces manual error and delay; the method measures the angle with a small error without relying heavily on the individual skill of an ophthalmologist, and eliminates inter-observer differences.
Drawings
Fig. 1 is a flowchart of a method for detecting a retinal vein cross compression angle according to the present invention.
Fig. 2 is a schematic diagram of a bifurcation point in an embodiment of the present invention.
Figure 3 is a schematic diagram of manual corner point box selection in an embodiment of the present invention.
FIG. 4 is a schematic diagram of one-level culling of invalid points according to an embodiment of the invention.
FIG. 5 is a left and right edge schematic view of an embodiment of the present invention.
FIG. 6 is a schematic diagram of left and right edge screening for the same coordinates and fitting according to an embodiment of the invention.
FIG. 7 is a schematic diagram of curvature fitting and extreme point acquisition according to an embodiment of the invention.
FIG. 8 is a graphical illustration of bi-linear angle values for an embodiment of the present invention.
Detailed Description
The present invention is further illustrated by the following embodiments, which are illustrative of the present invention and are not to be construed as limiting it.
As shown in fig. 1, a method for detecting a retinal vein crossing compression angle of the present invention includes the following steps:
the method comprises the following steps: and image preprocessing, namely filtering, denoising, enhancing and binarizing the image.
This part of the processing is mainly to enhance the blood vessel and background contrast in order to segment the blood vessel from the background. Filtering, denoising and enhancing an input image, then performing expansion operation to connect fine gaps, obtaining blood vessels to be detected in an interactive mode, then performing binarization processing, extracting main parts in a mode of obtaining a maximum connected domain, and rejecting other redundant edge information.
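For illustration, a minimal Python/OpenCV sketch of this preprocessing chain is given below. The use of the green channel, the median-filter and CLAHE parameters, the 3x3 dilation kernel and Otsu thresholding are assumptions of the sketch rather than values specified by the invention, and the interactive selection of the vessel to be detected is omitted.

```python
import cv2
import numpy as np

def preprocess_fundus(img_bgr):
    """Step one sketch: denoise, enhance, dilate, binarize, keep the largest
    connected component. Parameter values are illustrative only."""
    green = img_bgr[:, :, 1]                       # green channel: best vessel contrast
    denoised = cv2.medianBlur(green, 5)            # filtering / denoising
    enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(denoised)
    inverted = cv2.bitwise_not(enhanced)           # vessels are dark, so invert first
    dilated = cv2.dilate(inverted, np.ones((3, 3), np.uint8))   # connect fine gaps
    _, binary = cv2.threshold(dilated, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    # keep only the largest connected component (the main vessel structure)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if n > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        binary = np.where(labels == largest, 255, 0).astype(np.uint8)
    return binary
```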
Step two: edge corner detection and screening, namely detecting and screening the corners of the binarized image, providing the initial separation points for the subsequent double-edge separation, and taking the two extracted corner points as one point of each of the two straight lines.
Corner detection is performed on the ROI with a CPDA-based corner detection technique, and the identified corners are classified into five types: end points, intermediate points, T-shaped bifurcation points, Y-shaped bifurcation points and crossing points; the required bifurcation points are then screened out by applying a rectangular filter to the corners within the region, as shown in fig. 2.
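OpenCV provides no built-in CPDA detector, so the sketch below uses Shi-Tomasi corners (cv2.goodFeaturesToTrack) as a stand-in and reduces the rectangular filter to a bounds test on a user-supplied window; the five-way corner classification is omitted and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def corners_in_rect(binary, rect):
    """Step two sketch: find candidate corner points on the binarized vessel and
    keep only those inside a rectangular window around the arteriovenous crossing.
    Shi-Tomasi corners stand in for the CPDA detector; parameters are illustrative."""
    x, y, w, h = rect
    pts = cv2.goodFeaturesToTrack(binary, maxCorners=50,
                                  qualityLevel=0.05, minDistance=5)
    if pts is None:
        return np.empty((0, 2), dtype=float)
    pts = pts.reshape(-1, 2)
    # rectangular filter: retain corners lying inside the selected window
    inside = ((pts[:, 0] >= x) & (pts[:, 0] <= x + w) &
              (pts[:, 1] >= y) & (pts[:, 1] <= y + h))
    return pts[inside]
```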
Step three: double-edge separation, namely extracting the blood vessel edge information of the binarized image and separating the two edges of the vessel wall according to the corner positions to obtain independent single edges.
The two corner points serve as the initial growing points for edge separation, and the edge corresponding to each corner point is formed by connecting points to it step by step, so that the two edges are separated.
3.1 extracting the two corner points: the part to be detected is selected by a manual bounding box, and two suitable corner points are then selected within it; relying on this manual intervention greatly increases both the speed and the accuracy of the procedure. The two selected corner points are called the seed points for edge growth, as shown in fig. 3.
3.2 first-level removal of invalid edge points: the extreme x and y values of the two corner points are selected as the edge point screening condition, namely
[screening condition given in the original as a formula image]
where x and y are the x, y coordinates satisfying the condition, x_corner and y_corner are the x, y coordinates of the two corner points, and x_i, x_j, y_i, y_j are the x, y coordinates of the valid edge points. The edge points in the regularly cropped image are divided into a valid part and an invalid part according to their validity, the valid part forming the edge points that are finally fitted, while the invalid part does not participate in edge formation and consists only of useless points introduced by the regular cropping used when selecting the corner points. A schematic diagram is shown in fig. 4. The x and y bounds obtained above are applied to all edge points and those that do not satisfy the condition are removed; if invalid edge points still remain among the remaining points, they are further removed by the following distance judgment.
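The exact screening inequality appears only as a formula image in the original; the sketch below assumes it amounts to keeping edge points whose coordinates lie between the extreme x and y values of the two corner points, and uses a Canny edge map purely as an illustrative way of obtaining the vessel-wall edge points.

```python
import cv2
import numpy as np

def vessel_edge_points(binary):
    """Extract vessel-wall edge pixels from the binarized image as (x, y) pairs.
    Canny is used here only as an illustrative edge extractor."""
    edges = cv2.Canny(binary, 50, 150)
    ys, xs = np.nonzero(edges)
    return np.column_stack((xs, ys)).astype(float)

def primary_cull(edge_pts, corner_a, corner_b, margin=0):
    """Step 3.2 sketch: first-level removal of invalid edge points, assumed here to
    keep only points whose x and y lie between the extreme coordinates of the two
    corner points (optionally widened by a margin)."""
    (xa, ya), (xb, yb) = corner_a, corner_b
    x_lo, x_hi = min(xa, xb) - margin, max(xa, xb) + margin
    y_lo, y_hi = min(ya, yb) - margin, max(ya, yb) + margin
    keep = ((edge_pts[:, 0] >= x_lo) & (edge_pts[:, 0] <= x_hi) &
            (edge_pts[:, 1] >= y_lo) & (edge_pts[:, 1] <= y_hi))
    return edge_pts[keep]
```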
3.3 distance judgment: taking the y value obtained above as the starting point of a line scan, moving from the edge where this y value lies to the maximum y value of the valid edge points, i.e.
y → y_max(i,j)
and judging the distance from each edge point to each of the edge growing points, i.e. the two corner points,
[distance formulas A and B given in the original as a formula image]
D = min(A, B)
where A is the distance from the first edge growing point to the edge point to be identified, B is the distance from the second edge growing point to the edge point to be identified, and D is the maximum distance tolerance allowed for a point to join an edge. When neither A nor B satisfies the tolerance, the point is regarded as a noise point and is removed; when exactly one of A and B satisfies it, the point is regarded as a valid edge point and is assigned to that edge; when both A and B satisfy it, the edge at the minimum distance is selected. Owing to the two edge seed points, the edge growing points follow y → y_max(i,j) and grow into two independent edges. The effect is shown schematically in fig. 5.
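A sketch of the nearest-seed growth described in 3.3 follows; the row-by-row scan order and the tolerance value d_max are assumptions about details the text leaves open.

```python
import numpy as np

def grow_double_edges(edge_pts, seed_a, seed_b, d_max=5.0):
    """Step 3.3 sketch: split the culled edge points into two independent edges.
    Each candidate joins the growing edge whose most recent point is nearer,
    provided that distance is within d_max; otherwise it is discarded as noise."""
    edge_a = [np.asarray(seed_a, dtype=float)]
    edge_b = [np.asarray(seed_b, dtype=float)]
    # scan candidates row by row (increasing y), mimicking the line scan y -> y_max
    for p in edge_pts[np.argsort(edge_pts[:, 1])]:
        da = np.hypot(*(p - edge_a[-1]))
        db = np.hypot(*(p - edge_b[-1]))
        if da > d_max and db > d_max:
            continue                      # noise point: neither edge accepts it
        if da <= db:
            edge_a.append(p)              # nearer to the first growing edge
        else:
            edge_b.append(p)              # nearer to the second growing edge
    return np.asarray(edge_a), np.asarray(edge_b)
```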
3.4 second-level removal of invalid edge points: after the two edges are obtained, some invalid edge points still remain, formed because their growth direction does not match the trend line of the edge during the growing process. These points are removed again using the method of the first-level removal, cleaning up the edge points.
Step four: edge fitting and curvature extreme point acquisition, namely fitting a curve to each separated single edge and calculating the curvature from the fitted curve equation to obtain the curvature maximum point.
An equation is fitted to each obtained edge, and the extreme point is then obtained by solving the curvature of the corresponding fitted curve.
4.1 edge point preprocessing: among the obtained edge points some share the same x or y value, and these must be screened before the equation fitting. In general the fitting mostly uses x as the independent variable, so edge points with a repeated x value can ultimately keep only one y value at that x. The screening modes are maximum screening, minimum screening and mean screening, the retained value being chosen in the direction away from the other edge so that the edge contour is preserved as far as possible; the effect is shown in fig. 6. When the x values of a large proportion of the edge points coincide, fitting against the x coordinate loses many edge points and greatly reduces how faithfully the edge information is reproduced; for example, if only 3 of 20 edge points survive the screening, the proportion of valid points is only 3/20 and the fitting result is poor. In that case the y value is selected as the independent variable instead, so that as many edge points as possible participate in the reconstruction and the truest edge is preserved.
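A sketch of this preprocessing follows; the rule "away from the other edge" is approximated against the other edge's mean coordinate, and the 50% threshold for switching the independent variable is an illustrative choice, not a value from the patent.

```python
import numpy as np

def prepare_for_fit(edge, other_edge, min_unique_ratio=0.5):
    """Step 4.1 sketch: remove repeated abscissa values before fitting.
    For each repeated x keep the y farthest from the other edge (preserving the
    vessel-wall contour); if too few unique x values remain, swap the roles of
    x and y so that y becomes the independent variable."""
    xs, ys = edge[:, 0], edge[:, 1]
    use_x = len(np.unique(xs)) >= min_unique_ratio * len(xs)
    if not use_x:
        xs, ys = ys, xs                   # fit x as a function of y instead
    ref = float(np.mean(other_edge[:, 1] if use_x else other_edge[:, 0]))
    kept = []
    for v in np.unique(xs):
        cand = ys[xs == v]
        kept.append((v, cand[np.argmax(np.abs(cand - ref))]))  # value away from other edge
    pts = np.asarray(kept)
    return pts[:, 0], pts[:, 1], use_x    # independent var, dependent var, flag
```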
4.2 edge fitting: after edge points without repeated x or y values are obtained, the data are fitted with the quintic polynomial
y = p1*x^5 + p2*x^4 + p3*x^3 + p4*x^2 + p5*x + p6
or
x = p1*y^5 + p2*y^4 + p3*y^3 + p4*y^2 + p5*y + p6
to obtain the fitting equation.
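The quintic least-squares fit can be sketched with numpy.polyfit, whose coefficient order (highest power first) matches p1 through p6:

```python
import numpy as np

def fit_quintic(indep, dep):
    """Step 4.2 sketch: least-squares quintic fit dep = p1*t^5 + ... + p6."""
    coeffs = np.polyfit(indep, dep, 5)    # coefficients, highest power first
    return np.poly1d(coeffs)              # callable polynomial object
    # usage: poly = fit_quintic(x, y); y_fit = poly(x)
```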
4.3 solving the curve curvature and its extreme values: according to the curvature formula
K = |y''| / (1 + y'^2)^(3/2)
the curvature of the fitted equation is solved over the portion covered by the edge points, and the curvature extreme values are then located. Because the data break off at the head and tail of the edge, extreme values are often captured there, but these extreme points are discarded so that the true extreme point is not affected by the artificial cropping; the effect is shown in fig. 7.
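A sketch of the curvature evaluation and extreme-point selection follows; the number of samples and the 10% head/tail trim are illustrative assumptions standing in for the removal of extreme points near the artificial cut.

```python
import numpy as np

def curvature_peak(poly, indep, trim_frac=0.1):
    """Step 4.3 sketch: evaluate K = |y''| / (1 + y'^2)^(3/2) along the fitted
    curve and return the location of the curvature maximum, ignoring the head
    and tail samples where the artificial cut distorts the extreme values."""
    t = np.linspace(indep.min(), indep.max(), 500)
    d1, d2 = np.polyder(poly, 1), np.polyder(poly, 2)
    k = np.abs(d2(t)) / (1.0 + d1(t) ** 2) ** 1.5
    trim = max(1, int(trim_frac * len(t)))
    i = trim + int(np.argmax(k[trim:len(t) - trim]))   # index of the inner maximum
    # note: when y was used as the independent variable, the returned pair is
    # (y, x) and should be swapped back to (x, y) before the angle calculation
    return t[i], poly(t[i])
```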
Step five: angle calculation, namely forming two straight lines from the corner point and the curvature extreme point corresponding to each edge, and then calculating the forward included angle between the two lines along the blood flow direction.
From the obtained curvature extreme points and the two corner points, the two straight lines are obtained respectively; the tangent of the forward included angle is calculated according to the formula
tan θ = |(k1 - k2) / (1 + k1*k2)|
where k1 and k2 are the slopes of the two lines, and the angle value is then obtained by the arctangent; the effect is shown in fig. 8.
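A sketch of the final angle calculation follows, using the standard formula for the angle between two lines; it assumes neither line is vertical and that the lines are not perpendicular (k1*k2 = -1).

```python
import numpy as np

def crossing_angle(corner_a, peak_a, corner_b, peak_b):
    """Step five sketch: each straight line joins a corner point to the curvature
    extreme point of its edge; the compression angle follows from
    tan(theta) = |(k1 - k2) / (1 + k1*k2)| and is returned in degrees."""
    k1 = (peak_a[1] - corner_a[1]) / (peak_a[0] - corner_a[0])   # slope of line 1
    k2 = (peak_b[1] - corner_b[1]) / (peak_b[0] - corner_b[0])   # slope of line 2
    return float(np.degrees(np.arctan(abs((k1 - k2) / (1.0 + k1 * k2)))))
```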
According to the invention, the image is processed immediately after it is acquired, which reduces the time spent on subjective intervention and diagnostic judgment by doctors, allows the patient's condition to be judged effectively and treated accordingly on site from the obtained data, and reduces manual error and delay; the method measures the angle with a small error without relying heavily on the individual skill of an ophthalmologist, and eliminates inter-observer differences.
The above description of the present invention is intended to be illustrative. Various modifications, additions and substitutions for the specific embodiments described may be made by those skilled in the art without departing from the scope of the invention as defined in the accompanying claims.

Claims (7)

1. A method for detecting a retinal vein cross compression angle is characterized by comprising the following steps:
step one: image preprocessing, namely filtering, denoising, enhancing and binarizing the image;
step two: edge corner detection and screening, namely detecting and screening the corners of the binarized image;
step three: double-edge separation, namely taking the two corner points as the initial growing points for edge separation and forming the edge corresponding to each corner point by connecting points to it step by step;
wherein step three specifically comprises:
3.1 extracting the two corner points: the part to be detected is selected by a manual bounding box, two suitable corner points are selected within it, and the two selected corner points are called the seed points for edge growth;
3.2 first-level removal of invalid edge points;
the first-level removal of invalid edge points specifically comprises:
selecting the extreme x and y values of the two corner points as the edge point screening condition, namely
[screening condition given in the original as a formula image]
where x and y are the x, y coordinates satisfying the condition, x_corner and y_corner are the x, y coordinates of the two corner points, and x_i, x_j, y_i, y_j are the x, y coordinates of the valid edge points;
dividing the edge points in the regularly cropped image into a valid part and an invalid part according to their validity, the valid part forming the edge points that are finally fitted, while the invalid part does not participate in edge formation and consists only of useless points introduced by the regular cropping used when selecting the corner points;
applying the x and y bounds obtained above to all edge points and removing the edge points that do not satisfy the condition;
3.3 distance judgment;
the distance judgment specifically comprises:
taking the y value obtained in 3.2 as the starting point of a line scan, moving from the edge where this y value lies to the maximum y value of the valid edge points, i.e.
y → y_max(i,j)
and judging the distance from each edge point to each of the edge growing points, i.e. the two corner points,
[distance formulas A and B given in the original as a formula image]
D = min(A, B)
where A is the distance from the first edge growing point to the edge point to be identified, B is the distance from the second edge growing point to the edge point to be identified, and D is the maximum distance tolerance allowed for a point to join an edge;
when neither A nor B satisfies the tolerance, regarding the point as a noise point and removing it;
when exactly one of A and B satisfies it, regarding the point as a valid edge point and assigning it to that edge;
when both A and B satisfy it, selecting the edge at the minimum distance;
owing to the two edge seed points, the edge growing points follow y → y_max(i,j) and grow into two independent edges;
3.4 second-level removal of invalid edge points: after the two edges are obtained, invalid edge points still remain and are removed again using the method of the first-level removal, cleaning up the edge points;
step four: edge fitting and curvature extreme point acquisition, namely fitting a curve to each separated single edge and calculating the curvature from the fitted curve equation to obtain the curvature maximum point;
step five: angle calculation, namely forming two straight lines from the corner point and the curvature extreme point corresponding to each edge, and then calculating the forward included angle between the two lines along the blood flow direction.
2. The method for detecting a retinal vein cross compression angle of claim 1, wherein: step one specifically comprises filtering, denoising and enhancing the input image, performing a dilation operation to connect fine gaps, interactively acquiring the blood vessel to be detected, performing binarization, extracting the main part by taking the largest connected component, and rejecting the remaining redundant edge information.
3. The method for detecting a retinal vein cross compression angle of claim 1, wherein: step two specifically comprises performing corner detection on the ROI with a CPDA-based corner detection technique, classifying the identified corners into five types, and then screening out the required bifurcation points by applying a rectangular filter to the corners within the region.
4. The method for detecting a retinal vein cross compression angle of claim 1, wherein: step four specifically comprises
4.1 edge point preprocessing;
4.2 edge fitting;
4.3 solving the curve curvature and its extreme values: according to the curvature formula
K = |y''| / (1 + y'^2)^(3/2)
solving the curvature of the fitted equation over the portion covered by the edge points, then locating the curvature extreme values, and discarding the extreme points captured at the head and tail of the edge where the data break off.
5. The method for detecting a retinal vein cross compression angle of claim 4, wherein: the edge point preprocessing specifically comprises
screening the obtained edge points, some of which share the same x or y value, before the equation fitting;
fitting with x as the independent variable, so that an edge point with a repeated x value can keep only one y value at that x;
screening by one of maximum screening, minimum screening and mean screening, the retained value being chosen in the direction away from the other edge so that the edge contour is preserved as far as possible;
and, when the x values of a large proportion of the edge points coincide, so that fitting against the x coordinate would lose many edge points and greatly reduce how faithfully the edge information is reproduced, selecting the y value as the independent variable instead, so that as many edge points as possible participate in the reconstruction and the truest edge is preserved.
6. The method for detecting a retinal vein cross compression angle of claim 4, wherein: the edge fitting specifically comprises
fitting the data, after edge points without repeated x or y values are obtained, with the quintic polynomial
y = p1*x^5 + p2*x^4 + p3*x^3 + p4*x^2 + p5*x + p6
or
x = p1*y^5 + p2*y^4 + p3*y^3 + p4*y^2 + p5*y + p6
to obtain the fitting equation.
7. The method for detecting a retinal vein cross compression angle of claim 1, wherein: step five specifically comprises obtaining the two straight lines from the obtained curvature extreme points and the two corner points respectively, calculating the tangent of the forward included angle according to the formula
tan θ = |(k1 - k2) / (1 + k1*k2)|
where k1 and k2 are the slopes of the two lines, and then obtaining the angle value by the arctangent.
CN201910281136.7A 2019-04-09 2019-04-09 Method for detecting retinal vein cross compression angle Active CN109993765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910281136.7A CN109993765B (en) 2019-04-09 2019-04-09 Method for detecting retinal vein cross compression angle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910281136.7A CN109993765B (en) 2019-04-09 2019-04-09 Method for detecting retinal vein cross compression angle

Publications (2)

Publication Number Publication Date
CN109993765A CN109993765A (en) 2019-07-09
CN109993765B true CN109993765B (en) 2020-10-30

Family

ID=67132632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910281136.7A Active CN109993765B (en) 2019-04-09 2019-04-09 Method for detecting retinal vein cross compression angle

Country Status (1)

Country Link
CN (1) CN109993765B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861999A (en) * 2020-06-24 2020-10-30 北京百度网讯科技有限公司 Detection method and device for artery and vein cross compression sign, electronic equipment and readable storage medium
CN112669321B (en) * 2021-03-22 2021-08-03 常州微亿智造科技有限公司 Sand blasting unevenness detection method based on feature extraction and algorithm classification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2360641A1 (en) * 2010-01-20 2011-08-24 Kowa Company, Ltd. Image processing method and image processing device
CN104867147A (en) * 2015-05-21 2015-08-26 北京工业大学 SYNTAX automatic scoring method based on coronary angiogram image segmentation

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824295B (en) * 2014-03-03 2016-08-17 天津医科大学 The dividing method of adherent hyaline-vascular type lung nodule in a kind of lung CT image
CN106157320B (en) * 2016-07-29 2019-02-01 上海联影医疗科技有限公司 A kind of image blood vessel segmentation method and device
JP6815798B2 (en) * 2016-09-09 2021-01-20 株式会社トプコン Ophthalmic imaging equipment and ophthalmic image processing equipment
CN107464239A (en) * 2017-08-09 2017-12-12 南通大学 A kind of blood vessel angle method for automatic measurement based on liver cancer immunity group image
CN108961334B (en) * 2018-06-26 2020-05-08 电子科技大学 Retinal vessel wall thickness measuring method based on image registration
CN109166124B (en) * 2018-11-20 2021-12-14 中南大学 Retinal blood vessel morphology quantification method based on connected region

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2360641A1 (en) * 2010-01-20 2011-08-24 Kowa Company, Ltd. Image processing method and image processing device
CN104867147A (en) * 2015-05-21 2015-08-26 北京工业大学 SYNTAX automatic scoring method based on coronary angiogram image segmentation

Also Published As

Publication number Publication date
CN109993765A (en) 2019-07-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant