CN116740203B - Safety storage method for fundus camera data


Info

Publication number
CN116740203B
CN116740203B
Authority
CN
China
Prior art keywords
region
blood vessel
compression
point
edge
Prior art date
Legal status
Active
Application number
CN202311021360.5A
Other languages
Chinese (zh)
Other versions
CN116740203A (en)
Inventor
赵曼
王恩军
Current Assignee
Shandong Polytechnic College
Original Assignee
Shandong Polytechnic College
Priority date
Filing date
Publication date
Application filed by Shandong Polytechnic College
Priority to CN202311021360.5A
Publication of CN116740203A
Application granted
Publication of CN116740203B
Legal status: Active

Classifications

    • G06T 9/00 Image coding
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/90 Determination of colour characteristics
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H04L 69/04 Protocols for data compression, e.g. ROHC
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention relates to the technical field of electronic data compression and storage, and in particular to a safe storage method for fundus camera data. The method collects a fundus image, identifies an abnormal region, a background region and initial vessel points, and obtains each blood vessel region by screening and classifying the initial vessel points. The intersection of a blood vessel region and an abnormal region is taken as a first compression region, the non-intersecting part of an abnormal region, or an abnormal region with no intersection at all, is taken as a second compression region, and the blood vessel region is taken as a third compression region. The fundus image is compressed in a targeted manner using the compression coefficients of the different compression regions, and the compressed image, together with its classification label, is transmitted to a storage server for storage. The invention ensures both the transmission and storage efficiency of fundus images and the quality of the compressed images, without impairing their value as a medical reference after retrieval.

Description

Safety storage method for fundus camera data
Technical Field
The invention relates to the technical field of electronic data compression and storage, and in particular to a safe storage method for fundus camera data.
Background
Fundus screening is a key step in diagnosis and treatment. It is not limited to the examination of ophthalmic diseases: systemic diseases can also be monitored through the rich arterial and venous vessels of the fundus. Existing fundus cameras can acquire high-quality fundus images, but storing large numbers of such images is time-consuming.
In existing fundus image storage technology, most fundus images are classified and stored by type using a neural network. Because the volume of image data is large and the storage requirements are high, the fundus image is usually compressed, commonly by splitting the image into high-frequency and low-frequency information that is compressed to different degrees, or by compressing according to the image characteristics of the blood vessels. The problem with the prior art is that neither approach fully considers the correlation between the background and the blood vessels, so compression and storage are treated too independently and the quality of the recovered image suffers.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a safe storage method for fundus camera data, which adopts the following technical scheme:
the invention provides a safe storage method for fundus camera data, which comprises the following steps:
collecting a fundus image, and identifying an abnormal region in the fundus image; screening out an initial vascular point and a background point according to the color information of each pixel point in the fundus image; the background points form a background area;
performing edge detection on the fundus image, and matching the obtained edge points with the initial blood vessel points to obtain blood vessel edge points; classifying the blood vessel edge points according to gradient direction differences of adjacent blood vessel edge points to obtain a plurality of groups of similar blood vessel edge points, wherein each group of similar blood vessel edge points form a blood vessel region;
taking an intersecting region of the blood vessel region and the abnormal region as a first compression region, taking a non-intersecting region or an abnormal region without intersection in the abnormal region as a second compression region, and taking the blood vessel region as a third compression region; obtaining the contrast of each compression region with the background region; obtaining a compression coefficient of each compression region according to a compression coefficient formula;
wherein, in the compression coefficient formula, the compression coefficient of each compression region is obtained from the contrast of that compression region, a sign function indicating whether that compression region is the first compression region, and the area of that compression region;
encoding and compressing the fundus image according to the compression coefficient of each compression region to obtain a compressed image; and taking the diagnosis information and the patient identity information corresponding to the compressed image as a classification label, and transmitting the compressed image and the corresponding classification label to a storage server for storage.
Further, the identifying the abnormal region in the fundus image includes:
and processing the fundus image by using the trained target detection network to obtain an abnormal region bounding box, and taking the region in the abnormal region bounding box as an abnormal region.
Further, the screening out of an initial vascular point and a background point according to the color information of each pixel point in the fundus image includes:
and converting the fundus image into an HSV color space, obtaining a tone channel value of each pixel point under the H channel, and taking the pixel point with the tone channel value smaller than a preset tone threshold value as an initial blood vessel point.
Further, classifying the blood vessel edge points according to gradient direction differences of adjacent blood vessel edge points to obtain multiple groups of similar blood vessel edge points, wherein each group of similar blood vessel edge points form a blood vessel region, and the method comprises the following steps:
obtaining a similar judgment index of each blood vessel edge point according to an edge gradient direction model;
wherein, in the edge gradient direction model, the similar judgment index of each blood vessel edge point is obtained from the gradient direction angle of that blood vessel edge point and the gradient direction angles of its two adjacent blood vessel edge points;
traversing each blood vessel edge point in sequence, and taking the blood vessel edge points with continuous and judgment indexes smaller than a preset judgment threshold value as a group of similar blood vessel edge points; and sequentially connecting the edge points of each group of similar blood vessels to obtain two blood vessel edges, wherein the two blood vessel edges form a blood vessel region.
Further, the method for acquiring the intersection region comprises the following steps:
sequentially traversing the blood vessel edge points on the blood vessel region and searching for intersection points between the blood vessel region and the abnormal region; if an intersection point is found, taking it as a first vessel edge intersection point, obtaining a second vessel edge intersection point in the extending direction of the first vessel edge intersection point, and dividing the abnormal region into an intersection region and a non-intersection region according to the first vessel edge intersection point and the second vessel edge intersection point; the extension direction is perpendicular to the gradient direction of the first vessel edge intersection point.
Further, the encoding and compressing of the fundus image according to the compression coefficient of each compression region to obtain a compressed image includes:
based on the traditional Huffman coding, the compression coefficients of different compression areas are used as compression weights to obtain a compressed image.
The invention has the following beneficial effects:
1. According to the embodiment of the invention, a high-quality compressed image can be transmitted to the storage server by exploiting the interrelation between high-frequency and low-frequency information points and by compressing distortion-prone important regions to different degrees, so that image detail in the fundus image is preserved as far as possible and, after the image is retrieved and decompressed from the storage server, its quality is higher than in the prior art.
2. According to the embodiment of the invention, the pixel points of fundus blood vessels can be accurately extracted by exploiting the ductility and local direction consistency of fundus blood vessels, while abnormal regions such as micro-ruptures of blood vessels are fully preserved; the intersection characteristics between abnormal regions and blood vessels are obtained, providing an accurate compression coefficient reference for image compression.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for securely storing fundus camera data according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description refers to specific embodiments, structures, features and effects of a safe storage method for fundus camera data according to the present invention, which are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of a secure storage method for fundus camera data provided by the present invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for securely storing fundus camera data according to an embodiment of the present invention is shown, where the method includes:
step S1: collecting a fundus image, and identifying an abnormal region in the fundus image; screening out an initial vascular point and a background point according to the color information of each pixel point in the fundus image; the background spots constitute a background area.
In the embodiment of the invention, the image acquisition end is a fundus camera, and the receiving end of the compressed image is a hospital storage server. The fundus image is acquired by the fundus camera, and a classification label is assigned from the doctor's diagnosis of the fundus image and the patient's identity information, which facilitates the subsequent classified storage.
Then, the trained target detection network is used to detect abnormal regions in the labeled fundus image and obtain the abnormal regions in the fundus image. In the embodiment of the invention, the target detection network is the commonly used Faster R-CNN, and the network outputs bounding boxes of abnormal regions, i.e., the region inside an abnormal-region bounding box is taken as an abnormal region. The bounding box data of the abnormal regions are stored for the subsequent calculation of the compression coefficients of the different regions.
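As an illustration of this step only, the following Python sketch shows how abnormal-region bounding boxes could be obtained with a generic torchvision Faster R-CNN; the checkpoint path, the number of classes and the score threshold are assumptions for illustration, since the embodiment does not disclose its training configuration.

```python
# Minimal sketch: abnormal-region bounding boxes from a Faster R-CNN detector.
# The checkpoint path, class count and score threshold are assumptions; the
# patent does not disclose how its detection network was trained.
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

def detect_abnormal_regions(fundus_bgr, weights_path="fundus_frcnn.pth",
                            score_thresh=0.5):
    # Background + "abnormal region" classes (assumed).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    rgb = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        out = model([to_tensor(rgb)])[0]
    keep = out["scores"] >= score_thresh
    # Each box is (x1, y1, x2, y2); the area inside a box is an abnormal region.
    return out["boxes"][keep].int().tolist()
```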
By detecting the abnormal region, the influence of the non-target region is reduced during the subsequent analysis of the image background and the vascular characteristics, and the speed of acquiring the compression coefficients of different regions is improved.
In order to accurately acquire vascular edge pixel points, the fundus image is converted into the HSV color space to obtain a fundus HSV image. H denotes the tone (hue) value: the tone channel value of each pixel point under the H channel is obtained, a tone threshold is preset, and the pixel points whose tone channel value is smaller than the preset tone threshold are taken as initial vascular points. The remaining pixel points are background points, and the background points form the background area.
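A minimal sketch of this HSV screening step follows; the hue threshold value used here is an assumption, because the numeric value of the preset tone threshold is not reproduced in this text.

```python
import cv2
import numpy as np

def initial_vessel_mask(fundus_bgr, hue_thresh=20):
    """Split pixels into initial vessel points and background points by hue.

    hue_thresh is an assumed value (OpenCV hue ranges over [0, 179]); the
    patent only states that a tone threshold is preset.
    """
    hsv = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    vessel_mask = hue < hue_thresh      # initial blood vessel points
    background_mask = ~vessel_mask      # remaining pixels form the background area
    return vessel_mask, background_mask
```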
Step S2: performing edge detection on the fundus image, and matching the obtained edge points with the initial blood vessel points to obtain blood vessel edge points; classifying the blood vessel edge points according to gradient direction differences of adjacent blood vessel edge points to obtain a plurality of groups of similar blood vessel edge points, wherein each group of similar blood vessel edge points form a blood vessel region.
The fundus image is converted to grayscale and edge detection is performed to obtain all edge pixel points in the image. Because the initial blood vessel points comprise both blood vessel pixel points and abnormal pixel points of retinal background blood seepage, the obtained edge points are matched with the initial blood vessel points to obtain the blood vessel edge points.
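The matching of edge points against initial vessel points can be sketched as a mask intersection; the Canny thresholds below are illustrative assumptions, since the embodiment only states that edge detection is applied to the grayscale image.

```python
import cv2
import numpy as np

def vessel_edge_points(fundus_bgr, vessel_mask):
    """Canny edges restricted to initial vessel points give the vessel edge points."""
    gray = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150) > 0          # thresholds are assumptions
    edge_mask = edges & vessel_mask               # edge points that are also initial vessel points
    ys, xs = np.nonzero(edge_mask)
    return list(zip(ys.tolist(), xs.tolist())), edge_mask
```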
Classifying the blood vessel edge points according to gradient direction differences of adjacent blood vessel edge points to obtain a plurality of groups of similar blood vessel edge points, wherein each group of similar blood vessel edge points form a blood vessel region, and the method specifically comprises the following steps:
since the fundus has rich arteriovenous vessels, most of edge pixel points obtained by edge detection belong to vessel edge pixel points and abnormal region edge point pixel points. The blood vessel has good morphological characteristics, the blood vessel extends from the central point of the retina to the periphery of the retina, and the blood vessel has certain ductility and the extending direction is not fixed, so that an edge gradient direction model is constructed, similar blood vessel edge points of the same blood vessel of the retina are obtained, similar judging indexes of the edge points of each blood vessel are obtained according to the edge gradient direction model, and the edge gradient direction model comprises:
wherein,is the firstThe same kind of judging index of each pixel point,is the firstThe gradient direction angle of the edge points of the individual blood vessels,is the first toThe adjacent first pixel pointsThe gradient direction angle of the edge points of the individual blood vessels,is the first toThe adjacent first pixel pointsGradient direction angle of each vessel edge point; since retinal blood vessels are interlaced with each other, ifIf the edge pixel points are blood vessel branch nodes, there may be more than one adjacent blood vessel edge pixel points, and the selected firstThe edge pixel points are necessarily the edge pixel points of one of the blood vessel branches, and the firstAnd must also be the next edge pixel point on the branch. By utilizing the extension characteristic and the local direction consistency of the blood vessel, the edge gradient direction model is obtained, the characteristic that the extension direction of the blood vessel is not fixed can be overcome, and the edge of the blood vessel with a single branch can be accurately obtained.
Using the characteristic of local direction consistency of blood vessels, the judgment threshold is set to 5 degrees. Each blood vessel edge point is traversed in sequence, and consecutive blood vessel edge points whose judgment index is smaller than the preset judgment threshold are taken as a group of similar blood vessel edge points. The edge points of each group of similar blood vessels are connected in sequence to obtain two blood vessel edges, and the two blood vessel edges form a blood vessel region. Since high resolution is required for fundus blood vessel images, this verifies the necessity of acquiring the vessel edge pixel points pixel by pixel.
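A sketch of this grouping step follows. Because the exact judgment index of the edge gradient direction model is not reproduced in this text, the index below (mean absolute gradient-direction difference with the two neighbouring edge points) is an assumed stand-in; the 5-degree threshold follows the embodiment.

```python
import cv2
import numpy as np

def group_vessel_edges(gray, ordered_edge_points, judge_thresh_deg=5.0):
    """Group consecutive vessel edge points whose gradient directions agree.

    The similarity index used here (mean absolute direction difference with
    the previous and next edge point) is an assumption standing in for the
    patent's edge gradient direction model.
    """
    if len(ordered_edge_points) < 3:
        return [ordered_edge_points]
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    angle = np.degrees(np.arctan2(gy, gx))          # gradient direction angle per pixel

    def diff(a, b):                                  # smallest angular difference
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    groups, current = [], [ordered_edge_points[0]]
    for k in range(1, len(ordered_edge_points) - 1):
        y, x = ordered_edge_points[k]
        yp, xp = ordered_edge_points[k - 1]
        yn, xn = ordered_edge_points[k + 1]
        index = 0.5 * (diff(angle[y, x], angle[yp, xp]) +
                       diff(angle[y, x], angle[yn, xn]))
        if index < judge_thresh_deg:     # similar direction: same vessel edge
            current.append((y, x))
        else:                            # direction break: start a new group
            groups.append(current)
            current = [(y, x)]
    groups.append(current)
    return groups                        # pairs of groups bound a vessel region
```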
Step S3: taking an intersecting region of the blood vessel region and the abnormal region as a first compression region, taking a non-intersecting region or an abnormal region without intersecting in the abnormal region as a second compression region, and taking the blood vessel region as a third compression region; the compression coefficient of each compression region is obtained.
When a local retinal blood vessel ruptures or a hemangioma is present, the above-described feature of local directional uniformity of blood vessels is broken. A local retinal vessel rupture or hemangioma corresponds to the abnormal region obtained in step S1, so the part of the abnormal region that intersects vascular edge pixel points is used as the first compression region. The specific acquisition method of the intersection region includes:
The blood vessel edge points on the blood vessel region are traversed in sequence to search for intersection points between the blood vessel region and the abnormal region. In order to obtain the complete set of vessel edge pixel points intersecting the abnormal region, if an intersection point is found it is taken as a first vessel edge intersection point, and a second vessel edge intersection point is obtained in the extending direction of the first vessel edge intersection point; the second vessel edge intersection point and the first vessel edge intersection point are regarded as edge points of the same vessel branch. The abnormal region is then divided into an intersection region and a non-intersection region according to the first and second vessel edge intersection points. The extension direction is perpendicular to the gradient direction at the first vessel edge intersection point.
The intersection region of the blood vessel region and the abnormal region is taken as the first compression region, the non-intersection region, or an abnormal region without any intersection, is taken as the second compression region, and the blood vessel region is taken as the third compression region. The whole fundus image is thus divided into several compression regions; since the background region is compressed in a default manner in the subsequent compression process, it is not treated as a compression region when the compression coefficients are acquired.
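The division into the three compression regions can be sketched with binary masks, as below; representing the abnormal regions by their detector bounding boxes and intersecting masks is a simplification of the edge-point traversal described above.

```python
import numpy as np

def partition_compression_regions(vessel_mask, abnormal_boxes, shape):
    """Build masks for the three compression regions of one fundus image.

    abnormal_boxes are (x1, y1, x2, y2) boxes from the detector; the mask
    intersection is a simplified stand-in for the patent's traversal of
    vessel edge points along the vessel region.
    """
    h, w = shape
    abnormal_mask = np.zeros((h, w), dtype=bool)
    for x1, y1, x2, y2 in abnormal_boxes:
        abnormal_mask[y1:y2, x1:x2] = True

    first = abnormal_mask & vessel_mask      # abnormal region intersecting vessels
    second = abnormal_mask & ~vessel_mask    # abnormal region not intersecting vessels
    third = vessel_mask & ~abnormal_mask     # remaining vessel region
    background = ~(first | second | third)   # everything else, compressed by default
    return first, second, third, background
```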
In the prior art, image compression methods generally compress high-frequency and low-frequency information to different degrees, where high-frequency information points are usually edge texture pixel points and low-frequency information points are usually background pixel points. In order to take the correlation between high-frequency and low-frequency information points into account and retain enough detail features, the compression coefficients of the different regions are obtained from the contrast between the high-frequency information points and abnormal-region pixel points on the one hand and the background on the other. The contrast of a compression region is the difference between the mean gray value of the pixel points in that region and the mean gray value of the normal background region. The compression coefficient of each compression region is then obtained according to the compression coefficient formula, in which the compression coefficient of a compression region is obtained from the contrast of that compression region, a sign function indicating whether that compression region is the first compression region, and the area of that compression region.
In this way, the compression coefficients of the different regions are obtained. Taking the three compression regions above as an example for convenience of explanation, a compression coefficient is obtained for the abnormal region that intersects blood vessels (the first compression region), for the abnormal region that does not intersect blood vessels (the second compression region), and for the vessel edge pixel points (the third compression region). When compressing the fundus image, the retinal background area is compressed normally, the first compression region undergoes first-level compression with its compression coefficient, the second compression region undergoes second-level compression with its compression coefficient, and the third compression region undergoes third-level compression with its compression coefficient. This method better preserves the detail quality of the compressed image: details such as blood vessels are not distorted during decompression, the medical value of the image is preserved, and tiny vessel ruptures are well retained.
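The sketch below computes the contrast of each compression region as its gray-mean difference against the background and turns it into a compression coefficient. The way contrast, the first-region indicator and the region area are combined here is an assumption about the intended monotone behaviour (higher contrast and the first region get lighter compression), not the patent's exact formula, which is not reproduced in this text.

```python
import numpy as np

def compression_coefficients(gray, region_masks, background_mask):
    """Contrast-driven compression coefficients (illustrative assumption only)."""
    bg_mean = gray[background_mask].mean()
    coeffs = []
    for i, mask in enumerate(region_masks):          # order: first, second, third region
        if not mask.any():
            coeffs.append(0.0)
            continue
        contrast = abs(gray[mask].mean() - bg_mean)  # gray-mean difference vs. background
        area = mask.sum()
        bonus = 1.0 if i == 0 else 0.0               # stand-in for the first-region sign term
        coeffs.append((contrast / 255.0) * (1.0 + bonus / np.sqrt(area)))
    return coeffs
```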
Step S4: encoding and compressing the fundus image according to the compression coefficient of each compression region to obtain a compressed image; taking the diagnosis information and the patient identity information corresponding to the compressed image as the classification label, and transmitting the compressed image and the corresponding classification label to a storage server for storage.
The compression coefficients of the different regions obtained in step S3 are used to compress and store the fundus image. Specifically, on the basis of conventional Huffman coding, the compression coefficients of the different compression regions are added so that the background region, the abnormal regions and the vessel-related regions are no longer compressed and Huffman-coded indiscriminately; Huffman coding itself is prior art and is not described in detail here. The compression coefficient of each compression region is used as a compression weight together with the gray values of the pixel points in the image to achieve the compression effect.
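A minimal sketch of region-weighted Huffman coding is shown below: the compression coefficient of a region controls the quantization step applied before standard Huffman coding. The coefficient-to-step mapping is an assumption; the embodiment only states that the coefficients act as compression weights on conventional Huffman coding.

```python
import heapq
from collections import Counter
import numpy as np

def huffman_codebook(symbols):
    """Standard Huffman code construction over a symbol sequence."""
    heap = [[freq, idx, {sym: ""}]
            for idx, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                               # degenerate single-symbol case
        return {next(iter(heap[0][2])): "0"}
    idx = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], idx, merged])
        idx += 1
    return heap[0][2]

def compress_region(gray, mask, coefficient, base_step=16):
    """Quantize one region's gray values, then Huffman-code them.

    Using a finer quantization step for a larger coefficient is an assumed way
    of turning the compression coefficient into a compression weight.
    """
    step = max(1, int(round(base_step * (1.0 - min(coefficient, 0.99)))))
    values = (gray[mask] // step).astype(np.int32).tolist()
    book = huffman_codebook(values)
    bitstream = "".join(book[v] for v in values)
    return book, step, bitstream
```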
All compressed images are stored, classified by their classification labels, in the fundus image database of the storage server, with the classification label used as an index to the corresponding compressed image. This storage method improves the storage efficiency and retrieval speed of fundus images, guarantees the high quality of the decompressed images, and does not affect their value as a medical reference after retrieval.
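Transmission to the storage server can be sketched as below; the server URL, the HTTP transport and the payload field names are hypothetical, since the embodiment only requires that the compressed image and its classification label be transmitted to a storage server together.

```python
import json
import requests  # assumed transport; the patent does not mandate HTTP

def store_compressed_image(bitstream, codebook, labels,
                           server_url="https://storage.example.org/fundus"):
    """Send one compressed fundus image plus its classification label to the server.

    server_url and the payload field names are hypothetical; labels is expected
    to hold the diagnosis information and patient identity used as the index.
    """
    payload = {
        "labels": labels,                                  # e.g. {"diagnosis": ..., "patient_id": ...}
        "codebook": {str(k): v for k, v in codebook.items()},
        "data": bitstream,                                 # Huffman bitstream as a string for simplicity
    }
    resp = requests.post(server_url, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"},
                         timeout=30)
    resp.raise_for_status()
    return resp.status_code
```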
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (5)

1. A secure storage method for fundus camera data, the method comprising:
collecting a fundus image, and identifying an abnormal region in the fundus image; screening out an initial vascular point and a background point according to the color information of each pixel point in the fundus image; the background points form a background area;
performing edge detection on the fundus image, and matching the obtained edge points with the initial blood vessel points to obtain blood vessel edge points; classifying the blood vessel edge points according to gradient direction differences of adjacent blood vessel edge points to obtain a plurality of groups of similar blood vessel edge points, wherein each group of similar blood vessel edge points form a blood vessel region;
taking an intersecting region of the blood vessel region and the abnormal region as a first compression region, taking a non-intersecting region or an abnormal region without intersection in the abnormal region as a second compression region, and taking the blood vessel region as a third compression region; obtaining the contrast of each compression region with the background region; obtaining a compression coefficient of each compression region according to a compression coefficient formula;
wherein, in the compression coefficient formula, the compression coefficient of each compression region is obtained from the contrast of that compression region, a sign function indicating whether that compression region is the first compression region, and the area of that compression region;
encoding and compressing the fundus image according to the compression coefficient of each compression region to obtain a compressed image; the diagnosis information and the patient identity information corresponding to the compressed image are used as classification labels, and the compressed image and the corresponding classification labels are transmitted to a storage server together for storage;
the method for acquiring the intersection area comprises the following steps:
sequentially traversing the blood vessel edge points on the blood vessel region and searching for intersection points between the blood vessel region and the abnormal region; if an intersection point is found, taking it as a first vessel edge intersection point, obtaining a second vessel edge intersection point in the extending direction of the first vessel edge intersection point, and dividing the abnormal region into an intersection region and a non-intersection region according to the first vessel edge intersection point and the second vessel edge intersection point; the extension direction is perpendicular to the gradient direction of the first vessel edge intersection point.
2. A secure storage method for fundus camera data according to claim 1, wherein said identifying an abnormal region in a fundus image comprises:
and processing the fundus image by using the trained target detection network to obtain an abnormal region bounding box, and taking the region in the abnormal region bounding box as an abnormal region.
3. The method for safely storing fundus camera data according to claim 1, wherein the screening out of the initial blood vessel point and the background point according to the color information of each pixel point in the fundus image comprises:
and converting the fundus image into an HSV color space, obtaining a tone channel value of each pixel point under the H channel, and taking the pixel point with the tone channel value smaller than a preset tone threshold value as an initial blood vessel point.
4. The method according to claim 1, wherein classifying the blood vessel edge points according to gradient direction differences of adjacent blood vessel edge points to obtain a plurality of groups of similar blood vessel edge points, each group of similar blood vessel edge points forming a blood vessel region comprises:
obtaining a similar judgment index of each blood vessel edge point according to an edge gradient direction model;
wherein, in the edge gradient direction model, the similar judgment index of each blood vessel edge point is obtained from the gradient direction angle of that blood vessel edge point and the gradient direction angles of its two adjacent blood vessel edge points;
traversing each blood vessel edge point in sequence, and taking the blood vessel edge points with continuous and judgment indexes smaller than a preset judgment threshold value as a group of similar blood vessel edge points; and sequentially connecting the edge points of each group of similar blood vessels to obtain two blood vessel edges, wherein the two blood vessel edges form a blood vessel region.
5. The method according to claim 1, wherein the encoding and compressing the fundus image based on the compression coefficient of each compression region to obtain a compressed image comprises:
based on the traditional Huffman coding, the compression coefficients of different compression areas are used as compression weights to obtain a compressed image.
CN202311021360.5A 2023-08-15 2023-08-15 Safety storage method for fundus camera data Active CN116740203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311021360.5A CN116740203B (en) 2023-08-15 2023-08-15 Safety storage method for fundus camera data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311021360.5A CN116740203B (en) 2023-08-15 2023-08-15 Safety storage method for fundus camera data

Publications (2)

Publication Number Publication Date
CN116740203A (en) 2023-09-12
CN116740203B (en) 2023-11-28

Family

ID=87911834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311021360.5A Active CN116740203B (en) 2023-08-15 2023-08-15 Safety storage method for fundus camera data

Country Status (1)

Country Link
CN (1) CN116740203B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284101A (en) * 2017-07-28 2021-08-20 新加坡国立大学 Method for modifying retinal fundus images for a deep learning model
CN110930446B (en) * 2018-08-31 2024-03-19 福州依影健康科技有限公司 Pretreatment method and storage device for quantitative analysis of fundus images

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874597A (en) * 2018-08-31 2020-03-10 福州依影健康科技有限公司 Blood vessel feature extraction method, device and system for fundus image and storage medium
WO2020147263A1 (en) * 2019-01-18 2020-07-23 平安科技(深圳)有限公司 Eye fundus image quality evaluation method, device and storage medium
CN110348541A (en) * 2019-05-10 2019-10-18 腾讯医疗健康(深圳)有限公司 Optical fundus blood vessel image classification method, device, equipment and storage medium
CN111127425A (en) * 2019-12-23 2020-05-08 北京至真互联网技术有限公司 Target detection positioning method and device based on retina fundus image
WO2021253939A1 (en) * 2020-06-18 2021-12-23 南通大学 Rough set-based neural network method for segmenting fundus retinal vascular image
CN116012594A (en) * 2023-01-06 2023-04-25 复旦大学 Fundus image feature extraction method, fundus image feature extraction device and diagnosis system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Recent trends and advances in fundus image analysis: A review; Shahzaib Iqbal et al.; Computers in Biology and Medicine; Vol. 151; full text *
Fast optic disc localization in fundus images based on confidence calculation; 吴慧; 陈再良; 欧阳平波; 陈昌龙; 邹北骥; Journal of Computer-Aided Design & Computer Graphics (No. 06); full text *
Research on key technologies of fundus digital image processing; 袁野 et al.; Beijing Biomedical Engineering (No. 01); full text *

Also Published As

Publication number Publication date
CN116740203A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN113011485B (en) Multi-mode multi-disease long-tail distribution ophthalmic disease classification model training method and device
CN109886933B (en) Medical image recognition method and device and storage medium
US11210789B2 (en) Diabetic retinopathy recognition system based on fundus image
CN111340789A (en) Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN104573712B (en) Arteriovenous retinal vessel sorting technique based on eye fundus image
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
WO2021184805A1 (en) Blood pressure prediction method and device using multiple data sources
JP2002165757A (en) Diagnostic supporting system
CN114283158A (en) Retinal blood vessel image segmentation method and device and computer equipment
CN111161287A (en) Retinal vessel segmentation method based on symmetric bidirectional cascade network deep learning
CN111340773B (en) Retinal image blood vessel segmentation method
Subramanian et al. Diabetic Retinopathy–Feature Extraction and Classification using Adaptive Super Pixel Algorithm
CN110610480B (en) MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN111028232A (en) Diabetes classification method and equipment based on fundus images
CN111862090A (en) Method and system for esophageal cancer preoperative management based on artificial intelligence
CN113763336A (en) Image multi-task identification method and electronic equipment
CN114066846A (en) CTP non-acute occlusion ischemia assessment method and system based on deep learning
Mookiah et al. On the quantitative effects of compression of retinal fundus images on morphometric vascular measurements in VAMPIRE
CN111047590A (en) Hypertension classification method and device based on fundus images
CN116740203B (en) Safety storage method for fundus camera data
CN116071337A (en) Endoscopic image quality evaluation method based on super-pixel segmentation
CN115018855A (en) Optic disk and optic cup segmentation method based on blood vessel characteristic guidance
CN112614091A (en) Ultrasonic multi-section data detection method for congenital heart disease
CN113130050A (en) Medical information display method and system
CN117974692B (en) Ophthalmic medical image processing method based on region growing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant