CN116681923A - Automatic ophthalmic disease classification method and system based on artificial intelligence - Google Patents

Automatic ophthalmic disease classification method and system based on artificial intelligence

Info

Publication number
CN116681923A
Authority
CN
China
Prior art keywords
image
ophthalmic
classified
focus
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310534978.5A
Other languages
Chinese (zh)
Inventor
肖璇
李莹
高翔
纪振宇
杨宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Renmin Hospital of Wuhan University
Original Assignee
Renmin Hospital of Wuhan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renmin Hospital of Wuhan University filed Critical Renmin Hospital of Wuhan University
Priority to CN202310534978.5A priority Critical patent/CN116681923A/en
Publication of CN116681923A publication Critical patent/CN116681923A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of disease classification, and discloses an automatic ophthalmic disease classification method and system based on artificial intelligence. The method comprises the following steps: performing image classification on a target ophthalmic image to obtain classified ophthalmic images; extracting image features of the classified ophthalmic images to obtain classified image features, determining focus images in the classified ophthalmic images, acquiring disorder description information corresponding to the classified ophthalmic images, and analyzing focus categories of the classified ophthalmic images to obtain a first focus category; scheduling a diagnosis report corresponding to the classified ophthalmic images, extracting a diagnosis label corresponding to the diagnosis report, and performing focus category analysis on the classified ophthalmic images to obtain a second focus category; and calculating a deviation coefficient of the first focus category and the second focus category, analyzing the actual focus category corresponding to the classified ophthalmic images, and performing ophthalmic disease classification on the classified ophthalmic images to obtain a first classification result. The invention aims to improve the accuracy of automatic classification of ophthalmic diseases based on artificial intelligence.

Description

Automatic ophthalmic disease classification method and system based on artificial intelligence
Technical Field
The invention relates to the technical field of disease classification, in particular to an automatic ophthalmic disease classification method and system based on artificial intelligence.
Background
Ophthalmic diseases are diseases of the eye, and they come in many types. During treatment, the specific type of ophthalmic disease must therefore be determined accurately so that appropriate treatment measures can subsequently be formulated.
However, existing automatic classification methods for ophthalmic diseases mainly examine the eyes with medical equipment and judge the disease type from the examination results. Because different conditions can produce similar findings, the acquired eye images are often highly similar to one another, which leads to errors in judging the disease type and affects subsequent treatment. A method that can improve the accuracy of automatic ophthalmic disease classification based on artificial intelligence is therefore needed.
Disclosure of Invention
The invention provides an automatic ophthalmic disease classification method and system based on artificial intelligence, and mainly aims to improve the accuracy of automatic ophthalmic disease classification based on artificial intelligence.
In order to achieve the above object, the present invention provides an automatic classification method for ophthalmic diseases based on artificial intelligence, comprising:
Acquiring an ophthalmic acquisition image to be classified, performing image preprocessing on the ophthalmic acquisition image to obtain a target ophthalmic image, and performing image classification on the target ophthalmic image to obtain a classified ophthalmic image;
extracting image features of the classified ophthalmic image to obtain classified image features, determining focus images in the classified ophthalmic image according to the classified image features, acquiring disorder description information corresponding to the classified ophthalmic image, and analyzing focus categories of the classified ophthalmic image according to the disorder description information and the focus images to obtain first focus categories;
scheduling a diagnosis report corresponding to the classified ophthalmic image, extracting a diagnosis label corresponding to the diagnosis report, and analyzing the focus category of the classified ophthalmic image according to the diagnosis label to obtain a second focus category;
calculating a deviation coefficient of the first lesion category and the second lesion category by the following formula;
P = ∑_m [ T_m · log f(T_m) + (1 − T_{m+1}) · log f(T_{m+1}) ]
wherein P represents the deviation coefficient of the first focus category and the second focus category; m and m+1 denote the first focus category and the second focus category, respectively; T_m represents the true value corresponding to the first focus category; T_{m+1} represents the true value corresponding to the second focus category; log f(T_m) represents the logarithm corresponding to the true value of the first focus category; and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category;
if the deviation coefficient is larger than a preset deviation value, analyzing an actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category, and classifying the classified ophthalmic image for ophthalmic diseases according to the actual focus category to obtain a first classification result;
and if the deviation coefficient is not greater than the preset deviation value, carrying out integration treatment on the first focus category and the second focus category to obtain a target focus category, and carrying out ophthalmic disease classification on the classified ophthalmic image according to the target focus category to obtain a second classification result.
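The deviation test above can be sketched in Python. The mapping f(·) is not specified in the text, so a sigmoid is assumed here, and the preset deviation value is illustrative:

```python
import math

def deviation_coefficient(t_first, t_second, f=None):
    """Deviation coefficient between two lesion-category truth vectors.

    Implements the patent's formula
        P = sum_m [ T_m * log f(T_m) + (1 - T_{m+1}) * log f(T_{m+1}) ]
    The function f() is not defined in the text; a sigmoid is assumed.
    """
    if f is None:
        f = lambda x: 1.0 / (1.0 + math.exp(-x))  # assumed mapping
    return sum(tm * math.log(f(tm)) + (1.0 - tn) * math.log(f(tn))
               for tm, tn in zip(t_first, t_second))

def classify(t_first, t_second, preset_deviation=0.5):
    """Branch on the deviation coefficient as the claims describe."""
    p = deviation_coefficient(t_first, t_second)
    # Magnitude is compared against the preset value (an assumption;
    # the text compares the coefficient to the preset value directly).
    if abs(p) > preset_deviation:
        return "analyze actual lesion category"      # first classification path
    return "integrate first and second categories"   # second classification path
```

Matching category vectors give a small coefficient and take the integration path; diverging ones exceed the preset value and trigger the actual-category analysis.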
Optionally, the performing image preprocessing on the ophthalmologic acquired image to obtain a target ophthalmologic image includes:
performing de-duplication treatment on the ophthalmologic acquired image to obtain a de-duplicated ophthalmologic image;
performing noise reduction treatment on the de-duplicated ophthalmic image to obtain a noise-reduced ophthalmic image;
performing image clipping on the noise-reduced ophthalmic image to obtain a clipped ophthalmic image;
And performing image augmentation processing on the cut ophthalmic image to obtain a target ophthalmic image.
Optionally, the image classifying the target ophthalmic image to obtain a classified ophthalmic image includes:
identifying an image title of each image in the target ophthalmic image, and extracting title text in the image title;
extracting keywords in the title text according to the title text to obtain title keywords;
calculating the similarity of each keyword in the title keywords to obtain title similarity;
and carrying out image classification on the target ophthalmic image according to the title similarity to obtain a classified ophthalmic image.
Optionally, the extracting the image features of the classified ophthalmic image to obtain classified image features includes:
identifying the image color in each image in the classified ophthalmic images, drawing a color histogram corresponding to each image according to the image color, and constructing a color matrix corresponding to the color histogram;
extracting color matrix characteristics of the color matrix by using a preset HSV color model, identifying image pixel points in each image, and detecting pixel point gray values corresponding to the image pixel points;
Measuring the frequency of each gray value in the gray values of the pixel points to obtain gray frequencies, constructing a gray matrix corresponding to each image according to the gray frequencies, and extracting texture features of each image according to the gray matrices to obtain image texture features;
and carrying out feature fusion on the color matrix features and the image texture features to obtain fusion features, and taking the fusion features as classified image features corresponding to the classified ophthalmic images.
Optionally, the extracting the color matrix feature of the color matrix by using a preset HSV color model includes:
extracting the color matrix features of the color matrix in the HSV color model using the following formula:
[formula not reproduced in the extracted text]
wherein D_color represents the color matrix feature of the color matrix; A_i represents the matrix mean corresponding to the i-th color matrix; C represents the number of color matrices; i is the index within the color matrices; E_{i,i+1} represents the matrix value of the i-th color matrix; B_i represents the matrix variance corresponding to the i-th color matrix; and F_i represents the matrix skewness corresponding to the i-th color matrix.
Optionally, the feature fusion is performed on the color matrix feature and the image texture feature to obtain a fusion feature, which includes:
Filling missing values of the color matrix features and the image texture features respectively to obtain first filling features and second filling features;
respectively carrying out standardization processing on the first filling feature and the second filling feature to obtain a first standard feature and a second standard feature, and respectively carrying out feature selection on the first standard feature and the second standard feature to obtain a first selection feature and a second selection feature;
vectorizing the first selected feature and the second selected feature to obtain a first feature vector and a second feature vector, and vector fusion is carried out on the first feature vector and the second feature vector to obtain a fusion feature vector;
and obtaining the fusion characteristic corresponding to the color matrix characteristic and the image texture characteristic according to the fusion characteristic vector.
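The fusion steps above (missing-value filling, standardization, feature selection, vectorization, and vector fusion) can be sketched as follows; the concrete selection rule and fusion by concatenation are assumptions, since the text does not specify them:

```python
import numpy as np

def fuse_features(color_feat, texture_feat, k=None):
    """Sketch of the claimed fusion pipeline: fill missing values,
    standardize, select features, vectorize, and fuse the two vectors.
    Top-k-by-magnitude selection and concatenation are assumed choices."""
    def prepare(x):
        x = np.asarray(x, dtype=float)
        x = np.where(np.isnan(x), np.nanmean(x), x)      # missing-value filling
        std = x.std()
        x = (x - x.mean()) / (std if std > 0 else 1.0)   # standardization
        if k is not None:                                # feature selection
            x = x[np.argsort(-np.abs(x))[:k]]
        return x.ravel()                                 # vectorization
    # vector fusion: concatenate the two standardized feature vectors
    return np.concatenate([prepare(color_feat), prepare(texture_feat)])
```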
Optionally, the performing focus category analysis on the classified ophthalmic image according to the condition description information and the focus image to obtain a first focus category includes:
determining focus symptoms corresponding to the classified ophthalmic images according to the disorder description information;
extracting features of the focus image by using a preset convolutional neural network to obtain focus features, and calculating a pixel brightness value corresponding to each pixel point in the focus image;
Determining a lesion level of the lesion symptom according to the pixel brightness value;
and carrying out focus analysis on the classified ophthalmic image by combining the focus level, the focus characteristic and the focus symptom to obtain a first focus category.
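A minimal sketch of determining the lesion level from pixel brightness follows; the three-level scale and the thresholds are illustrative assumptions, as the text only states that the level is derived from pixel brightness values:

```python
import numpy as np

def lesion_level(lesion_img, thresholds=(0.33, 0.66)):
    """Map mean pixel brightness of a lesion image to a discrete level.
    The 1/2/3 scale and thresholds are assumptions for illustration."""
    img = np.asarray(lesion_img, dtype=float)
    # Normalize 0-255 images to 0-1 before averaging.
    brightness = img.mean() / 255.0 if img.max() > 1.0 else img.mean()
    lo, hi = thresholds
    if brightness < lo:
        return 1  # mild
    if brightness < hi:
        return 2  # moderate
    return 3      # severe
```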
Optionally, the performing focus category analysis on the classified ophthalmic image according to the diagnostic tag to obtain a second focus category includes:
performing attribute analysis on the diagnostic tag to obtain a tag attribute, and calculating a tag weight coefficient corresponding to the diagnostic tag according to the tag attribute;
according to the label weight coefficient, carrying out label screening on the diagnosis label to obtain a target label, and carrying out label fusion on the target label to obtain a fusion label;
and carrying out semantic analysis on the fusion tag to obtain tag semantics, and carrying out focus category analysis on the classified ophthalmic image according to the tag semantics to obtain a second focus category.
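The label screening, label fusion, and category analysis steps above can be sketched as follows; the weight formula, the semantic-analysis step, and the category table are not reproduced in the text, so illustrative placeholders are used throughout:

```python
def second_lesion_category(diag_labels, weights, keep=2):
    """Sketch: screen diagnostic labels by weight, fuse the survivors,
    and look up a lesion category. The category table is hypothetical."""
    ranked = sorted(zip(diag_labels, weights), key=lambda t: -t[1])
    targets = [lbl for lbl, _ in ranked[:keep]]             # label screening
    fused = "+".join(sorted(targets))                       # label fusion
    category_table = {"cataract+lens opacity": "cataract"}  # assumed mapping
    return category_table.get(fused, fused)
```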
Optionally, the calculating, according to the tag attribute, a tag weight coefficient corresponding to the diagnostic tag includes:
and calculating the label weight coefficient corresponding to the diagnostic labels through the following formula:
[formula not reproduced in the extracted text]
wherein G_j represents the label weight coefficient corresponding to the j-th diagnostic label; j is the index of the diagnostic label; H_j represents the mapping value corresponding to the j-th diagnostic label; K_j represents the vector mean corresponding to the j-th diagnostic label; and the remaining symbol denotes the number of labels corresponding to the diagnostic labels.
In order to solve the above problems, the present invention also provides an artificial intelligence-based automatic classification system for ophthalmic diseases, the system comprising:
the image classification module is used for acquiring an ophthalmic acquisition image to be classified, carrying out image preprocessing on the ophthalmic acquisition image to obtain a target ophthalmic image, and carrying out image classification on the target ophthalmic image to obtain a classified ophthalmic image;
the focus analysis module is used for extracting image features of the classified ophthalmic image to obtain classified image features, determining focus images in the classified ophthalmic image according to the classified image features, acquiring disorder description information corresponding to the classified ophthalmic image, and analyzing focus categories of the classified ophthalmic image according to the disorder description information and the focus images to obtain a first focus category;
the label extraction module is used for scheduling a diagnosis report corresponding to the classified ophthalmic image, extracting a diagnosis label corresponding to the diagnosis report, and analyzing the focus category of the classified ophthalmic image according to the diagnosis label to obtain a second focus category;
The deviation calculation module is used for calculating the deviation coefficient of the first focus category and the second focus category through the following formula;
P = ∑_m [ T_m · log f(T_m) + (1 − T_{m+1}) · log f(T_{m+1}) ]
wherein P represents the deviation coefficient of the first focus category and the second focus category; m and m+1 denote the first focus category and the second focus category, respectively; T_m represents the true value corresponding to the first focus category; T_{m+1} represents the true value corresponding to the second focus category; log f(T_m) represents the logarithm corresponding to the true value of the first focus category; and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category;
the first classification module is used for analyzing the actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category and classifying the classified ophthalmic image for ophthalmic diseases according to the actual focus category if the deviation coefficient is larger than a preset deviation value, so as to obtain a first classification result;
and the second classification module is used for carrying out integration treatment on the first focus category and the second focus category to obtain a target focus category if the deviation coefficient is not greater than the preset deviation value, and carrying out ophthalmic disease classification on the classified ophthalmic image according to the target focus category to obtain a second classification result.
According to the invention, the ophthalmic acquired images to be classified are acquired and preprocessed, so that invalid images are removed and important images are retained. If the deviation coefficient is greater than the preset deviation value, there is an error between the first focus category and the second focus category; the invention then analyzes the actual focus category corresponding to the classified ophthalmic images according to the first and second focus categories so as to obtain a more accurate focus type. If the deviation coefficient is not greater than the preset deviation value, there is no error between the first and second focus categories, or the deviation is within an acceptable range; the invention then integrates the first and second focus categories so as to obtain a more detailed focus category. Therefore, the automatic ophthalmic disease classification method and system based on artificial intelligence provided by the invention can improve the accuracy of automatic ophthalmic disease classification.
Drawings
FIG. 1 is a schematic flow chart of an automatic classification method for ophthalmic diseases based on artificial intelligence according to an embodiment of the present application;
FIG. 2 is a functional block diagram of an automatic classification system for ophthalmic diseases based on artificial intelligence according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device for implementing the automatic classification method of ophthalmic diseases based on artificial intelligence according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the application provides an ophthalmic disease automatic classification method based on artificial intelligence. In the embodiment of the present application, the execution subject of the automatic classification method of ophthalmic diseases based on artificial intelligence includes, but is not limited to, at least one of a server, a terminal, and the like, which can be configured to execute the method provided in the embodiment of the present application. In other words, the automatic classification method of ophthalmic diseases based on artificial intelligence may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Referring to fig. 1, a flow chart of an automatic classifying method for ophthalmic diseases based on artificial intelligence according to an embodiment of the invention is shown. In this embodiment, the automatic classification method of ophthalmic diseases based on artificial intelligence includes steps S1 to S5.
S1, acquiring an ophthalmology acquisition image to be classified, performing image preprocessing on the ophthalmology acquisition image to obtain a target ophthalmology image, and performing image classification on the target ophthalmology image to obtain a classified ophthalmology image.
According to the method, the ophthalmic acquired images to be classified are acquired and preprocessed, so that invalid images are removed and important images are retained. The ophthalmic acquired images are images of the eyes obtained in an ophthalmic examination through various instruments and medical means, and the target ophthalmic image is the image obtained after the preprocessing operation.
As one embodiment of the present invention, the performing image preprocessing on the ophthalmologically acquired image to obtain a target ophthalmologically image includes: performing de-duplication processing on the ophthalmologic acquisition image to obtain a de-duplication ophthalmologic image, performing noise reduction processing on the de-duplication ophthalmologic image to obtain a noise reduction ophthalmologic image, performing image clipping on the noise reduction ophthalmologic image to obtain a clipping ophthalmologic image, and performing image augmentation processing on the clipping ophthalmologic image to obtain a target ophthalmologic image.
The de-duplication ophthalmic image is an image obtained by removing some repeated images in the ophthalmic collected image, the noise reduction ophthalmic image is an image obtained by reducing or inhibiting some noise in the de-duplication ophthalmic image, and the clipping ophthalmic image is an image obtained by clipping an invalid region in the noise reduction ophthalmic image.
Furthermore, the de-duplication of the ophthalmic acquired images can be realized through a hash algorithm; the noise reduction of the de-duplicated ophthalmic images can be realized through a mean filter; the image clipping of the noise-reduced ophthalmic images can be realized through an image clipping tool written in a scripting language; and the image augmentation of the clipped ophthalmic images can be realized through operations such as flipping, rotation, and color adjustment.
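Under the implementation choices named above (hash-based de-duplication, mean filtering, flip/rotate augmentation), a minimal NumPy sketch; MD5 stands in for the unspecified hash algorithm:

```python
import hashlib
import numpy as np

def deduplicate(images):
    """Hash-based de-duplication: identical pixel content collapses
    to one copy (MD5 is an illustrative choice of hash)."""
    seen, out = set(), []
    for img in images:
        h = hashlib.md5(np.ascontiguousarray(img).tobytes()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(img)
    return out

def mean_filter(img, k=3):
    """k x k mean filtering for noise reduction (border pixels kept as-is)."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    r = k // 2
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            out[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].mean()
    return out

def augment(img):
    """Flip and rotate: the augmentation operations named in the text."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]
```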
The invention can divide the images of the same type in the target ophthalmic image together by classifying the images of the target ophthalmic image, thereby improving the processing efficiency of the subsequent images, wherein the classified ophthalmic image is the image obtained by classifying the target ophthalmic image.
As one embodiment of the present invention, the image classifying the target ophthalmic image to obtain a classified ophthalmic image includes: identifying an image title of each image in the target ophthalmic image, extracting a title text in the image title, extracting keywords in the title text according to the title text to obtain title keywords, calculating the similarity of each keyword in the title keywords to obtain title similarity, and classifying the target ophthalmic image according to the title similarity to obtain a classified ophthalmic image.
The title text is information expressed by words in the title of the image, the title keywords are representative words in the title text, and the title similarity represents the similarity degree of each keyword in the title keywords.
Further, identifying the image title of each image in the target ophthalmic image may be achieved through OCR text recognition; extracting the title keywords from the title text may be achieved through the TF-IDF algorithm; calculating the similarity of each keyword in the title keywords may be achieved through the Manhattan distance algorithm; and classifying the target ophthalmic images may be achieved through an image classifier, such as an MLP classifier.
S2, extracting image features of the classified ophthalmic images to obtain classified image features, determining focus images in the classified ophthalmic images according to the classified image features, acquiring disorder description information corresponding to the classified ophthalmic images, and analyzing focus categories of the classified ophthalmic images according to the disorder description information and the focus images to obtain first focus categories.
By extracting the image features of the classified ophthalmic images, the invention identifies the characteristic regions of those images, which facilitates the subsequent analysis of the first focus category.
As one embodiment of the present invention, the image feature extraction of the classified ophthalmic image to obtain a classified image feature includes: identifying the image color in each image in the classified ophthalmic image, drawing a color histogram corresponding to each image according to the image color, constructing a color matrix corresponding to the color histogram, extracting color matrix features of the color matrix by using a preset HSV color model, identifying image pixel points in each image, detecting pixel point gray values corresponding to the image pixel points, measuring the occurrence frequency of each gray value in the pixel point gray values to obtain gray frequencies, constructing a gray matrix corresponding to each image according to the gray frequencies, extracting texture features of each image according to the gray matrix to obtain image texture features, carrying out feature fusion on the color matrix features and the image texture features to obtain fusion features, and taking the fusion features as classified image features corresponding to the classified ophthalmic image.
The image colors are the colors in each image of the classified ophthalmic image; the color histogram is a histogram describing the distribution of colors in each image; the color matrix is a matrix constructed from the color histogram; the preset HSV color model is a model for extracting color features; the gray-scale frequency is the number of occurrences of each value among the pixel gray values; the gray matrix is the matrix corresponding to the gray-scale frequencies; the image texture features are characterizations of the textures in each image; and the fusion features are features obtained by fusing the color matrix features and the image texture features together.
Further, as an optional embodiment of the present invention, identifying the image color in each image of the classified ophthalmic image may be implemented by a spectral analysis method; drawing the color histogram corresponding to each image may be implemented by a drawing tool such as Visio; constructing the color matrix corresponding to the color histogram may be implemented by a matrix construction function, for example a zeros function; measuring the frequency of occurrence of each gray value among the pixel gray values may be implemented by a countif function; texture feature extraction for each image may be implemented by vectorizing the gray matrix to obtain a matrix vector and deriving the texture features of each image from that vector; and image enhancement processing of the target collected image may be implemented by a gray-scale transformation enhancement method.
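The gray-frequency measurement and vectorization steps described above can be sketched as follows; this is a minimal illustration with assumed helper names, not the implementation of the disclosure:

```python
from collections import Counter

def gray_frequencies(image):
    # image: 2D list of gray values (0-255); returns value -> relative frequency
    flat = [p for row in image for p in row]
    total = len(flat)
    return {value: count / total for value, count in Counter(flat).items()}

def to_texture_vector(freqs, levels=256):
    # vectorize the gray-frequency mapping into a fixed-length texture vector,
    # one slot per possible gray level
    return [freqs.get(v, 0.0) for v in range(levels)]
```

A texture descriptor would then be derived from this vector, for example by feeding it to the feature-fusion step alongside the color features.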
Further, as an optional embodiment of the present invention, the extracting the color matrix feature of the color matrix by using a preset HSV color model includes:
extracting color matrix features of the color matrix using the following formula in the HSV color model:
wherein D_color represents the color matrix feature of the color matrix, A_i represents the matrix mean corresponding to the i-th color matrix, C represents the number of color matrices, i represents the serial number within the color matrix, E_{i,i+1} represents the matrix value of the i-th color matrix, B_i represents the matrix variance corresponding to the i-th color matrix, and F_i represents the matrix skewness corresponding to the i-th color matrix.
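The symbols above (matrix mean, variance and skewness) correspond to the classical first three color moments of a channel. As a hedged sketch only (the disclosure's exact formula is not reproduced here), they might be computed per HSV channel as:

```python
def color_moments(channel):
    # first three color moments of one HSV channel (a flat list of values):
    # mean, standard deviation, and a sign-preserving cube-root skewness
    n = len(channel)
    mean = sum(channel) / n
    var = sum((x - mean) ** 2 for x in channel) / n
    std = var ** 0.5
    skew_sum = sum((x - mean) ** 3 for x in channel) / n
    skew = abs(skew_sum) ** (1 / 3) * (1 if skew_sum >= 0 else -1)
    return mean, std, skew
```

Concatenating the three moments of the H, S and V channels would yield a nine-dimensional color feature per image.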
As an optional embodiment of the present invention, the feature fusing the color matrix feature and the image texture feature to obtain a fused feature includes: and respectively filling missing values of the color matrix features and the image texture features to obtain first filling features and second filling features, respectively carrying out standardization processing on the first filling features and the second filling features to obtain first standard features and second standard features, respectively carrying out feature selection on the first standard features and the second standard features to obtain first selection features and second selection features, carrying out vectorization operation on the first selection features and the second selection features to obtain first feature vectors and second feature vectors, carrying out vector fusion on the first feature vectors and the second feature vectors to obtain fusion feature vectors, and obtaining fusion features corresponding to the color matrix features and the image texture features according to the fusion feature vectors.
The first filling feature and the second filling feature are features obtained after the color matrix feature and the features in the image texture feature are filled by missing values, the first standard feature and the second standard feature are features obtained after feature formats of the first filling feature and the second filling feature are processed uniformly, the first selection feature and the second selection feature are representative features selected from the first standard feature and the second standard feature respectively, and the first feature vector and the second feature vector are vector expression forms corresponding to the first selection feature and the second selection feature respectively.
Further, as an optional embodiment of the present invention, the missing-value filling of the color matrix features and the image texture features may be implemented by a mean-filling method, the normalization of the first filling feature and the second filling feature may be implemented by a range (min-max) normalization method, the feature selection on the first standard feature and the second standard feature may be implemented by a feature selection algorithm, the vectorization of the first selection feature and the second selection feature may be implemented by the word2vec algorithm, and the vector fusion of the first feature vector and the second feature vector may be implemented by a weighted fusion method.
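The mean filling, normalization and weighted fusion steps mentioned above can be sketched as below; the function names and the equal default weights are assumptions of this illustration:

```python
def mean_fill(values):
    # replace None (missing) entries with the mean of the observed entries
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max(values):
    # range (min-max) normalization to [0, 1]
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def weighted_fuse(vec_a, vec_b, wa=0.5, wb=0.5):
    # element-wise weighted fusion of two equal-length feature vectors
    return [wa * a + wb * b for a, b in zip(vec_a, vec_b)]
```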
According to the invention, determining the focus image in the classified ophthalmic image according to the classified image features makes it possible to accurately locate the focus part in the classified ophthalmic image, wherein the focus image is the image in the classified ophthalmic image that exhibits disease; further, the focus image in the classified ophthalmic image may be determined by comparing the classified image features with historical image features.
According to the invention, obtaining the condition description information corresponding to the classified ophthalmic image and performing focus category analysis on the classified ophthalmic image according to the condition description information and the focus image can improve the accuracy of the focus category analysis, wherein the condition description information consists of the opinions and symptom descriptions given by a doctor or expert for the classified ophthalmic image, and the first focus category is the focus type obtained by jointly analyzing the condition description information and the focus image.
As one embodiment of the present invention, the performing focus category analysis on the classified ophthalmic image according to the condition description information and the focus image to obtain a first focus category includes: according to the disease description information, determining focus symptoms corresponding to the classified ophthalmic image, performing feature extraction on the focus image by using a preset convolutional neural network to obtain focus features, calculating pixel brightness values corresponding to each pixel point in the focus image, determining focus levels of the focus symptoms according to the pixel brightness values, and performing focus analysis on the classified ophthalmic image by combining the focus levels, the focus features and the focus symptoms to obtain a first focus category.
The preset convolutional neural network is a neural network used for extracting characteristics of a specific image, the focus characteristics are characteristics of the focus image about focus positions, the pixel brightness value represents the brightness degree of each pixel point, and the focus level represents the severity degree corresponding to the focus symptoms.
Further, as an optional embodiment of the present invention, the focus symptom corresponding to the classified ophthalmic image may be determined from the semantics of the condition description information; feature extraction of the focus image may be implemented by the convolution kernels in the preset convolutional neural network; the pixel brightness value corresponding to each pixel point in the focus image may be obtained by L = R×0.30 + G×0.59 + B×0.11, where R, G and B represent the red, green and blue color channels of each pixel point respectively; and the focus level of the focus symptom may be determined from the ratio of the pixel brightness value to a normal brightness value.
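The brightness formula and the ratio-based level determination above can be sketched directly; the reference brightness of 128 is an assumed default, not a value from the disclosure:

```python
def pixel_luminance(r, g, b):
    # weighted luma of one pixel: L = 0.30*R + 0.59*G + 0.11*B
    return 0.30 * r + 0.59 * g + 0.11 * b

def lesion_level(luminance, normal=128.0):
    # lesion level as the ratio of pixel luminance to a reference value
    # (128.0 is an assumed placeholder for the "normal brightness value")
    return luminance / normal
```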
And S3, scheduling a diagnosis report corresponding to the classified ophthalmic image, extracting a diagnosis label corresponding to the diagnosis report, and analyzing the focus category of the classified ophthalmic image according to the diagnosis label to obtain a second focus category.
The diagnostic report corresponding to the classified ophthalmic image is scheduled and the diagnostic label corresponding to the diagnostic report is extracted, so that the identification information in the diagnostic report can be obtained, which facilitates the subsequent analysis of focus categories, wherein the diagnostic report is the diagnostic opinion corresponding to the classified ophthalmic image and the diagnostic label is the identification information corresponding to the diagnostic report; further, scheduling the diagnostic report corresponding to the classified ophthalmic image may be implemented by a round-robin (time-slice rotation) scheduling algorithm, and extracting the diagnostic label corresponding to the diagnostic report may be implemented by a label extractor.
The invention can improve the accuracy of focus analysis by analyzing the focus category of the classified ophthalmic image according to the diagnosis tag, wherein the second focus category is obtained by analyzing the focus category of the classified ophthalmic image according to the diagnosis tag.
As one embodiment of the present invention, the performing focus category analysis on the classified ophthalmic image according to the diagnostic tag, to obtain a second focus category includes: performing attribute analysis on the diagnosis tag to obtain a tag attribute, calculating a tag weight coefficient corresponding to the diagnosis tag according to the tag attribute, performing tag screening on the diagnosis tag according to the tag weight coefficient to obtain a target tag, performing tag fusion on the target tag to obtain a fusion tag, performing semantic analysis on the fusion tag to obtain a tag semantic, and performing focus category analysis on the classified ophthalmic image according to the tag semantic to obtain a second focus category.
The label attributes are the attribute information corresponding to the diagnostic labels, such as the item to which a label belongs; label screening of the diagnostic labels may be achieved according to the value of the label weight; label fusion of the target labels may be achieved by a label fusion tool written in a scripting language; semantic analysis of the fusion label may be achieved by a semantic analysis method; and focus category analysis of the classified ophthalmic image may be achieved by interpreting the label semantics.
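As a non-authoritative illustration, the weight-based screening and fusion of labels described above might look like the following; the threshold value and separator are assumptions of this sketch:

```python
def screen_tags(tags, weights, threshold=0.5):
    # keep only the diagnostic labels whose weight coefficient meets the
    # threshold; threshold=0.5 is an assumed placeholder value
    return [t for t, w in zip(tags, weights) if w >= threshold]

def fuse_tags(tags, sep="|"):
    # concatenate the screened target labels into a single fused label string
    return sep.join(tags)
```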
Further, as an optional embodiment of the present invention, the calculating, according to the tag attribute, a tag weight coefficient corresponding to the diagnostic tag includes:
and calculating a label weight coefficient corresponding to the diagnosis label through the following formula:
wherein G_j represents the label weight coefficient corresponding to the j-th diagnostic label, j represents the serial number of the diagnostic label, H_j represents the mapping value corresponding to the j-th diagnostic label, K_j represents the vector average value corresponding to the j-th diagnostic label, and the remaining term indicates the number of labels corresponding to the diagnostic labels.
S4, calculating the deviation coefficient of the first focus category and the second focus category.
The invention calculates the deviation coefficient of the first focus category and the second focus category so as to reveal the deviation between the two analyses and to facilitate the subsequent determination of the specific category corresponding to the classified ophthalmic image, wherein the deviation coefficient represents the degree of deviation between the first focus category and the second focus category.
As an embodiment of the present invention, the calculating a deviation coefficient of the first lesion category and the second lesion category includes:
calculating a deviation coefficient of the first lesion category and the second lesion category by the following formula:
P = ∑_m [T_m · log f(T_m) + (1 − T_{m+1}) · log f(T_{m+1})]
Wherein P represents the deviation coefficient of the first focus category and the second focus category, m and m+1 denote the first focus category and the second focus category respectively, T_m represents the true value corresponding to the first focus category, T_{m+1} represents the true value corresponding to the second focus category, log f(T_m) represents the logarithm corresponding to the true value of the first focus category, and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category.
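As a hedged illustration, the deviation coefficient can be computed as sketched below; the mapping function f is not specified in the disclosure, so it is passed in as an assumed parameter:

```python
import math

def deviation_coefficient(t_first, t_second, f):
    # P = sum_m [ T_m*log f(T_m) + (1 - T_{m+1})*log f(T_{m+1}) ]
    # t_first, t_second: true values for the two focus-category analyses;
    # f: an assumed mapping whose output must be positive for log()
    total = 0.0
    for tm, tm1 in zip(t_first, t_second):
        total += tm * math.log(f(tm)) + (1 - tm1) * math.log(f(tm1))
    return total
```

The result would then be compared against the preset deviation value to choose between steps S5 and S6.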
And S5, if the deviation coefficient is larger than a preset deviation value, analyzing an actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category, and classifying the classified ophthalmic image for the ophthalmic disease according to the actual focus category to obtain a first classification result.
It should be appreciated that if the deviation coefficient is greater than a preset deviation value, it indicates that an error exists between the first focus category and the second focus category, and the present invention analyzes an actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category so as to obtain a more accurate focus category, where the actual focus category is a result obtained by analyzing the first focus category and the second focus category, and further, the analyzing the actual focus category corresponding to the classified ophthalmic image may be obtained by comparing the first focus category and the second focus category with an actual symptom.
According to the invention, the ophthalmic diseases of the classified ophthalmic images are classified according to the actual focus categories, so that the accuracy of the disease classification of the classified ophthalmic images is improved, wherein the first classification result is obtained after the classified ophthalmic images are subjected to the ophthalmic diseases classification according to the actual focus categories, and further, the classified ophthalmic images are subjected to the ophthalmic diseases classification through a classifier, such as a decision tree classifier.
And S6, if the deviation coefficient is not greater than the preset deviation value, integrating the first focus category and the second focus category to obtain a target focus category, and classifying the classified ophthalmic image into ophthalmic diseases according to the target focus category to obtain a second classification result.
It should be understood that if the deviation coefficient is not greater than the preset deviation value, it indicates that there is no significant error between the first focus category and the second focus category, or that the difference lies within an acceptable range; the invention then performs integration processing on the first focus category and the second focus category so as to obtain a more detailed focus category, where the target focus category is the focus category obtained after combining the information of the first focus category and the second focus category; further, the integration processing of the first focus category and the second focus category may be achieved through a text-join (string concatenation) function.
According to the invention, the classified ophthalmic image is subjected to ophthalmic disease classification according to the target focus category, so that the disease classification corresponding to the classified ophthalmic image is completed, wherein the second classification result is a result obtained after the classified ophthalmic image is classified according to the target focus category, and further, the classified ophthalmic image is subjected to ophthalmic disease classification through the decision tree classifier.
According to the invention, the ophthalmic acquisition image to be classified is acquired and subjected to image preprocessing, so that invalid images in the ophthalmic acquisition image are removed and the important images are retained. Furthermore, if the deviation coefficient is greater than the preset deviation value, it indicates that an error exists between the first focus category and the second focus category, and the invention analyzes the actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category so as to obtain a more accurate focus category; if the deviation coefficient is not greater than the preset deviation value, it indicates that there is no significant error between the first focus category and the second focus category, or that the deviation lies within an acceptable range, and the invention integrates the first focus category and the second focus category so as to obtain a more detailed focus category. Therefore, the artificial-intelligence-based automatic ophthalmic disease classification method provided by the embodiment of the invention can improve the accuracy of automatic ophthalmic disease classification.
Fig. 2 is a functional block diagram of an automatic classification system for ophthalmic diseases based on artificial intelligence according to an embodiment of the present invention.
The automatic ophthalmic disease classifying system 100 based on artificial intelligence according to the present invention may be installed in an electronic device. Depending on the functions implemented, the automatic classification system 100 for ophthalmic diseases based on artificial intelligence may include an image classification module 101, a lesion analysis module 102, a label extraction module 103, a deviation calculation module 104, a first classification module 105, and a second classification module 106. The module of the invention, which may also be referred to as a unit, refers to a series of computer program segments, which are stored in the memory of the electronic device, capable of being executed by the processor of the electronic device and of performing a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the image classification module 101 is configured to obtain an ophthalmic collected image to be classified, perform image preprocessing on the ophthalmic collected image to obtain a target ophthalmic image, and perform image classification on the target ophthalmic image to obtain a classified ophthalmic image;
the focus analysis module 102 is configured to perform image feature extraction on the classified ophthalmic image to obtain classified image features, determine a focus image in the classified ophthalmic image according to the classified image features, obtain disorder description information corresponding to the classified ophthalmic image, and perform focus category analysis on the classified ophthalmic image according to the disorder description information and the focus image to obtain a first focus category;
The label extracting module 103 is configured to schedule a diagnostic report corresponding to the classified ophthalmic image, extract a diagnostic label corresponding to the diagnostic report, and perform lesion category analysis on the classified ophthalmic image according to the diagnostic label to obtain a second lesion category;
the deviation calculating module 104 is configured to calculate a deviation coefficient of the first lesion category and the second lesion category according to the following formula;
P = ∑_m [T_m · log f(T_m) + (1 − T_{m+1}) · log f(T_{m+1})]
wherein P represents the deviation coefficient of the first focus category and the second focus category, m and m+1 denote the first focus category and the second focus category respectively, T_m represents the true value corresponding to the first focus category, T_{m+1} represents the true value corresponding to the second focus category, log f(T_m) represents the logarithm corresponding to the true value of the first focus category, and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category;
the first classification module 105 is configured to analyze an actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category if the deviation coefficient is greater than a preset deviation value, and classify the classified ophthalmic image for an ophthalmic disease according to the actual focus category, so as to obtain a first classification result;
The second classification module 106 is configured to integrate the first lesion category and the second lesion category to obtain a target lesion category if the deviation coefficient is not greater than the preset deviation value, and classify the classified ophthalmic image according to the target lesion category to obtain a second classification result.
In detail, each module in the automatic classification system 100 for ophthalmic diseases based on artificial intelligence according to the embodiment of the present application adopts the same technical means as the automatic classification method for ophthalmic diseases based on artificial intelligence described in fig. 1, and can produce the same technical effects, which are not described herein.
Fig. 3 is a schematic structural diagram of an electronic device 1 for implementing an automatic classification method of ophthalmic diseases based on artificial intelligence according to an embodiment of the present application.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program stored in the memory 11 and executable on the processor 10, such as an artificial intelligence based automatic classification method program for ophthalmic diseases.
The processor 10 may be formed by an integrated circuit in some embodiments, for example a single packaged integrated circuit, or may be formed by a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), a microprocessor, a digital processing chip, a graphics processor, a combination of various control chips, and so on. The processor 10 is the control unit (Control Unit) of the electronic device 1; it connects the respective parts of the entire electronic device using various interfaces and lines, executes the programs or modules stored in the memory 11 (for example, the artificial-intelligence-based automatic ophthalmic disease classification program), and invokes data stored in the memory 11 to perform the various functions of the electronic device and to process data.
The memory 11 includes at least one type of readable storage medium including flash memory, a removable hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, such as a mobile hard disk of the electronic device. The memory 11 may in other embodiments also be an external storage device of the electronic device, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only for storing application software installed in an electronic device and various types of data, such as codes of an automatic classification method program for ophthalmic diseases based on artificial intelligence, but also for temporarily storing data that has been output or is to be output.
The communication bus 12 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
The communication interface 13 is used for communication between the electronic device 1 and other devices, including a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), or alternatively a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device and for displaying a visual user interface.
Fig. 3 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
It should be understood that the embodiments described are for illustrative purposes only and are not limited to this configuration in the scope of the patent application.
An artificial intelligence based automatic classification method program for ophthalmic diseases stored in the memory 11 of the electronic device 1 is a combination of instructions which, when executed in the processor 10, may implement:
Acquiring an ophthalmic acquisition image to be classified, performing image preprocessing on the ophthalmic acquisition image to obtain a target ophthalmic image, and performing image classification on the target ophthalmic image to obtain a classified ophthalmic image;
extracting image features of the classified ophthalmic image to obtain classified image features, determining focus images in the classified ophthalmic image according to the classified image features, acquiring disorder description information corresponding to the classified ophthalmic image, and analyzing focus categories of the classified ophthalmic image according to the disorder description information and the focus images to obtain first focus categories;
scheduling a diagnosis report corresponding to the classified ophthalmic image, extracting a diagnosis label corresponding to the diagnosis report, and analyzing the focus category of the classified ophthalmic image according to the diagnosis label to obtain a second focus category;
calculating a deviation coefficient of the first lesion category and the second lesion category by the following formula;
P = ∑_m [T_m · log f(T_m) + (1 − T_{m+1}) · log f(T_{m+1})]
wherein P represents the deviation coefficient of the first focus category and the second focus category, m and m+1 denote the first focus category and the second focus category respectively, T_m represents the true value corresponding to the first focus category, T_{m+1} represents the true value corresponding to the second focus category, log f(T_m) represents the logarithm corresponding to the true value of the first focus category, and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category;
if the deviation coefficient is larger than a preset deviation value, analyzing an actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category, and classifying the classified ophthalmic image for ophthalmic diseases according to the actual focus category to obtain a first classification result;
and if the deviation coefficient is not greater than the preset deviation value, carrying out integration treatment on the first focus category and the second focus category to obtain a target focus category, and carrying out ophthalmic disease classification on the classified ophthalmic image according to the target focus category to obtain a second classification result.
In particular, the specific implementation method of the above instructions by the processor 10 may refer to the description of the relevant steps in the corresponding embodiment of the drawings, which is not repeated herein.
Further, the modules/units integrated in the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. The computer readable storage medium may be volatile or nonvolatile. For example, the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-only memory (ROM).
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
acquiring an ophthalmic acquisition image to be classified, performing image preprocessing on the ophthalmic acquisition image to obtain a target ophthalmic image, and performing image classification on the target ophthalmic image to obtain a classified ophthalmic image;
extracting image features of the classified ophthalmic image to obtain classified image features, determining focus images in the classified ophthalmic image according to the classified image features, acquiring disorder description information corresponding to the classified ophthalmic image, and analyzing focus categories of the classified ophthalmic image according to the disorder description information and the focus images to obtain first focus categories;
scheduling a diagnosis report corresponding to the classified ophthalmic image, extracting a diagnosis label corresponding to the diagnosis report, and analyzing the focus category of the classified ophthalmic image according to the diagnosis label to obtain a second focus category;
calculating a deviation coefficient of the first lesion category and the second lesion category by the following formula;
P = ∑_m [T_m · log f(T_m) + (1 − T_{m+1}) · log f(T_{m+1})]
Wherein P represents the deviation coefficient of the first focus category and the second focus category, m and m+1 denote the first focus category and the second focus category respectively, T_m represents the true value corresponding to the first focus category, T_{m+1} represents the true value corresponding to the second focus category, log f(T_m) represents the logarithm corresponding to the true value of the first focus category, and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category;
if the deviation coefficient is greater than a preset deviation value, analyzing an actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category, and performing ophthalmic disease classification on the classified ophthalmic image according to the actual focus category to obtain a first classification result;
and if the deviation coefficient is not greater than the preset deviation value, performing integration processing on the first focus category and the second focus category to obtain a target focus category, and performing ophthalmic disease classification on the classified ophthalmic image according to the target focus category to obtain a second classification result.
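By way of illustration only, the thresholded decision between the first and second classification results can be sketched as follows; the clamped mapping f, and the stand-in "analyze the actual category" and "integrate" steps, are assumptions not fixed by the disclosure:

```python
import math

def deviation_coefficient(truth_pairs, f=lambda t: max(t, 1e-9)):
    """P = sum_m [ T_m * log f(T_m) + (1 - T_{m+1}) * log f(T_{m+1}) ].

    truth_pairs: (T_m, T_{m+1}) true-value pairs for the first and
    second focus categories; f is a hypothetical mapping clamped away
    from zero so the logarithm stays defined.
    """
    return sum(tm * math.log(f(tm)) + (1.0 - tm1) * math.log(f(tm1))
               for tm, tm1 in truth_pairs)

def classify(first_cat, second_cat, truth_pairs, preset_deviation=0.0):
    """Branch on the deviation coefficient as the method describes."""
    p = deviation_coefficient(truth_pairs)
    if p > preset_deviation:
        # large deviation: analyze the actual focus category
        # (hypothetically, prefer the image-derived first category)
        return "first_classification_result", first_cat
    # small deviation: integrate the two categories
    merged = first_cat if first_cat == second_cat else f"{first_cat}/{second_cat}"
    return "second_classification_result", merged
```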
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by a single unit or means in software or hardware. The terms first, second, etc. are used to denote names, not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present application without departing from the spirit and scope of the technical solution of the present application.

Claims (10)

1. An automatic classification method of ophthalmic diseases based on artificial intelligence, comprising:
acquiring an ophthalmic acquisition image to be classified, performing image preprocessing on the ophthalmic acquisition image to obtain a target ophthalmic image, and performing image classification on the target ophthalmic image to obtain a classified ophthalmic image;
extracting image features of the classified ophthalmic image to obtain classified image features, determining a focus image in the classified ophthalmic image according to the classified image features, acquiring disorder description information corresponding to the classified ophthalmic image, and performing focus category analysis on the classified ophthalmic image according to the disorder description information and the focus image to obtain a first focus category;
retrieving a diagnosis report corresponding to the classified ophthalmic image, extracting a diagnosis label corresponding to the diagnosis report, and performing focus category analysis on the classified ophthalmic image according to the diagnosis label to obtain a second focus category;
calculating a deviation coefficient of the first focus category and the second focus category by the following formula:
P = Σ_m [T_m log f(T_m) + (1 − T_{m+1}) log f(T_{m+1})]
wherein P represents the deviation coefficient of the first focus category and the second focus category; m and m+1 index the first focus category and the second focus category, respectively; T_m represents the true value corresponding to the first focus category; T_{m+1} represents the true value corresponding to the second focus category; log f(T_m) represents the logarithm corresponding to the true value of the first focus category; and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category;
if the deviation coefficient is greater than a preset deviation value, analyzing an actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category, and performing ophthalmic disease classification on the classified ophthalmic image according to the actual focus category to obtain a first classification result;
and if the deviation coefficient is not greater than the preset deviation value, performing integration processing on the first focus category and the second focus category to obtain a target focus category, and performing ophthalmic disease classification on the classified ophthalmic image according to the target focus category to obtain a second classification result.
2. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 1, wherein the performing image preprocessing on the ophthalmic acquisition image to obtain a target ophthalmic image comprises:
performing de-duplication processing on the ophthalmic acquisition image to obtain a de-duplicated ophthalmic image;
performing noise reduction processing on the de-duplicated ophthalmic image to obtain a noise-reduced ophthalmic image;
performing image cropping on the noise-reduced ophthalmic image to obtain a cropped ophthalmic image;
and performing image augmentation processing on the cropped ophthalmic image to obtain the target ophthalmic image.
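Claim 2's four-step preprocessing chain (de-duplication, noise reduction, cropping, augmentation) can be sketched as below; the hash-based de-duplication, 3×3 mean filter, fixed crop box, and horizontal-flip augmentation are illustrative choices the claim leaves open:

```python
import hashlib
import numpy as np

def preprocess(images, crop=((16, -16), (16, -16))):
    """De-duplicate, denoise, crop, and augment a batch of grayscale images.

    images: list of HxW numpy arrays. The concrete operations below are
    assumptions; the claim only names the four processing steps.
    """
    # 1. de-duplication via a content hash
    seen, unique = set(), []
    for img in images:
        key = hashlib.sha1(img.tobytes()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(img)
    out = []
    for img in unique:
        # 2. noise reduction: 3x3 mean filter over an edge-padded image
        padded = np.pad(img.astype(float), 1, mode="edge")
        h, w = img.shape
        den = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
        # 3. cropping to a fixed region of interest
        (y0, y1), (x0, x1) = crop
        cropped = den[y0:y1, x0:x1]
        # 4. augmentation: keep the original plus a horizontal flip
        out.extend([cropped, cropped[:, ::-1]])
    return out
```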
3. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 1, wherein the performing image classification on the target ophthalmic image to obtain a classified ophthalmic image comprises:
identifying an image title of each image in the target ophthalmic image, and extracting title text in the image title;
extracting keywords from the title text to obtain title keywords;
calculating the similarity of each of the title keywords to obtain a title similarity;
and carrying out image classification on the target ophthalmic image according to the title similarity to obtain a classified ophthalmic image.
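Claim 3's title-based grouping can be sketched as follows; the keyword extraction (simple tokenization with a small stop-word list) and cosine similarity are assumptions, since the claim does not fix how keywords or similarities are computed:

```python
import re
from collections import Counter
from math import sqrt

STOPWORDS = frozenset({"the", "of", "a", "an", "image"})

def title_keywords(title):
    # tokenize the title text and drop uninformative words
    return [w for w in re.findall(r"[a-z]+", title.lower()) if w not in STOPWORDS]

def cosine(a, b):
    # cosine similarity between two bags of keywords
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_by_title(titles, threshold=0.5):
    """Group images whose title keywords are sufficiently similar."""
    groups = []  # each entry: (representative keywords, member titles)
    for t in titles:
        kws = title_keywords(t)
        for rep, members in groups:
            if cosine(rep, kws) >= threshold:
                members.append(t)
                break
        else:
            groups.append((kws, [t]))
    return [members for _, members in groups]
```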
4. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 1, wherein the image feature extraction of the classified ophthalmic image to obtain classified image features comprises:
identifying the image color in each image in the classified ophthalmic images, drawing a color histogram corresponding to each image according to the image color, and constructing a color matrix corresponding to the color histogram;
extracting color matrix features of the color matrix by using a preset HSV color model, identifying image pixel points in each image, and detecting pixel point gray values corresponding to the image pixel points;
measuring the frequency of each gray value among the pixel point gray values to obtain gray frequencies, constructing a gray matrix corresponding to each image according to the gray frequencies, and extracting texture features of each image according to the gray matrices to obtain image texture features;
and carrying out feature fusion on the color matrix features and the image texture features to obtain fusion features, and taking the fusion features as classified image features corresponding to the classified ophthalmic images.
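Claim 4's color-histogram and gray-matrix texture features can be sketched as below; the bin counts, the horizontal co-occurrence construction of the gray matrix, and the contrast/energy statistics are illustrative stand-ins for details the claim leaves open:

```python
import numpy as np

def color_histogram_features(img_hsv, bins=8):
    # per-channel normalized histogram over the HSV image
    feats = []
    for c in range(3):
        h, _ = np.histogram(img_hsv[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)

def gray_texture_features(gray, levels=8):
    # gray matrix: co-occurrence frequencies of quantized gray values
    # between horizontally adjacent pixels
    q = (gray.astype(int) * levels) // 256
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    contrast = sum(glcm[i, j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])

def classified_image_features(img_hsv, gray):
    # feature fusion: concatenate color and texture descriptors
    return np.concatenate([color_histogram_features(img_hsv),
                           gray_texture_features(gray)])
```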
5. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 4, wherein the extracting the color matrix features of the color matrix using a preset HSV color model comprises:
extracting color matrix features of the color matrix using the following formula in the HSV color model:
wherein D is color Representing color matrix characteristics of a color matrix, A i Represents the matrix average value corresponding to the ith color matrix, C represents the matrix number of the color matrix, i represents the serial number in the color matrix, E i,i+1 Matrix value of the ith matrix representing the ith color matrix, B i Representing the matrix variance corresponding to the ith color matrix, F i And represents the matrix skewness corresponding to the ith color matrix.
6. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 4, wherein the performing feature fusion on the color matrix features and the image texture features to obtain fusion features comprises:
filling missing values of the color matrix features and the image texture features respectively to obtain a first filling feature and a second filling feature;
respectively performing standardization processing on the first filling feature and the second filling feature to obtain a first standard feature and a second standard feature, and respectively performing feature selection on the first standard feature and the second standard feature to obtain a first selected feature and a second selected feature;
vectorizing the first selected feature and the second selected feature to obtain a first feature vector and a second feature vector, and performing vector fusion on the first feature vector and the second feature vector to obtain a fusion feature vector;
and obtaining the fusion characteristic corresponding to the color matrix characteristic and the image texture characteristic according to the fusion characteristic vector.
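Claim 6's fill / standardize / select / vectorize / fuse chain can be sketched as follows; mean imputation, z-score standardization, and largest-magnitude selection are assumed concrete choices, and fusion is plain concatenation, none of which the claim prescribes:

```python
import numpy as np

def _prepare(feat, keep):
    f = np.asarray(feat, dtype=float)
    # 1. missing-value filling with the feature mean
    f = np.where(np.isnan(f), np.nanmean(f), f)
    # 2. standardization to zero mean / unit variance
    std = f.std() or 1.0
    f = (f - f.mean()) / std
    # 3. feature selection: keep the `keep` largest-magnitude entries
    #    (an illustrative criterion; the claim leaves the method open)
    idx = np.argsort(-np.abs(f))[:keep]
    return f[np.sort(idx)]

def fuse_features(color_feats, texture_feats, keep=4):
    # 4. vectorize both prepared features and fuse by concatenation
    return np.concatenate([_prepare(color_feats, keep),
                           _prepare(texture_feats, keep)])
```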
7. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 1, wherein the performing focus category analysis on the classified ophthalmic image according to the disease description information and the focus image to obtain a first focus category comprises:
determining focus symptoms corresponding to the classified ophthalmic images according to the disorder description information;
extracting features of the focus image by using a preset convolutional neural network to obtain focus features, and calculating a pixel brightness value corresponding to each pixel point in the focus image;
determining a lesion level of the lesion symptom according to the pixel brightness value;
and carrying out focus analysis on the classified ophthalmic image by combining the focus level, the focus characteristic and the focus symptom to obtain a first focus category.
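Claim 7's brightness-based focus level can be sketched as below; the brightness thresholds and the final combination rule are assumptions (a deployed system would feed the level, the CNN-derived focus features, and the symptom into a trained classifier):

```python
import numpy as np

def focus_level(focus_img, thresholds=(60, 120, 180)):
    """Map the mean pixel brightness of a focus image to a level 0..3.

    thresholds are illustrative brightness cut-offs on a 0-255 scale;
    the claim does not specify concrete values.
    """
    # pixel brightness: here simply the gray value (mean over channels
    # for a color image)
    brightness = float(np.mean(focus_img))
    return sum(brightness > t for t in thresholds)

def first_focus_category(focus_img, focus_features, symptom):
    # combine the focus level, the (CNN-derived) focus features, and the
    # symptom from the disorder description into a category label; the
    # string concatenation here is a placeholder for a real classifier
    return f"{symptom}:level{focus_level(focus_img)}"
```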
8. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 1, wherein the performing focus category analysis on the classified ophthalmic image according to the diagnostic tag to obtain a second focus category comprises:
performing attribute analysis on the diagnostic tag to obtain a tag attribute, and calculating a tag weight coefficient corresponding to the diagnostic tag according to the tag attribute;
according to the label weight coefficient, performing label screening on the diagnosis label to obtain a target label, and performing label fusion on the target label to obtain a fusion label;
and carrying out semantic analysis on the fusion tag to obtain tag semantics, and carrying out focus category analysis on the classified ophthalmic image according to the tag semantics to obtain a second focus category.
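Claim 8's attribute-weighted screening and fusion of diagnostic labels can be sketched as follows; the attribute names, weights, screening threshold, and pick-the-heaviest fusion are all illustrative assumptions:

```python
# hypothetical attribute weighting; the claim does not name attributes
ATTR_WEIGHTS = {"source": 0.5, "recency": 0.3, "specificity": 0.2}

def label_weight(attrs):
    """Weight a diagnostic label from its attribute scores in [0, 1]."""
    return sum(ATTR_WEIGHTS.get(k, 0.0) * v for k, v in attrs.items())

def second_focus_category(labels, min_weight=0.4):
    # labels: list of (label_text, attribute dict)
    # screen labels by weight coefficient, then fuse the survivors
    kept = [(text, label_weight(attrs)) for text, attrs in labels
            if label_weight(attrs) >= min_weight]
    if not kept:
        return None
    # fusion: here simply the highest-weight label's text; a real system
    # would merge the semantics of all surviving labels
    return max(kept, key=lambda kv: kv[1])[0]
```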
9. The automatic classification method of ophthalmic diseases based on artificial intelligence according to claim 8, wherein the calculating the label weight coefficient corresponding to the diagnostic label according to the label attribute comprises:
and calculating a label weight coefficient corresponding to the diagnosis label through the following formula:
wherein G is j Represents the label weight coefficient corresponding to the j-th diagnostic label, j represents the serial number of the diagnostic label, H j Representing the mapping value, K, corresponding to the jth diagnostic tag j Represents the vector average value corresponding to the jth diagnostic tag,indicating the number of labels corresponding to the diagnostic label.
10. An artificial intelligence based automatic ophthalmic disease classification system, the system comprising:
the image classification module is used for acquiring an ophthalmic acquisition image to be classified, carrying out image preprocessing on the ophthalmic acquisition image to obtain a target ophthalmic image, and carrying out image classification on the target ophthalmic image to obtain a classified ophthalmic image;
The focus analysis module is used for extracting image features of the classified ophthalmic image to obtain classified image features, determining focus images in the classified ophthalmic image according to the classified image features, acquiring disorder description information corresponding to the classified ophthalmic image, and analyzing focus categories of the classified ophthalmic image according to the disorder description information and the focus images to obtain a first focus category;
the label extraction module is used for retrieving a diagnosis report corresponding to the classified ophthalmic image, extracting a diagnosis label corresponding to the diagnosis report, and performing focus category analysis on the classified ophthalmic image according to the diagnosis label to obtain a second focus category;
the deviation calculation module is used for calculating the deviation coefficient of the first focus category and the second focus category through the following formula;
P = Σ_m [T_m log f(T_m) + (1 − T_{m+1}) log f(T_{m+1})]
wherein P represents the deviation coefficient of the first focus category and the second focus category; m and m+1 index the first focus category and the second focus category, respectively; T_m represents the true value corresponding to the first focus category; T_{m+1} represents the true value corresponding to the second focus category; log f(T_m) represents the logarithm corresponding to the true value of the first focus category; and log f(T_{m+1}) represents the logarithm corresponding to the true value of the second focus category;
the first classification module is used for analyzing the actual focus category corresponding to the classified ophthalmic image according to the first focus category and the second focus category and classifying the classified ophthalmic image for ophthalmic diseases according to the actual focus category if the deviation coefficient is larger than a preset deviation value, so as to obtain a first classification result;
and the second classification module is used for performing integration processing on the first focus category and the second focus category to obtain a target focus category if the deviation coefficient is not greater than the preset deviation value, and performing ophthalmic disease classification on the classified ophthalmic image according to the target focus category to obtain a second classification result.
CN202310534978.5A 2023-05-12 2023-05-12 Automatic ophthalmic disease classification method and system based on artificial intelligence Pending CN116681923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310534978.5A CN116681923A (en) 2023-05-12 2023-05-12 Automatic ophthalmic disease classification method and system based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310534978.5A CN116681923A (en) 2023-05-12 2023-05-12 Automatic ophthalmic disease classification method and system based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN116681923A true CN116681923A (en) 2023-09-01

Family

ID=87789917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310534978.5A Pending CN116681923A (en) 2023-05-12 2023-05-12 Automatic ophthalmic disease classification method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116681923A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274278A (en) * 2023-09-28 2023-12-22 武汉大学人民医院(湖北省人民医院) Retina image focus part segmentation method and system based on simulated receptive field
CN117274278B (en) * 2023-09-28 2024-04-02 武汉大学人民医院(湖北省人民医院) Retina image focus part segmentation method and system based on simulated receptive field
CN117523503A (en) * 2024-01-08 2024-02-06 威科电子模块(深圳)有限公司 Preparation equipment safety monitoring method and system based on thick film circuit board
CN117523503B (en) * 2024-01-08 2024-05-03 威科电子模块(深圳)有限公司 Preparation equipment safety monitoring method and system based on thick film circuit board

Similar Documents

Publication Publication Date Title
CN113159147B (en) Image recognition method and device based on neural network and electronic equipment
CN116681923A (en) Automatic ophthalmic disease classification method and system based on artificial intelligence
CN112906502A (en) Training method, device and equipment of target detection model and storage medium
CN113270197A (en) Health prediction method, system and storage medium based on artificial intelligence
CN117392470B (en) Fundus image multi-label classification model generation method and system based on knowledge graph
CN116186594B (en) Method for realizing intelligent detection of environment change trend based on decision network combined with big data
CN116311539B (en) Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
CN117274278B (en) Retina image focus part segmentation method and system based on simulated receptive field
CN113707304A (en) Triage data processing method, device, equipment and storage medium
CN113361482A (en) Nuclear cataract identification method, device, electronic device and storage medium
CN116884612A (en) Intelligent analysis method, device, equipment and storage medium for disease risk level
CN111753618A (en) Image recognition method and device, computer equipment and computer readable storage medium
CN116129182A (en) Multi-dimensional medical image classification method based on knowledge distillation and neighbor classification
CN113792801B (en) Method, device, equipment and storage medium for detecting face dazzling degree
CN114049676A (en) Fatigue state detection method, device, equipment and storage medium
CN115526882A (en) Medical image classification method, device, equipment and storage medium
CN113515591B (en) Text defect information identification method and device, electronic equipment and storage medium
CN113920590A (en) Living body detection method, living body detection device, living body detection equipment and readable storage medium
CN114187476A (en) Vehicle insurance information checking method, device, equipment and medium based on image analysis
CN117523503B (en) Preparation equipment safety monitoring method and system based on thick film circuit board
CN117235480B (en) Screening method and system based on big data under data processing
CN117253096B (en) Finger-adaptive health index monitoring method, device, equipment and storage medium
CN116959099B (en) Abnormal behavior identification method based on space-time diagram convolutional neural network
Sharma et al. Advancement in diabetic retinopathy diagnosis techniques: automation and assistive tools
CN117373580B (en) Performance analysis method and system for realizing titanium alloy product based on time sequence network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination