CN112950577A - Image processing method, image processing device, electronic equipment and storage medium
- Publication number: CN112950577A
- Application number: CN202110221152.4A
- Authority: CN (China)
- Prior art keywords: historical, image, target, determining, network model
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012 — Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T2207/10101 — Optical tomography; Optical coherence tomography [OCT] (G06T2207/10 Image acquisition modality; G06T2207/10072 Tomographic images)
- G06T2207/20081 — Training; Learning (G06T2207/20 Special algorithmic details)
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20132 — Image cropping (G06T2207/20112 Image segmentation details)
- G06T2207/30041 — Eye; Retina; Ophthalmic (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)
Abstract
Embodiments of the invention disclose an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method comprises: acquiring an original image of a target object; inputting the original image into a trained image recognition network model to obtain position information of a target tissue in the target object; determining index parameters of the target tissue in the original image based on the position information of the target tissue; and inputting each index parameter into a trained category identification network model, determining a probability value that the target object belongs to a target category, and determining, based on the probability value, whether the target object belongs to the target category. The target category of the target object can thus be determined efficiently and accurately.
Description
Technical Field
Embodiments of the present invention relate to image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Glaucoma is one of the leading causes of blindness in the human eye. Glaucoma is mainly divided into three types: childhood, secondary, and primary glaucoma; primary glaucoma is further divided into primary open-angle glaucoma and primary angle-closure glaucoma. Primary angle closure disease (PACD) comprises three stages: (1) primary angle closure suspect (PACS); (2) primary angle closure (PAC); (3) primary angle closure glaucoma (PACG). PACG is an irreversible disease whose clinical manifestations are increased intraocular pressure, visual field loss, and optic nerve damage. In the early stages of onset (the PACS and PAC stages), timely symptomatic treatment can prevent or delay progression to PACG. Early diagnosis of PACD is therefore very important.
Currently, clinical diagnosis of PACD relies mainly on gonioscopy. This method requires anaesthetizing and contacting the patient's eye, and once an eye image has been obtained, diagnosis depends on the subjective experience of the doctor.
This diagnostic method is therefore overly dependent on the physician's subjective experience and is poorly reproducible.
Disclosure of Invention
Embodiments of the present invention provide an image processing method and apparatus, an electronic device, and a storage medium, so that the target category of a target object can be determined efficiently and accurately.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
acquiring an original image of a target object;
inputting the original image into a trained image recognition network model to obtain position information of a target tissue in the target object;
determining an index parameter of the target tissue in the original image based on the position information of the target tissue;
inputting each index parameter into a trained category identification network model, determining a probability value of the target object belonging to a target category, and determining whether the target object belongs to the target category based on the probability value.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the original image acquisition module is used for acquiring an original image of a target object;
the target position information determining module is used for inputting the original image into a trained image recognition network model to obtain the position information of a target tissue in the target object;
an index parameter determination module for determining an index parameter of the target tissue in the original image based on the position information;
and the probability value determining module is used for inputting each index parameter into a trained category identification network model, determining the probability value of the target object belonging to the target category, and determining whether the target object belongs to the target category based on the probability value.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the image processing method according to any one of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the image processing method according to any one of the embodiments of the present invention.
According to the technical solution of the embodiments of the present invention, the acquired original image of the target object is input into a trained image recognition network model, so that the position information of the target tissue in the target object can be obtained accurately, without relying on the doctor's empirical judgment, thereby avoiding misjudgment and the influence of subjective factors. The index parameters of the target tissue in the original image are then determined based on the position information of the target tissue, each index parameter is input into a trained category identification network model, and whether the target object belongs to the target category is determined based on the probability value, output by the category identification network model, that the target object belongs to the target category.
Drawings
FIG. 1 is a flowchart of an image processing method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to a second embodiment of the present invention;
FIG. 3 is a schematic processing flow chart of an image recognition network model according to a second embodiment of the present invention;
FIG. 4 is a flowchart of an image processing method according to a third embodiment of the present invention;
FIG. 5 is a schematic view of a partial structure of an eyeball according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram of the determination of the index parameter of the target tissue according to the third embodiment of the present invention;
FIG. 7 is a simplified diagram of index parameters of a target tissue according to a third embodiment of the present invention;
FIG. 8 is an AS-OCT diagram of the index parameters of the target tissue in the third embodiment of the present invention;
FIG. 9 is a flowchart of an image processing method in a fourth embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an image processing apparatus according to a fifth embodiment of the present invention;
FIG. 11 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention. The embodiment is applicable to determining the category of a target tissue. The method may be executed by an image processing apparatus, which may be implemented in software and/or hardware and may be configured on an electronic computing device. The method specifically includes the following steps:
and S110, acquiring an original image of the target object.
Illustratively, the target object may be an object on which an image scan is performed, for example a person. In embodiments of the present invention, the target object is preferably a specific scanned part, for example the abdomen or the chest. In the embodiments described below, the target object is a human eyeball.
The original image may be an image of the target object acquired by scanning it. For example, if the target object is an eyeball, the original image is a scanned image of the eyeball and may be an anterior segment optical coherence tomography (AS-OCT) image.
In the embodiment of the present invention, the original image may be a three-dimensional image or a two-dimensional image, which is not limited herein.
S120, inputting the original image into the trained image recognition network model to obtain the position information of the target tissue in the target object.
For example, the image recognition network model may be a model for recognizing an object in an input image, such as a deep-learning-based neural network model, e.g. a convolutional neural network.
In the embodiment of the present invention, the image recognition network model is not limited, and any model that can be used to recognize an object in an image input thereto belongs to the scope of the embodiment of the present invention.
The target tissue may be the tissue to be identified.
In the embodiment of the present invention, if the target object is an eyeball, the target tissue may be the scleral spur (SS) in the eyeball.
In current eye examinations the main concern is glaucoma, that is, whether the patient suffers from glaucoma. When a PACD examination is performed using AS-OCT images, the position of the scleral spur in the eyeball must first be acquired; clinical indicators (for example AOD250, AOD500, AOD750, TISA250, TISA500 and TISA750, where AOD is the angle opening distance and TISA is the trabecular-iris space area) are then obtained from the position of the scleral spur, and whether the patient suffers from PACD can be determined from these indicators. The scleral spur is therefore an important structure in the examination of PACD, and the clinical indicators AOD250, AOD500, AOD750, TISA250, TISA500, TISA750 and the like are among the main reference indexes for judging whether a patient has PACD.
After the original image is input into the trained image recognition network model, the position information of the target tissue in the target object can be obtained based on the image recognition network model.
In the technical solution of the embodiments of the present invention, inputting the original image into the trained image recognition network model has the advantage that the position information of the target tissue in the target object can be obtained accurately, without relying on the doctor's empirical judgment, thereby avoiding misjudgment and the influence of subjective factors.
S130, determining index parameters of the target tissue in the original image based on the position information of the target tissue.
For example, the index parameter may be an index parameter corresponding to the target tissue.
In the embodiment of the present invention, the target tissue is the scleral spur, and the index parameters of the target tissue may be the angle opening distance, the trabecular-iris angle, the trabecular-iris space area, and the like.
After the position information of the target tissue is determined, it can be input into a pre-developed anterior segment parameter automatic calculation program, which obtains the index parameters of the target tissue automatically from the position information.
In the embodiment of the present invention, the anterior segment parameter automatic calculation program may be developed in advance by an operator; no specific program is prescribed here, and any program that can obtain the index parameters of the target tissue from its position information falls within the protection scope of the embodiments of the present invention.
S140, inputting the index parameters into the trained category identification network model, determining the probability value of the target object belonging to the target category, and determining whether the target object belongs to the target category based on the probability value.
For example, the category identification network model may be a model for identifying the category of the target object, for example a deep-learning-based neural network model such as a convolutional neural network, a classifier, or a multi-layer perceptron.
In the embodiment of the present invention, the category identification network model is not limited, and any model that can be used to identify the category of the target object according to the index parameters of the target tissue falls within the protection scope of the embodiments of the present invention.
The target category may be the category to which the target object belongs. For example, if the target object is an eyeball and the target tissue is the scleral spur, the target category here may be PACD.
After the index parameters of the target tissue are determined, each index parameter can be input into the trained category identification network model. Based on the model, the probability value that the target object belongs to the target category is determined, and whether the target object belongs to the target category is then determined based on that probability value.
In the technical solution of the embodiments of the present invention, inputting the index parameters into the trained category identification network model has the advantage that whether the target object belongs to the target category can be determined accurately, without relying on the doctor's empirical judgment, thereby avoiding misjudgment and the influence of subjective factors.
According to the technical solution of the embodiments of the present invention, the acquired original image of the target object is input into a trained image recognition network model, so that the position information of the target tissue in the target object can be obtained accurately, without relying on the doctor's empirical judgment, thereby avoiding misjudgment and the influence of subjective factors. The index parameters of the target tissue in the original image are then determined based on the position information of the target tissue, each index parameter is input into a trained category identification network model, and whether the target object belongs to the target category is determined based on the probability value, output by the category identification network model, that the target object belongs to the target category.
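To make the flow concrete, here is a minimal inference sketch of the pipeline just described, in Python/PyTorch. All names (seg_model, cls_model, compute_index_parameters) are hypothetical placeholders standing in for the trained image recognition model, the trained category identification model, and the anterior segment parameter calculation program; the patent does not disclose an implementation.

```python
import torch

def classify_target_object(original_image, seg_model, cls_model,
                           compute_index_parameters, prob_threshold=0.5):
    """Sketch of the two-stage pipeline: locate the target tissue,
    derive its index parameters, then score the target category."""
    with torch.no_grad():
        # S120: heatmap over the target tissue (e.g. scleral spur);
        # the peak gives its position information
        heatmap = seg_model(original_image.unsqueeze(0))[0, 0]
        idx = torch.argmax(heatmap)
        position = (int(idx % heatmap.shape[1]),
                    int(idx // heatmap.shape[1]))  # (x, y) in pixels

        # S130: index parameters (AOD, TIA, TISA, ...) from the position
        params = compute_index_parameters(original_image, position)

        # S140: probability that the target object belongs to the target category
        prob = torch.sigmoid(cls_model(params)).item()

    return prob, prob >= prob_threshold
```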
Example two
Fig. 2 is a flowchart of an image processing method according to a second embodiment of the present invention, and the second embodiment of the present invention may be combined with various alternatives in the above embodiments. In this embodiment of the present invention, optionally, the training method for the image recognition network model includes: acquiring at least one group of historical images, wherein each group of historical images comprises: historical scanned images of the target object, and historical position information of the target tissue in the historical scanned images; inputting each group of historical scanned images into an image recognition network model to be trained, and determining predicted position information of a target tissue in each historical scanned image according to an output result of the image recognition network model; determining a first loss function based on each predicted position information and each historical position information corresponding to each historical scanned image; and adjusting parameters of the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that the training of the image recognition network model is finished.
As shown in fig. 2, the method of the embodiment of the present invention specifically includes the following steps:
S210, acquiring at least one group of historical images, wherein each group of historical images comprises: historical scan images of the target object, and historical location information of the target tissue in the historical scan images.
Illustratively, the image recognition network model is trained based on a plurality of sets of historical images.
A group of historical images may comprise an image acquired by a previous scan of the target object together with the location information of the target tissue determined from that image.
Namely, each group of history images comprises: historical scan images of the target object, and historical location information of the target tissue in the historical scan images.
Specifically, taking one group of historical images as an example, with the target object being an eyeball, the historical scan image may be a previously acquired scan image of the target object, for example a previously acquired AS-OCT image of a patient's eyeball.
The historical location information may be the location information of the target tissue in the historical scan image, determined from that image. For example, a doctor may examine the historical scan image and mark the position of the target tissue in it, thereby obtaining the historical location information; when the target object is an eyeball, the target tissue is the scleral spur.
When the image recognition network model is trained, firstly, a plurality of groups of historical images are acquired so as to train the image recognition network model by using the plurality of groups of historical images.
S220, inputting each group of historical scanned images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanned image according to the output result of the image recognition network model.
For example, for any historical scan image, the predicted position information may be the position of the target tissue in that image as predicted by the image recognition network model to be trained.
In the embodiment of the present invention, reference is made to the processing flow diagram of the image recognition network model in fig. 3. For any historical scan image, the output of the image recognition network model may be an image in which the target tissue is marked in the historical scan image, i.e. the rightmost diagram in stage1 in fig. 3, where the target tissue is framed by box a.
It should be noted that, in the embodiments of the present invention, the target object is always illustrated as an eyeball and the target tissue as the scleral spur; according to the anatomical structure of the eyeball, there is one scleral spur on each of the left and right sides. The specific anatomy of the eyeball is prior art and is not described in detail here. A specific way to identify the scleral spur is: the corneal endothelial layer is seen as one line, the inner scleral wall as another, and the intersection of the two lines is the scleral spur.
After the groups of historical images are acquired, the historical scan images in each group are input into the image recognition network model to be trained; the leftmost image in fig. 3 is one such historical scan image, in which the target tissue is enclosed by a box.
It should be noted that, in the embodiment of the present invention, after the groups of historical scan images are acquired and before they are input into the image recognition network model to be trained, the historical scan images may be preprocessed. A specific preprocessing may be to down-sample the historical scan images to reduce the processing load on the computer.
Optionally, the pre-processing may be: and converting each historical scanned image into a gray image, and sampling the size of each historical scanned image after being converted into the gray image to obtain each historical scanned image with the target size.
For example, the target size may be the size to which each historical scan image, after conversion into a grayscale image, is down-sampled according to the user's requirement (or the requirement of the image recognition network model to be trained).
In the embodiment of the present invention, the historical scan images are 3-channel AS-OCT images of 2132 x 1866 pixels; to reduce the processing load on the computer, the data size needs to be reduced. A specific way may be to convert each historical scan image into a grayscale image, so that only the data of one channel need be taken subsequently, and then to down-sample each grayscale image to a certain size, for example 800 x 800 pixels.
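A minimal sketch of this preprocessing (assuming OpenCV; the interpolation choice is an assumption, not specified by the patent):

```python
import cv2

def preprocess(path, target_size=(800, 800)):
    """Convert an AS-OCT scan to a single-channel grayscale image
    and down-sample it to the target size (width, height)."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # keep one channel only
    return cv2.resize(image, target_size, interpolation=cv2.INTER_AREA)
```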
In the embodiment of the invention, in order to obtain more accurate predicted position information of the target tissue and reduce the quantization error of the positioning of the target tissue, a certain strategy can be adopted. The specific strategy is as follows:
optionally, the predicted position information of the target tissue in each historical scanned image is determined according to the output result of the image recognition network model, and specifically, the predicted position information may be: determining the original position information of the target tissue in the output result of any one of the historical scanned images; based on the original position information of the target tissue, cutting the output image to obtain a cut scanning image; based on the intermediate position information of the target tissue in the cropped scanned image, the intermediate position information is mapped into the output image, and the predicted position information of the target tissue is determined.
For example, the original location information may be the location information of the target tissue in the historical scan image determined by the result output by the image recognition network model.
In the embodiment of the present invention, the output result is an output image including the original position information mark. I.e. the result of the image recognition network model output is an image comprising the original location information tag, like the rightmost graph in stage1 in fig. 3.
The cropped scanned image may be an image obtained by cropping an image output from the image recognition network model.
In the embodiment of the present invention, since the cropped scanned image is obtained by cropping the image output from the image recognition network model, the image output from the image recognition network model has the target tissue, and correspondingly, the cropped scanned image also has the target tissue.
The intermediate position information may be position information of the target tissue in the cropped scan image.
As shown in fig. 3, after the rightmost image (the output image) in stage1 is obtained, it is cropped to obtain the cropped scan images. Specifically, the left and right target tissues in the output image are each taken as a reference; that is, the left target tissue and the right target tissue are cropped separately, yielding the cropped scan images (the leftmost images in stage2 in fig. 3, where the upper and lower images are the cropped scan images of the left and right target tissues, respectively).
After the cropped scanned image is obtained, position information of the target tissue in the cropped scanned image, that is, intermediate position information, is obtained from the cropped scanned image, that is, the rightmost image in stage2 in fig. 3 is obtained. In the rightmost drawing in stage2 in fig. 3, the upper drawing is the positional information of the left target tissue in the cropped scan image of the left target tissue, and the lower drawing is the positional information of the right target tissue in the cropped scan image of the right target tissue.
After the intermediate position information of the target tissue is obtained, the intermediate position information of the target tissue is mapped to the image output by the initial image recognition network model, and the predicted position information of the target tissue can be obtained, so that the obtained predicted position information of the target tissue is more accurate.
It can be understood that the above determination of the predicted position information of the target tissue is divided into two stages (stage1 and stage2). Stage1 obtains rough position information of the target tissue in the historical scan image; stage2 crops the output image around the target tissue according to the position information obtained in stage1, so as to obtain an image smaller than the output image, i.e. the cropped scan image (for example, the output image may be an 800 x 800 pixel image and the cropped scan image a 400 x 400 pixel image). In this way the original resolution is retained, the position of the target tissue can be located accurately, and graphics card memory is saved. The position information of the target tissue in the cropped scan image, i.e. the intermediate position information, is then determined, and finally the intermediate position information is mapped back into the output image to obtain accurate predicted position information of the target tissue.
In actual operation, if the user finds that the predicted position information obtained after stage1 and stage2 is not accurate enough, the intermediate position information obtained at stage2 may be refined repeatedly; specifically, the refinement repeats stage2 (i.e. the cropped scan image is cropped again to obtain a still smaller image, and the position information of the target tissue in that smaller image is determined) until predicted position information satisfactory to the user is obtained.
In the embodiment of the present invention, the predicted position information of the target tissue may be determined from the intermediate position information obtained in stage2 based on the following formula (the formula image is not reproduced in this text; the expression below is reconstructed from the variable definitions, assuming the crop window is centered on the stage1 position):

$$P = P_{S1} - \frac{\mathrm{size}}{2} + P_{S2}$$

where $P$ is the predicted position information, $P_{S1}$ is the original position information obtained in stage1, $P_{S2}$ is the intermediate position information obtained in stage2, and $\mathrm{size}$ is the pixel size of the cropped scan image.
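Under that centered-crop assumption, the mapping is a few lines (a sketch, not the patent's code):

```python
def map_to_output_image(p_s1, p_s2, size):
    """Map the stage2 (intermediate) position back into the stage1
    output image, assuming the crop window of side `size` was centered
    on the stage1 position p_s1. Positions are (x, y) pixel tuples."""
    x1, y1 = p_s1          # rough position from stage1
    x2, y2 = p_s2          # position inside the cropped image
    return (x1 - size / 2 + x2, y1 - size / 2 + y2)
```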
In the embodiment of the invention, the network models of stage1 and stage2 are both the segmentation network UNet. The UNet structure consists of an encoder formed by four convolution-pooling stages and a decoder formed by four upsampling stages; encoder and decoder layers at the same level are connected by skip connections, so that image information is passed into the deeper layers of the network.
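As an illustration, a minimal PyTorch sketch of a four-level UNet of this kind follows; the channel widths and other hyperparameters are assumptions, since the patent fixes only the four-stage encoder/decoder structure with skip connections.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    def __init__(self, c_in=1, c_out=1, widths=(32, 64, 128, 256)):
        super().__init__()
        self.enc = nn.ModuleList()
        c = c_in
        for w in widths:                      # 4 convolution-pooling stages
            self.enc.append(conv_block(c, w))
            c = w
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(widths[-1], widths[-1] * 2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        c = widths[-1] * 2
        for w in reversed(widths):            # 4 upsampling stages
            self.up.append(nn.ConvTranspose2d(c, w, 2, stride=2))
            self.dec.append(conv_block(w * 2, w))  # *2: skip concat
            c = w
        self.head = nn.Conv2d(widths[0], c_out, 1)

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)                   # saved for the skip connection
            x = self.pool(x)
        x = self.mid(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)                   # heatmap over the target tissue
```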
In an embodiment of the present invention, the labels of stage1 and stage2 are generated as Gaussian distributions centered on the location of the target tissue.
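A sketch of such a label generator follows; representing the label as a Gaussian heatmap centered on the annotated position is the natural reading, and the standard deviation is an illustrative assumption.

```python
import numpy as np

def gaussian_label(h, w, cx, cy, sigma=10.0):
    """Heatmap of shape (h, w) with a Gaussian peak at the target
    tissue position (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
```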
S230, a first loss function is determined based on the respective predicted position information and the respective historical position information corresponding to the respective historical scanned images.
For example, the first loss function may be a loss function of the image recognition network model to be trained, which is determined based on each predicted position information and each historical position information corresponding to each historical scan image.
After the predicted position information corresponding to each historical scan image is obtained, it is compared with the historical position information corresponding to that image, and the loss function of the image recognition network model to be trained, i.e. the first loss function, is determined from the comparison results.
S240, parameter adjustment is carried out on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and it is determined that training of the image recognition network model is completed.
For example, the first preset loss function threshold may be a preset threshold of the first loss function; when the first loss function is smaller than this threshold, the image recognition network model to be trained is considered trained.
The parameters of the image recognition network model to be trained are adjusted according to the obtained first loss function until the first loss function obtained in some iteration is smaller than the first preset loss function threshold, at which point training of the image recognition network model is determined to be complete.
Therefore, the original image of the subsequent target object can be processed based on the trained image recognition network model so as to quickly and accurately obtain the position information of the target tissue in the target object.
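The training procedure above (iterate, compute the first loss function, stop once it falls below the first preset threshold) can be sketched as follows; the MSE loss against the Gaussian heatmap labels and the optimizer settings are assumptions, not specified by the patent.

```python
import torch
from torch import optim

def train_recognition_model(model, loader, make_labels, threshold=1e-3,
                            max_epochs=100):
    """Train until the loss of some iteration falls below the first
    preset loss function threshold. `make_labels` turns historical
    position information into Gaussian heatmap labels (see above)."""
    opt = optim.Adam(model.parameters(), lr=1e-4)
    mse = torch.nn.MSELoss()
    for epoch in range(max_epochs):
        for images, positions in loader:
            pred = model(images)
            loss = mse(pred, make_labels(positions))  # first loss function
            opt.zero_grad()
            loss.backward()
            opt.step()
            if loss.item() < threshold:               # stopping rule
                return model
    return model
```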
S250, acquiring an original image of the target object.
S260, inputting the original image into the trained image recognition network model to obtain the position information of the target tissue in the target object.
S270, determining index parameters of the target tissue in the original image based on the position information of the target tissue.
S280, inputting the index parameters into the trained category identification network model, determining the probability value of the target object belonging to the target category, and determining whether the target object belongs to the target category based on the probability value.
According to the technical solution of the embodiments of the present invention, the image recognition network model is trained with multiple groups of historical images to obtain a trained image recognition network model. Original images of subsequent target objects can then be processed with the trained model to obtain the position information of the target tissue quickly and accurately, saving image recognition time and improving work efficiency. At the same time, because the position information of the target tissue is obtained with the image recognition network model, it does not need to be determined from the doctor's empirical judgment, and misjudgment and the influence of subjective factors are avoided.
Example three
Fig. 4 is a flowchart of an image processing method according to a third embodiment of the present invention; this embodiment may be combined with the various alternatives in the foregoing embodiments. In this embodiment, optionally, determining the index parameters of the target tissue based on the position information includes: determining the trabecular-iris angle (TIA) based on the position information of the target tissue and the externally input angle opening distance (AOD); and determining the trabecular-iris space area (TISA) based on the position information of the target tissue and the angle opening distance.
As shown in fig. 4, the method of the embodiment of the present invention specifically includes the following steps:
S310, acquiring at least one group of historical images, wherein each group of historical images comprises: historical scan images of the target object, and historical location information of the target tissue in the historical scan images.
S320, inputting each group of historical scanned images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanned image according to the output result of the image recognition network model.
S330, a first loss function is determined based on the respective predicted position information and the respective historical position information corresponding to the respective historical scanned images.
S340, adjusting parameters of the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that the training of the image recognition network model is finished.
S350, acquiring an original image of the target object.
S360, inputting the original image into the trained image recognition network model to obtain the position information of the target tissue in the target object.
S370, determining the trabecular-iris angle (TIA) based on the position information of the target tissue and the externally input angle opening distance (AOD), and determining the trabecular-iris space area (TISA) based on the position information of the target tissue and the angle opening distance.
For example, in the embodiment of the present invention, the index parameters of the target tissue may be, but are not limited to, at least one of the following: the trabecular-iris angle and the trabecular-iris space area.
In the embodiment of the present invention, the target tissue is exemplified by scleral spur.
Referring to fig. 5, which schematically shows a partial structure of an eyeball: in fig. 5, 1 is the cornea, 2 the scleral spur, 3 the sclera, 4 the ciliary body, and 5 the iris.
Fig. 6 is a schematic diagram of determining the index parameters of the target tissue for the eyeball structure illustrated in fig. 5; in fig. 6, 2 is the scleral spur, 3 the sclera, and 4 the ciliary body. In fig. 6 (a), the angle opening distance (AOD) may be selected according to the user's requirement, for example 500 µm or 750 µm, which is not limited here.
As shown in fig. 6 (a), AODN is the angle opening distance measured at a distance N extending outward from the scleral spur; for example, when N is 500, the parameter is AOD500. Extending outward here means extending to the right in fig. 6 (a).
As shown in fig. 6 (b), the inner corneal layer and the inner iris layer are extended according to the AOD value; the two extension lines (extension line P and extension line Q in fig. 6 (b)) intersect at a point, and the included angle between them is the trabecular-iris angle (TIA).
As shown in fig. 6 (c), according to the AOD value and the position information of the target tissue, the area of the region bounded by the target tissue and the AOD chord is taken as the trabecular-iris space area (TISA).
According to the above calculation methods, each index parameter of the target tissue can be obtained. Fig. 7 is a simplified diagram of the index parameters of the target tissue, and fig. 8 is an AS-OCT diagram of the index parameters; in fig. 8, SS is the scleral spur.
In this way, the index parameters of the target tissue can be calculated from its position information, so that whether the target object belongs to the target category can subsequently be judged based on those index parameters. In addition, the open or closed state of the anterior chamber angle can be judged roughly.
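For illustration, these geometric definitions can be sketched as follows, under simplifying assumptions the patent does not state: the inner-cornea and inner-iris boundaries are given as sampled curves with increasing x, the chamber angle opens to the right of the scleral spur, the AOD chord is taken vertically, and the TIA apex is approximated at the spur.

```python
import numpy as np

def index_parameters(ss, cornea, iris, n_um=500, um_per_px=1.0):
    """AOD, TIA and TISA from the scleral spur position `ss` (x, y)
    and sampled inner-cornea / inner-iris boundary curves, given as
    (N, 2) arrays of (x, y) points with increasing x."""
    def y_at(curve, x):
        return np.interp(x, curve[:, 0], curve[:, 1])

    x_ss = ss[0]
    x_n = x_ss + n_um / um_per_px              # N um outward from the spur

    # AOD: chord from the inner cornea to the iris at distance N
    aod = abs(y_at(iris, x_n) - y_at(cornea, x_n)) * um_per_px

    # TIA: angle between the lines from the spur to the chord endpoints
    v1 = np.array([x_n - x_ss, y_at(cornea, x_n) - ss[1]])
    v2 = np.array([x_n - x_ss, y_at(iris, x_n) - ss[1]])
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    tia = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # TISA: area between the two boundaries from the spur to the chord
    xs = np.linspace(x_ss, x_n, 200)
    gap = np.abs(y_at(iris, xs) - y_at(cornea, xs))
    tisa = np.trapz(gap, xs) * um_per_px ** 2

    return aod, tia, tisa
```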
S380, inputting the index parameters into the trained category identification network model, determining the probability value that the target object belongs to the target category, and determining whether the target object belongs to the target category based on the probability value.
According to the technical solution of the embodiments of the present invention, the trabecular-iris angle is determined from the position information of the target tissue and the externally input angle opening distance, and the trabecular-iris space area is determined from the position information of the target tissue and the angle opening distance, so that the index parameters of the target tissue can be calculated from its position information, and whether the target object belongs to the target category can subsequently be judged based on those index parameters.
Example four
Fig. 9 is a flowchart of an image processing method according to a fourth embodiment of the present invention; this embodiment may be combined with the various alternatives in the above embodiments. In this embodiment, optionally, the training method for the category identification network model includes: acquiring multiple groups of historical parameter information, wherein each group comprises historical index parameters and the history label of the target object corresponding to those parameters, the history label being either that the target object belongs to the target category or that it does not; inputting each group of historical parameter information into a category identification network model to be trained, and determining a calculated probability value that the target object corresponding to each historical index parameter belongs to the target category; determining the calculated history label corresponding to each historical index parameter based on the calculated probability value; determining a second loss function based on each calculated history label and the history label corresponding to each historical index parameter; and adjusting parameters of the category identification network model based on the second loss function until the second loss function of some iteration is smaller than a second preset loss function threshold, whereupon training of the category identification network model is determined to be complete.
As shown in fig. 9, the method of the embodiment of the present invention specifically includes the following steps:
S401, acquiring at least one group of historical images, wherein each group of historical images comprises: historical scan images of the target object, and historical location information of the target tissue in the historical scan images.
S402, inputting each group of historical scanned images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanned image according to the output result of the image recognition network model.
S403, a first loss function is determined based on the respective predicted position information and the respective historical position information corresponding to the respective historical scanned images.
S404, parameter adjustment is carried out on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and the fact that training of the image recognition network model is completed is determined.
S405, obtaining multiple sets of historical parameter information, wherein each set of historical parameter information comprises: the method comprises the following steps of obtaining historical instruction parameters and historical labels of target objects corresponding to the historical instruction parameters, wherein the historical labels comprise: the target object belongs to a target class and the target object does not belong to the target class.
For example, before the probability value that the target object belongs to the target category is determined using the category identification network model, the model is first trained, so that the trained category identification network model can then be used for this determination.
The historical parameter information may be information related to the acquired index parameters of the target tissue.
Specifically, taking any group of historical parameter information as an example, each group comprises: historical index parameters and the history label of the target object corresponding to those parameters.
The historical index parameters may be previously acquired index parameters of the target tissue, for example index parameters obtained from the position information of the target tissue in a historical scan image.
The history label may be the label of the target object corresponding to the historical index parameters, where the label is either: the target object belongs to the target category, or the target object does not belong to the target category.
S406, inputting each group of historical parameter information into the category identification network model to be trained, and determining the calculated probability value that the target object corresponding to each historical index parameter belongs to the target category.
For example, the calculated probability value may be the probability, predicted by the category identification network model, that the target object corresponding to the historical index parameters belongs to the target category.
The acquired groups of historical parameter information are input into the category identification network model to be trained, and the calculated probability value that the target object corresponding to each historical index parameter belongs to the target category is determined based on the model.
S407, determining the calculated history label corresponding to each historical index parameter based on the calculated probability value.
For example, the calculated history label may be the label corresponding to a historical index parameter, determined based on the calculated probability value.
The calculated history label corresponding to each historical index parameter is determined from the obtained calculated probability value.
Specifically, a threshold of the calculated probability value is preset. When the obtained calculated probability value is greater than this threshold, the label corresponding to the historical index parameters is determined as: the target object belongs to the target category. When the obtained calculated probability value is smaller than the threshold, the label is determined as: the target object does not belong to the target category.
For example, suppose the calculated probability value obtained for certain historical index parameters from the category identification network model to be trained is 0.8, the preset threshold of the calculated probability value is 0.5, the target object is an eyeball, the target tissue is the scleral spur, and the target category is PACD. Since 0.8 > 0.5, the target object (eyeball) corresponding to these historical index parameters is judged to have PACD.
S408, determining a second loss function based on the history labels and the calculated history labels corresponding to the historical index parameters.
For example, the second loss function may be the loss function of the category identification network model to be trained, determined based on the calculated history labels and the history labels corresponding to the historical index parameters.
After the calculated history labels corresponding to the historical index parameters are obtained, they are compared with the corresponding history labels, and the loss function of the category identification network model to be trained, i.e. the second loss function, is determined from the comparison result.
S409, adjusting parameters of the category identification network model based on the second loss function until the second loss function of some iteration is smaller than the second preset loss function threshold, whereupon training of the category identification network model is determined to be complete.
For example, the second preset loss function threshold may be a preset threshold of the second loss function; when the second loss function is smaller than this threshold, the category identification network model to be trained is considered trained.
The parameters of the category identification network model to be trained are adjusted according to the obtained second loss function until the second loss function obtained in some iteration is smaller than the second preset loss function threshold, at which point the model is determined to be trained.
In this way, the index parameters of subsequent target objects can be processed based on the trained category identification network model, so that the probability value that the target object belongs to the target category is obtained quickly and accurately, and whether the target object belongs to the target category is determined from that probability value.
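A minimal sketch of such a category identification network and one training iteration follows; the multi-layer perceptron architecture, the binary cross-entropy loss and the layer sizes are illustrative assumptions consistent with, but not prescribed by, the description.

```python
import torch
import torch.nn as nn

class CategoryNet(nn.Module):
    """Maps index parameters (e.g. AOD250/500/750, TISA250/500/750)
    to the probability that the target object belongs to the target
    category (e.g. PACD)."""
    def __init__(self, n_params=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 32), nn.ReLU(),
            nn.Linear(32, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))     # calculated probability value

def train_step(model, opt, params, labels):
    """One iteration; the second loss function compares the model's
    predictions against the recorded history labels via BCE."""
    prob = model(params)
    loss = nn.functional.binary_cross_entropy(prob, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```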
S410, acquiring an original image of the target object.
S411, inputting the original image into the trained image recognition network model to obtain the position information of the target tissue in the target object.
S412, determining the trabecular-iris angle based on the position information of the target tissue and the externally input angle opening distance, and determining the trabecular-iris space area based on the position information of the target tissue and the angle opening distance.
S413, inputting each index parameter into the trained category identification network model, determining the probability value that the target object belongs to the target category, and determining whether the target object belongs to the target category based on the probability value.
Optionally, the determining, based on the probability value, whether the target object belongs to the target category includes: when the probability value is greater than or equal to a preset probability threshold value, determining that the target object belongs to a target category; and when the probability value is smaller than a preset probability threshold value, determining that the target object does not belong to the target category.
For example, the preset probability threshold may be a preset threshold of a probability value that the target object belongs to the target category.
When the probability value of the obtained target object belonging to the target category is greater than or equal to a preset probability threshold value, determining that the target object belongs to the target category; and when the probability value of the obtained target object belonging to the target category is smaller than a preset probability threshold value, determining that the target object does not belong to the target category.
According to the technical solution of the embodiments of the present invention, the category identification network model is trained with multiple groups of historical parameter information to obtain a trained category identification network model. Index parameters of subsequent target objects can then be processed with the trained model, the probability value that the target object belongs to the target category is obtained quickly and accurately, and whether it belongs to the target category is determined from that probability value. This saves the time required to determine the target category and improves work efficiency; at the same time, because the category identification network model is used, the determination does not depend on the doctor's empirical judgment, and misjudgment and the influence of subjective factors are avoided.
EXAMPLE five
Fig. 10 is a schematic structural diagram of an image processing apparatus according to a fifth embodiment of the present invention, and as shown in fig. 10, the apparatus includes: an original image acquisition module 31, a target position information determination module 32, an index parameter determination module 33, and a probability value determination module 34.
The original image acquiring module 31 is configured to acquire an original image of a target object;
a target position information determining module 32, configured to input the original image into a trained image recognition network model to obtain position information of a target tissue in the target object;
an index parameter determination module 33, configured to determine an index parameter of the target tissue in the original image based on the position information;
a probability value determining module 34, configured to input each of the index parameters into a trained class recognition network model, determine a probability value that the target object belongs to a target category, and determine whether the target object belongs to the target category based on the probability value.
On the basis of the technical scheme of the embodiment of the invention, the device also comprises:
a history image obtaining module, configured to obtain at least one group of history images, where each group of history images includes: a historical scan image of the target object, and historical location information of the target tissue in the historical scan image;
the predicted position information determining module is used for inputting each group of historical scanning images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanning image according to the output result of the image recognition network model;
a first loss function determining module, configured to determine a first loss function based on each piece of predicted position information and each piece of historical position information corresponding to each piece of historical scan image;
and the image recognition network model training completion determining module is used for carrying out parameter adjustment on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that the image recognition network model training is completed.
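The image recognition network model's training follows the same stop-on-threshold pattern as the class recognition model sketched earlier; only the assumed model output and loss differ. The sketch below assumes the model regresses (x, y) positions of the target tissue and that the first loss function is mean squared error; neither is fixed by the embodiment.

```python
import torch
import torch.nn as nn

def train_image_model(model, scans, historical_positions,
                      threshold=1.0, lr=1e-4, max_iters=10000):
    """Adjust parameters until the first loss function of any iteration is
    smaller than the first preset loss function threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    first_loss = nn.MSELoss()  # assumed form of the first loss function
    for _ in range(max_iters):
        optimizer.zero_grad()
        predicted = model(scans)  # predicted position information
        loss = first_loss(predicted, historical_positions)
        if loss.item() < threshold:  # training is determined to be complete
            break
        loss.backward()
        optimizer.step()
    return model
```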
On the basis of the technical scheme of the embodiment of the invention, the device also comprises:
and the image preprocessing module is used for converting each historical scanning image into a gray image and sampling the size of each historical scanning image converted into the gray image to obtain each historical scanning image with a target size.
On the basis of the technical scheme of the embodiment of the invention, the prediction position information determining module comprises:
an original position information determining unit, configured to determine, for an output result of any one of the historical scan images, original position information of the target tissue in the output result, where the output result is an output image including an original position information mark;
a cropped scan image determining unit, configured to crop the output image based on the original position information of the target tissue to obtain a cropped scan image, where the cropped scan image includes the target tissue;
a predicted position information determining unit, configured to map the intermediate position information of the target tissue in the cropped scan image back into the output image, thereby determining the predicted position information of the target tissue.
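These three units describe a coarse-to-fine localization: crop around the coarse (original) position, refine inside the crop, then translate the refined coordinates back by the crop offset. A minimal sketch, assuming a square crop margin and a refinement model that returns (x, y) within the crop; both are assumptions:

```python
import numpy as np

def refine_position(output_image: np.ndarray, original_xy, refine_model,
                    margin: int = 32):
    """Coarse-to-fine refinement of the target tissue position."""
    x, y = original_xy
    top, left = max(0, y - margin), max(0, x - margin)
    crop = output_image[top:y + margin, left:x + margin]  # cropped scan image
    ix, iy = refine_model(crop)  # intermediate position within the crop
    return left + ix, top + iy   # predicted position in the output image
```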
Optionally, the index parameter includes at least one of the following items: the trabecular meshwork iris angle and the trabecular meshwork iris area.
On the basis of the technical solution of the embodiment of the present invention, the index parameter determining module 33 is specifically configured to:
determining the trabecular meshwork iris angle based on the position information of the target tissue and the externally input angle opening distance; and determining the trabecular meshwork iris area based on the position information of the target tissue and the angle opening distance.
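The patent does not give formulas for these two index parameters, so the following is a heavily hedged geometric sketch: it assumes the angle is the apex angle at the angle recess between a trabecular-side point (placed using the angle opening distance) and an iris-side point, and that the area is the shoelace-formula area of the boundary polygon between those points. Every function name and convention here is an assumption, not the claimed computation.

```python
import numpy as np

def trabecular_iris_angle(recess, trabecular_pt, iris_pt):
    """Apex angle (degrees) at the angle recess between the two boundary points."""
    u = np.asarray(trabecular_pt, float) - np.asarray(recess, float)
    v = np.asarray(iris_pt, float) - np.asarray(recess, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def trabecular_iris_area(boundary_pts):
    """Shoelace area of the closed boundary polygon between the tissues."""
    p = np.asarray(boundary_pts, float)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
```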
On the basis of the technical scheme of the embodiment of the invention, the device also comprises:
a history parameter obtaining module, configured to obtain multiple sets of historical parameter information, where each set of historical parameter information includes: historical index parameters and a historical label of the target object corresponding to each historical index parameter, where the historical label is one of: the target object belongs to the target category, or the target object does not belong to the target category;
a calculated probability value determining module, configured to input each set of historical parameter information into a class recognition network model to be trained, and determine a calculated probability value that the target object corresponding to each historical index parameter belongs to the target category;
a calculated historical label determining module, configured to determine a calculated historical label corresponding to each historical index parameter based on the calculated probability value;
a second loss function determining module, configured to determine a second loss function based on each calculated historical label and the historical label corresponding to each historical index parameter;
and the class recognition network model training completion determining module is used for carrying out parameter adjustment on the class recognition network model based on the second loss function until the second loss function of any iteration is smaller than a second preset loss function threshold value, and determining that the class recognition network model training is completed.
On the basis of the technical solution of the embodiment of the present invention, the probability value determining module 34 includes:
the first judgment unit is used for determining that the target object belongs to a target category when the probability value is greater than or equal to a preset probability threshold;
and the second judging unit is used for determining that the target object does not belong to the target category when the probability value is smaller than the preset probability threshold.
The image processing device provided by the embodiment of the invention can execute the image processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE six
Fig. 11 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention. As shown in fig. 11, the electronic device includes a processor 70, a memory 71, an input device 72, and an output device 73; the number of processors 70 in the electronic device may be one or more, and one processor 70 is taken as an example in fig. 11; the processor 70, the memory 71, the input device 72, and the output device 73 in the electronic device may be connected by a bus or by other means, and connection by a bus is taken as the example in fig. 11.
The memory 71, as a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiment of the present invention (for example, the original image acquisition module 31, the target position information determination module 32, the index parameter determination module 33, and the probability value determination module 34). The processor 70 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the memory 71, that is, implements the image processing method described above.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 71 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 71 may further include memory located remotely from the processor 70, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic apparatus. The output device 73 may include a display device such as a display screen.
EXAMPLE seven
An embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an image processing method.
Of course, the computer-executable instructions contained in the storage medium provided by the embodiment of the present invention are not limited to the method operations described above, and may also perform related operations in the image processing method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes instructions for enabling a computer electronic device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the image processing apparatus, the included units and modules are merely divided according to the functional logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. An image processing method, comprising:
acquiring an original image of a target object;
inputting the original image into a trained image recognition network model to obtain position information of a target tissue in the target object;
determining an index parameter of the target tissue in the original image based on the position information of the target tissue;
inputting each index parameter into a trained class recognition network model, determining a probability value that the target object belongs to a target category, and determining whether the target object belongs to the target category based on the probability value.
2. The method of claim 1, wherein the image recognition network model is trained based on a plurality of sets of historical images;
the training method of the image recognition network model comprises the following steps:
acquiring at least one group of historical images, wherein each group of historical images comprises: a historical scan image of the target object, and historical location information of the target tissue in the historical scan image;
inputting each group of historical scanning images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanning image according to the output result of the image recognition network model;
determining a first loss function based on each piece of predicted position information and each piece of historical position information corresponding to each piece of historical scanned image;
and adjusting parameters of the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that the training of the image recognition network model is finished.
3. The method of claim 2, wherein prior to said inputting each of said historical scan images into an image recognition network model to be trained, said method further comprises:
and converting each historical scanning image into a gray level image, and sampling the size of each historical scanning image after the historical scanning image is converted into the gray level image to obtain each historical scanning image with the target size.
4. The method of claim 2, wherein determining predicted location information of the target tissue in each of the historical scan images based on the output of the image recognition network model comprises:
determining, for an output result of any one of the historical scan images, original position information of the target tissue in the output result, wherein the output result is an output image including a mark of the original position information;
cropping the output image based on the original position information of the target tissue to obtain a cropped scan image, wherein the cropped scan image includes the target tissue;
and mapping intermediate position information of the target tissue in the cropped scan image back into the output image, and determining the predicted position information of the target tissue.
5. The method of claim 1, wherein the index parameter comprises at least one of: the trabecular meshwork iris angle and the trabecular meshwork iris area;
the determining an index parameter of the target tissue based on the position information comprises:
determining the trabecular meshwork iris angle based on the position information of the target tissue and the externally input angle opening distance;
and determining the trabecular meshwork iris area based on the position information of the target tissue and the angle opening distance.
6. The method of claim 1, wherein the class recognition network model is trained based on multiple groups of historical parameter information;
the training method of the class recognition network model comprises the following steps:
acquiring multiple groups of historical parameter information, wherein each group of historical parameter information comprises: historical index parameters and a historical label of the target object corresponding to each historical index parameter, wherein the historical label is one of: the target object belongs to the target category, or the target object does not belong to the target category;
inputting each group of historical parameter information into a class recognition network model to be trained, and determining a calculated probability value that the target object corresponding to each historical index parameter belongs to the target category;
determining a calculated historical label corresponding to each historical index parameter based on the calculated probability value;
determining a second loss function based on each calculated historical label and the historical label corresponding to each historical index parameter;
and adjusting parameters of the class recognition network model based on the second loss function until the second loss function of any iteration is smaller than a second preset loss function threshold, and determining that training of the class recognition network model is complete.
7. The method of claim 1, wherein the determining whether the target object belongs to a target class based on the probability value comprises:
when the probability value is greater than or equal to a preset probability threshold value, determining that the target object belongs to a target category;
and when the probability value is smaller than the preset probability threshold, determining that the target object does not belong to the target category.
8. An image processing apparatus characterized by comprising:
the original image acquisition module is used for acquiring an original image of a target object;
the target position information determining module is used for inputting the original image into a trained image recognition network model to obtain the position information of a target tissue in the target object;
an index parameter determination module for determining an index parameter of the target tissue in the original image based on the position information;
and the probability value determining module is used for inputting each index parameter into a trained class recognition network model, determining the probability value that the target object belongs to the target category, and determining whether the target object belongs to the target category based on the probability value.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the image processing method of any one of claims 1 to 7 when executed by a computer processor.
Priority Applications (2)
- CN202110221152.4A (CN112950577B): priority date 2021-02-26, filed 2021-02-26, "Image processing method, device, electronic equipment and storage medium"
- JP2021200306A (JP7257645B2): filed 2021-12-09, "Image processing method, device, electronic device and storage medium"
Publications (2)
- CN112950577A (application): published 2021-06-11
- CN112950577B (grant): published 2024-01-16
Family
ID=76246686
Family Applications (1)
- CN202110221152.4A: priority/filing date 2021-02-26, "Image processing method, device, electronic equipment and storage medium", granted and active (CN112950577B)
Country Status (2)
- JP: JP7257645B2
- CN: CN112950577B
Also Published As
- CN112950577B: 2024-01-16
- JP7257645B2: 2023-04-14
- JP2022132072A: 2022-09-07
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant