CN112950577B - Image processing method, device, electronic equipment and storage medium


Info

Publication number
CN112950577B
CN112950577B
Authority
CN
China
Prior art keywords
image
position information
target
history
determining
Prior art date
Legal status
Active
Application number
CN202110221152.4A
Other languages
Chinese (zh)
Other versions
CN112950577A
Inventor
刘江
刘鹏
东田理沙
Current Assignee
Southern University of Science and Technology
Original Assignee
Southern University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Southern University of Science and Technology
Priority to CN202110221152.4A
Publication of CN112950577A
Priority to JP2021200306A (JP7257645B2)
Application granted
Publication of CN112950577B

Classifications

    All classifications fall under G (Physics) > G06 (Computing; Calculating or Counting) > G06T (Image data processing or generation, in general):
    • G06T 7/0012 Image analysis; Inspection of images, e.g. flaw detection; Biomedical image inspection
    • G06T 7/70 Image analysis; Determining position or orientation of objects or cameras
    • G06T 2207/10101 Image acquisition modality; Tomographic images; Optical tomography; Optical coherence tomography [OCT]
    • G06T 2207/20081 Special algorithmic details; Training; Learning
    • G06T 2207/20084 Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/20132 Special algorithmic details; Image segmentation details; Image cropping
    • G06T 2207/30041 Subject of image; Biomedical image processing; Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The embodiment of the invention discloses an image processing method, an image processing device, an electronic device and a storage medium. The method comprises the following steps: acquiring an original image of a target object; inputting the original image into a trained image recognition network model to obtain the position information of a target tissue in the target object; determining index parameters of the target tissue in the original image based on the position information of the target tissue; and inputting each index parameter into a trained class identification network model, determining a probability value of the target object belonging to a target class, and determining whether the target object belongs to the target class based on the probability value. The target class of the target object is thereby determined efficiently and accurately.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
Embodiments of the present invention relate to image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
Glaucoma is one of the leading causes of blindness in the human eye. Glaucoma is broadly divided into three categories: childhood glaucoma, secondary glaucoma and primary glaucoma; primary glaucoma is further divided into primary open-angle glaucoma and primary angle-closure glaucoma. Primary angle closure disease (PACD) includes: (1) primary angle closure suspect (PACS); (2) primary angle closure (PAC); (3) primary angle closure glaucoma (PACG). PACG is an irreversible disease that is clinically manifested by elevated intraocular pressure, decreased vision and optic nerve damage. In the early stages of onset (the PACS and PAC stages), timely symptomatic treatment can prevent or delay the development of PACG. Early diagnosis of PACD is therefore very important.
At present, PACD is mainly diagnosed by gonioscopy, a method that requires anesthetizing and making contact with the patient's eye; after the eye images are acquired, the diagnosis relies on the subjective experience of the physician.
The above diagnostic method for PACD depends too heavily on the subjective experience of the physician and has poor reproducibility.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, an electronic device and a storage medium, so as to determine the target category of a target object efficiently and accurately.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an original image of a target object;
inputting the original image into a trained image recognition network model to obtain the position information of a target tissue in the target object;
determining index parameters of the target tissue in the original image based on the position information of the target tissue;
and inputting each index parameter into a trained class identification network model, determining a probability value of the target object belonging to a target class, and determining whether the target object belongs to the target class based on the probability value.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
The original image acquisition module is used for acquiring an original image of the target object;
the target position information determining module is used for inputting the original image into a trained image recognition network model to obtain the position information of target tissues in the target object;
an index parameter determining module, configured to determine an index parameter of the target tissue in the original image based on the location information;
the probability value determining module is used for inputting each index parameter into the trained class identification network model, determining the probability value of the target object belonging to the target class, and determining whether the target object belongs to the target class based on the probability value.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any one of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are used to perform the image processing method according to any of the embodiments of the present invention.
According to the technical scheme provided by the embodiment of the invention, the acquired original image of the target object is input into the trained image recognition network model, so that the position information of the target tissue in the target object can be obtained accurately without relying on a physician's empirical judgment, avoiding misjudgment and the influence of subjective factors. Index parameters of the target tissue in the original image are then determined based on the position information of the target tissue, each index parameter is input into the trained class identification network model, and whether the target object belongs to the target class is determined based on the probability value output by the class identification network model, thereby determining the target class of the target object efficiently and accurately.
Drawings
Fig. 1 is a flowchart of an image processing method in a first embodiment of the present invention;
fig. 2 is a flowchart of an image processing method in the second embodiment of the present invention;
FIG. 3 is a schematic diagram of a process flow of an image recognition network model in a second embodiment of the present invention;
fig. 4 is a flowchart of an image processing method in the third embodiment of the present invention;
Fig. 5 is a schematic view showing a partial structure of an eyeball in a third embodiment of the present invention;
FIG. 6 is a diagram showing the determination of index parameters of a target tissue according to a third embodiment of the present invention;
FIG. 7 is a simplified diagram of index parameters of a target tissue in accordance with a third embodiment of the present invention;
FIG. 8 is an AS-OCT graph of index parameters of a target tissue in accordance with a third embodiment of the present invention;
fig. 9 is a flowchart of an image processing method in the fourth embodiment of the present invention;
fig. 10 is a schematic diagram of the structure of an image processing apparatus in a fifth embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device in a sixth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, where the method may be applied to determining a category of a target tissue, and the method may be performed by an image processing apparatus, which may be implemented by software and/or hardware, and the image processing apparatus may be configured on an electronic computing device, and specifically includes the following steps:
S110, acquiring an original image of the target object.
The target object may be, for example, the object to be scanned, such as a person.
In the embodiment of the present invention, the target object is preferably a specific scanned region, for example a body part such as the abdomen or the chest.
In the embodiment of the present invention, the target object may be an eyeball of a person.
The original image may be an image of the target object acquired by scanning the target object. For example, if the target object is an eyeball, the original image is a scanned image of the eyeball, specifically, for example, an anterior segment optical coherence tomography (AS-OCT) image.
In the embodiment of the present invention, the original image may be a three-dimensional image or a two-dimensional image, which is not limited herein.
S120, inputting the original image into a trained image recognition network model to obtain the position information of the target tissue in the target object.
By way of example, the image recognition network model may be a model for recognizing a target in an input image, such as a deep-learning-based neural network model, e.g. a convolutional neural network.
In the embodiment of the present invention, the image recognition network model is not limited, and any model that can be used to recognize the object in the image input therein belongs to the protection scope of the embodiment of the present invention.
The target tissue may be the tissue to be identified.
In the embodiment of the present invention, if the target object is an eyeball, the target tissue may be a Scleral Spur (SS) in the eyeball.
In current eye examinations, the main concern is glaucoma, i.e. whether a patient suffers from glaucoma. When PACD examination is performed using AS-OCT images, the position of the scleral spur in the eyeball is first acquired, and clinical indexes (for example AOD250, AOD500, AOD750, TISA250, TISA500, TISA750, etc., where AOD is the angle opening distance and TISA is the trabecular iris space area) are obtained from the position of the scleral spur; whether a patient suffers from PACD can then be determined from the obtained clinical indexes. The scleral spur is therefore an important structure in PACD examination, and these clinical indexes, i.e. AOD250, AOD500, AOD750, TISA250, TISA500, TISA750 and the like, are among the main reference indexes for judging whether a patient suffers from PACD.
After the original image is input into the trained image recognition network model, the position information of the target tissue in the target object can be obtained based on the image recognition network model.
In the technical scheme of the embodiment of the invention, inputting the original image into the trained image recognition network model has the advantage that the position information of the target tissue in the target object can be obtained accurately, without relying on a physician's empirical judgment, thereby avoiding misjudgment and the influence of subjective factors.
S130, determining index parameters of the target tissue in the original image based on the position information of the target tissue.
Illustratively, an index parameter may be a parameter corresponding to the target tissue.
In the embodiment of the present invention, taking the target tissue as the scleral spur as an example, the index parameters of the target tissue may be the angle opening distance (AOD), the trabecular iris angle (TIA), the trabecular iris space area (TISA), and the like.
After the position information of the target tissue is determined, it can be input into a pre-developed automatic calculation program for anterior segment parameters, which automatically obtains the index parameters of the target tissue based on that position information.
In the embodiment of the present invention, the automatic calculation program for anterior segment parameters may be developed in advance by an operator; the specific program is not limited here, and any program that can obtain the index parameters of the target tissue from the position information of the target tissue belongs to the protection scope of the embodiment of the present invention.
S140, inputting each index parameter into the trained class identification network model, determining the probability value of the target object belonging to the target class, and determining whether the target object belongs to the target class based on the probability value.
The class identification network model may be, for example, a model for identifying the category of the target object, such as a deep-learning-based neural network model, e.g. a convolutional neural network, a classifier or a multi-layer perceptron.
In the embodiment of the present invention, the class identification network model is not limited, and any model that can be used to identify the class of the target object according to the index parameters of the target tissue belongs to the protection scope of the embodiment of the present invention.
The target class may be a class to which the target object belongs. For example, if the target object is an eyeball and the target tissue is scleral spur, the target class here may be PACD.
After the index parameters of the target tissue are determined, each index parameter can be input into the trained class identification network model, the probability value of the target object belonging to the target class can be determined based on the trained class identification network model, and whether the target object belongs to the target class can be determined based on the probability value.
In the technical scheme of the embodiment of the invention, inputting each index parameter into the trained class identification network model has the advantage that whether the target object belongs to the target class can be determined accurately, without relying on a physician's empirical judgment, thereby avoiding misjudgment and the influence of subjective factors.
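To make the flow of S110-S140 concrete, the following minimal sketch chains the steps together. The callables locate_tissue, compute_params and classifier are hypothetical stand-ins for the trained models and the parameter program; the embodiment does not tie them to any particular framework (PyTorch is assumed here).

```python
import torch

def classify_target_object(image, locate_tissue, compute_params, classifier, threshold=0.5):
    """Sketch of S110-S140: image -> tissue position -> index parameters -> class."""
    position = locate_tissue(image)               # S120: position information of the target tissue
    params = compute_params(image, position)      # S130: index parameter vector (a 1-D tensor)
    with torch.no_grad():
        prob = torch.sigmoid(classifier(params)).item()  # S140: probability of the target class
    return prob, prob >= threshold                # S140: class decision based on the probability
```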
According to the technical scheme provided by the embodiment of the invention, the acquired original image of the target object is input into the trained image recognition network model, so that the position information of the target tissue in the target object can be obtained accurately without relying on a physician's empirical judgment, avoiding misjudgment and the influence of subjective factors. Index parameters of the target tissue in the original image are then determined based on the position information of the target tissue, each index parameter is input into the trained class identification network model, and whether the target object belongs to the target class is determined based on the probability value output by the class identification network model, thereby determining the target class of the target object efficiently and accurately.
Example two
Fig. 2 is a flowchart of an image processing method according to a second embodiment of the present invention, and the embodiment of the present invention may be combined with each of the alternatives in the foregoing embodiments. In an embodiment of the present invention, optionally, the training method of the image recognition network model includes: acquiring at least one set of history images, wherein each set of history images comprises: historical scanning images of the target objects and historical position information of target tissues in the historical scanning images; inputting each group of historical scanning images into an image recognition network model to be trained, and determining the predicted position information of a target tissue in each historical scanning image according to the output result of the image recognition network model; determining a first loss function based on each predicted position information and each history position information corresponding to each history scanned image; and carrying out parameter adjustment on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that training of the image recognition network model is completed.
As shown in fig. 2, the method in the embodiment of the present invention specifically includes the following steps:
s210, acquiring at least one group of historical images, wherein each group of historical images comprises: a history scan image of the target object, and history location information of the target tissue in the history scan image.
The image recognition network model is illustratively trained based on a plurality of sets of historical images.
Each set of history images consists of an image acquired in a previous scan of a target object together with the position information of the target tissue obtained from that image.
That is, each set of history images includes: a history scan image of the target object, and the history position information of the target tissue in the history scan image.
Specifically, taking one set of history images as an example, the history scan image of the target object may be a previously acquired scan image of the target object, for example a previously acquired AS-OCT image of a patient's eyeball.
The history position information may be the position information of the target tissue in the history scan image, determined from that image. For example, when the target object is an eyeball and the target tissue is the scleral spur, a doctor may observe the history scan image and mark the position of the target tissue in it, thereby obtaining the history position information of the target tissue in the history scan image.
When training the image recognition network model, firstly, multiple groups of historical images are acquired so as to train the image recognition network model by utilizing the multiple groups of historical images.
S220, inputting each group of historical scanning images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanning image according to the output result of the image recognition network model.
For any history scan image, the predicted position information may be the position of the target tissue in the history scan image as predicted by the image recognition network model to be trained.
In an embodiment of the present invention, the processing flow of the image recognition network model is described with reference to fig. 3. For any history scan image, the output of the image recognition network model may be an image in which the target tissue is marked in the history scan image, i.e. the rightmost image of stage 1 in fig. 3, where box A marks the target tissue.
It should be noted that, in the embodiment of the present invention where the target object is an eyeball and the target tissue is exemplified by the scleral spur, the scleral spur appears on both the left and right sides according to the anatomical structure of the eyeball. The specific anatomy of the eyeball is prior art and will not be described in detail here. A specific way of identifying the scleral spur may be: the corneal endothelial layer is regarded as one line, the inner scleral layer as another line, and the intersection of the two lines is the scleral spur.
After a plurality of groups of history images are acquired, history scanning images in each group of history images are input into an image recognition network model to be trained, wherein the leftmost diagram in fig. 3 is any history scanning image, and the boxes in the leftmost diagram in fig. 3 are the target tissues.
It should be noted that, in the embodiment of the present invention, after a plurality of sets of history scan images are acquired, the history scan images may be preprocessed before being input into the image recognition network model to be trained. A specific pre-processing may be to downsample the history image to reduce the processing pressure of the computer.
Alternatively, the preprocessing may be: converting each history scan image into a grayscale image, and sampling the size of each converted history scan image to obtain history scan images of a target size.
The target size may be, for example, the size to which each grayscale history scan image is sampled, chosen according to the user's needs (or the requirements of the image recognition network model to be trained).
In the embodiment of the present invention, each history scan image is a 3-channel AS-OCT image of 21232×1866 pixels; to reduce the processing load on the computer, the amount of data needs to be reduced. Specifically, each history scan image may be converted into a grayscale image, so that only one channel of data need be taken subsequently, and the size of each grayscale image may then be downsampled to a certain number of pixels, for example 800×800 pixels.
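A minimal preprocessing sketch along these lines, using OpenCV; the function name and the 800×800 target size follow the example above and are illustrative, not prescribed:

```python
import cv2

def preprocess(image_path, target_size=(800, 800)):
    """Convert a 3-channel AS-OCT scan to a single-channel grayscale image of target size."""
    img = cv2.imread(image_path)                  # 3-channel history scan image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # keep only one channel of data
    return cv2.resize(gray, target_size, interpolation=cv2.INTER_AREA)  # downsample
```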
In the embodiment of the invention, in order to obtain more accurate predicted position information of the target tissue and reduce the quantization error of the positioning of the target tissue, a certain strategy can be adopted. The specific strategy is as follows:
Optionally, determining the predicted position information of the target tissue in each history scan image according to the output result of the image recognition network model may specifically be: for the output result of any history scan image, determining the original position information of the target tissue in the output result; cropping the output image based on the original position information of the target tissue to obtain a cropped scan image; and, based on the intermediate position information of the target tissue in the cropped scan image, mapping the intermediate position information into the output image to determine the predicted position information of the target tissue.
For example, the original position information may be the position information of the target tissue in the history scan image determined from the result output by the image recognition network model.
In the embodiment of the invention, the output result is an output image including the original position information mark. That is, the image recognition network model outputs an image in which the original position information is marked, such as the rightmost image of stage 1 in fig. 3.
The cropped scan image may be an image obtained by cropping the image output by the image recognition network model.
In the embodiment of the invention, the cropped scan image is obtained by cropping the image output by the image recognition network model; since the output image contains the target tissue, the cropped scan image correspondingly also contains the target tissue.
The intermediate position information may be the position information of the target tissue in the cropped scan image.
As shown in fig. 3, after the rightmost image (the output image) of stage 1 is obtained, the image is cropped to obtain cropped scan images. Specifically, the output image is cropped with the left and right target tissues as references, i.e. the left target tissue and the right target tissue are each cropped out of the output image, yielding the cropped scan images (the leftmost images of stage 2 in fig. 3, where the upper and lower images are the cropped scan images of the left and right target tissues respectively), as sketched below.
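A sketch of this cropping step, assuming a square crop centered on the stage-1 position; border clamping is omitted for brevity, and the 400-pixel default follows the example size used in this embodiment:

```python
def crop_around(image, center_xy, size=400):
    """Cut a size x size patch of the output image centered on the stage-1 position."""
    cx, cy = int(center_xy[0]), int(center_xy[1])
    x0, y0 = cx - size // 2, cy - size // 2   # top-left corner of the cropped scan image
    return image[y0:y0 + size, x0:x0 + size]  # one crop per (left/right) target tissue
```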
After the cropped scan images are obtained, the position information of the target tissue in each cropped scan image, i.e. the intermediate position information, is determined, giving the rightmost images of stage 2 in fig. 3: the upper image shows the position information of the left target tissue in its cropped scan image, and the lower image shows the position information of the right target tissue in its cropped scan image.
After the intermediate position information of the target tissue is obtained, it is mapped into the image output by the initial image recognition network model to obtain the predicted position information of the target tissue, which is thereby made more accurate.
It can be understood that determining the predicted position information of the target tissue as described above is divided into two stages (stage 1 and stage 2). Stage 1 obtains the rough position of the target tissue in the history scan image; the output image is then cropped around that position, yielding an image smaller than the output image, i.e. the cropped scan image (for example, the output image may be 800×800 pixels and the cropped scan image 400×400 pixels). In this way the original resolution is retained, the target tissue can be located accurately, and graphics card memory is saved. The position of the target tissue in the cropped scan image, i.e. the intermediate position information, is then determined, and finally the intermediate position information is mapped back into the output image to obtain accurate predicted position information of the target tissue.
In actual operation, if the predicted position information obtained after stage 1 and stage 2 is still considered inaccurate, the intermediate position information obtained in stage 2 may be refined repeatedly by repeating the stage 2 process (i.e. cropping the cropped scan image again to obtain a still smaller image and determining the position of the target tissue in it), until predicted position information of the target tissue that satisfies the user is obtained.
In the embodiment of the present invention, the predicted position information of the target tissue is determined from the intermediate position information obtained in stage 2 based on the following formula:
P = P_S1 + P_S2 - size/2
wherein P is the predicted position information, P_S1 is the original position information obtained in stage 1 (the center of the cropped scan image), P_S2 is the intermediate position information obtained in stage 2, and size is the pixel size of the cropped scan image.
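In code, the mapping is a single offset per coordinate, assuming (as the formula implies) a square crop of side size centered at the stage-1 position:

```python
def map_to_original(p_s1, p_s2, size):
    """Map a stage-2 position inside the crop back to output-image coordinates.

    p_s1 -- (x, y) original position information from stage 1 (the crop center)
    p_s2 -- (x, y) intermediate position information inside the cropped scan image
    size -- side length of the square crop in pixels
    """
    return (p_s1[0] + p_s2[0] - size / 2,
            p_s1[1] + p_s2[1] - size / 2)
```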
In the embodiment of the present invention, the network models of stage 1 and stage 2 are both the segmentation network UNet. The UNet structure consists of an encoder made up of four convolution-pooling layers and a decoder made up of four upsampling layers. Encoder and decoder layers at the same level are joined by skip connections, which pass the image information to the deeper network.
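A compact PyTorch sketch of such a UNet follows; the channel widths, two-convolution blocks and heatmap output head are common choices assumed here, not values taken from the patent:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic UNet building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    """Four conv-pool encoder levels, four upsampling decoder levels, skip connections."""
    def __init__(self, in_ch=1, out_ch=1, widths=(64, 128, 256, 512)):
        super().__init__()
        self.encoders = nn.ModuleList()
        c = in_ch
        for w in widths:
            self.encoders.append(conv_block(c, w))
            c = w
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(widths[-1], widths[-1] * 2)
        self.ups = nn.ModuleList()
        self.decoders = nn.ModuleList()
        c = widths[-1] * 2
        for w in reversed(widths):
            self.ups.append(nn.ConvTranspose2d(c, w, 2, stride=2))
            self.decoders.append(conv_block(w * 2, w))  # w*2: concatenated skip connection
            c = w
        self.head = nn.Conv2d(widths[0], out_ch, 1)  # per-pixel score (position heatmap)

    def forward(self, x):                # input height/width must be divisible by 16
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)              # kept for the same-level skip connection
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))  # skip connection to the decoder
        return self.head(x)
```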
In the embodiment of the invention, the labels of stage 1 and stage 2 are generated from a Gaussian distribution centered at the position of the target tissue.
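A sketch of such label generation as a 2-D Gaussian heatmap centered on the annotated position; sigma is an assumed hyperparameter not given in the patent:

```python
import numpy as np

def gaussian_label(height, width, center_xy, sigma=10.0):
    """Heatmap label: a 2-D Gaussian centered at the annotated target tissue position."""
    xs = np.arange(width)[None, :]    # column (x) coordinates, shape (1, width)
    ys = np.arange(height)[:, None]   # row (y) coordinates, shape (height, 1)
    cx, cy = center_xy
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2     # squared distance to the center
    return np.exp(-d2 / (2.0 * sigma ** 2))  # peak value 1 at the center
```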
S230, determining a first loss function based on each piece of predicted position information and each piece of history position information corresponding to each history scanning image.
For example, the first loss function may be a loss function of the image recognition network model to be trained determined based on each predicted position information and each historical position information corresponding to each historical scan image.
After the predicted position information corresponding to each history scan image is obtained, it is compared with the history position information corresponding to that history scan image, and the loss function of the image recognition network model to be trained, i.e. the first loss function, can be determined from the comparison result.
And S240, carrying out parameter adjustment on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that training of the image recognition network model is completed.
The first preset loss function threshold may be, for example, a preset threshold for the first loss function. When the first loss function is smaller than this threshold, training of the image recognition network model to be trained is considered complete.
The parameters of the image recognition network model to be trained are adjusted according to the obtained first loss function until the first loss function of any iteration is smaller than the first preset loss function threshold, at which point training of the image recognition network model to be trained is determined to be complete.
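A training-loop sketch under these rules; mean squared error between the predicted heatmap and the Gaussian label is an assumed concrete form of the first loss function, since the embodiment only fixes the stopping rule:

```python
import torch
import torch.nn as nn

def train_stage(model, loader, loss_threshold, lr=1e-4, max_epochs=100):
    """Adjust model parameters until the loss of any iteration falls below the threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()                   # assumed form of the first loss function
    for _ in range(max_epochs):
        for images, label_heatmaps in loader:  # history scan images and Gaussian labels
            loss = criterion(model(images), label_heatmaps)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < loss_threshold:   # first preset loss function threshold
                return model
    return model
```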
In this way, the original image of the subsequent target object can be processed based on the trained image recognition network model, so that the position information of the target tissue in the target object can be obtained quickly and accurately.
S250, acquiring an original image of the target object.
S260, inputting the original image into the trained image recognition network model to obtain the position information of the target tissue in the target object.
S270, determining index parameters of the target tissue in the original image based on the position information of the target tissue.
S280, inputting each index parameter into a trained class identification network model, determining a probability value of the target object belonging to the target class, and determining whether the target object belongs to the target class based on the probability value.
According to the technical scheme of this embodiment, the image recognition network model is trained using multiple sets of history images to obtain a trained image recognition network model, so that the original image of a subsequent target object can be processed based on it to obtain the position information of the target tissue in the target object quickly and accurately. This saves image recognition time and improves working efficiency; at the same time, because the position information of the target tissue is obtained by the image recognition network model, it does not need to be determined from a physician's empirical judgment, avoiding misjudgment and the influence of subjective factors.
Example III
Fig. 4 is a flowchart of an image processing method according to a third embodiment of the present invention, and the embodiment of the present invention may be combined with each of the alternatives in the foregoing embodiments. In an embodiment of the present invention, optionally, the determining the index parameters of the target tissue based on the position information includes: determining the trabecular iris angle based on the position information of the target tissue and an externally input angle opening distance; and determining the trabecular iris space area based on the position information of the target tissue and the angle opening distance.
As shown in fig. 4, the method in the embodiment of the present invention specifically includes the following steps:
s310, acquiring at least one group of historical images, wherein each group of historical images comprises: a history scan image of the target object, and history location information of the target tissue in the history scan image.
S320, inputting each group of historical scanning images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanning image according to the output result of the image recognition network model.
S330, determining a first loss function based on each piece of predicted position information and each piece of history position information corresponding to each history scanning image.
And S340, carrying out parameter adjustment on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that training of the image recognition network model is completed.
S350, acquiring an original image of the target object.
S360, inputting the original image into a trained image recognition network model to obtain the position information of the target tissue in the target object.
S370, determining the trabecular iris angle based on the position information of the target tissue and an externally input angle opening distance; and determining the trabecular iris space area based on the position information of the target tissue and the angle opening distance.
Illustratively, in the embodiment of the present invention, the index parameters of the target tissue may be, but are not limited to, at least one of the following: the trabecular iris angle and the trabecular iris space area.
In an embodiment of the present invention, the target tissue is exemplified by scleral spur.
Referring to the partial schematic view of the eyeball in fig. 5: 1 is the cornea, 2 the scleral spur, 3 the sclera, 4 the ciliary body and 5 the iris.
According to the structure of the eyeball shown in fig. 5, fig. 6 schematically shows how the index parameters of the target tissue are determined; in fig. 6, point 2 is the scleral spur, 3 the sclera and 4 the ciliary body. In view (a) of fig. 6, the angle opening distance (AOD) may be selected according to the user's requirements, for example 500 μm or 750 μm, which is not limited here.
As shown in view (a) of fig. 6, the AOD is measured extending outward from the scleral spur at a distance N from it; for example, when N is 500, the parameter is the AOD500. "Outward" here refers to the rightward direction in view (a) of fig. 6.
As shown in view (b) of fig. 6, extension lines are drawn from the corneal inner layer and the iris inner layer according to the AOD value; the two extension lines (extension lines P and Q in view (b) of fig. 6) intersect at a point, and the included angle between them is the trabecular iris angle (TIA).
As shown in view (c) of fig. 6, based on the AOD value and the position information of the target tissue, the area enclosed between the boundary of the target tissue and the AOD is the trabecular iris space area (TISA).
According to the above calculation methods, each index parameter of the target tissue can be obtained. Fig. 7 is a simplified diagram of the index parameters of the target tissue, and fig. 8 is an AS-OCT diagram of the index parameters of the target tissue, in which SS denotes the scleral spur.
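The following sketch turns these definitions into code for one angle. The boundary points at the measurement distance are assumed to be given, the TIA apex is approximated by the scleral spur, and the TISA region is approximated by a triangle, so this is illustrative geometry rather than the clinical measurement procedure:

```python
import math

def index_parameters(ss, cornea_pt, iris_pt):
    """Compute AOD, TIA and TISA for one anterior chamber angle.

    ss        -- (x, y) scleral spur position
    cornea_pt -- point on the corneal inner boundary at distance N from the spur
    iris_pt   -- point on the iris inner boundary opposite cornea_pt
    """
    # AOD: distance between the corneal and iris boundaries at the chosen distance N
    aod = math.dist(cornea_pt, iris_pt)
    # TIA: included angle between the lines from the apex (approximated here by the
    # scleral spur) toward the two boundary points
    a1 = math.atan2(cornea_pt[1] - ss[1], cornea_pt[0] - ss[0])
    a2 = math.atan2(iris_pt[1] - ss[1], iris_pt[0] - ss[0])
    tia = abs(a1 - a2)
    tia = min(tia, 2 * math.pi - tia)   # keep the angle in [0, pi]
    # TISA: area bounded by the spur and the two boundary points, approximated here
    # by the triangle over those three vertices (shoelace formula)
    (x0, y0), (x1, y1), (x2, y2) = ss, cornea_pt, iris_pt
    tisa = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
    return aod, tia, tisa
```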
In this way, the index parameters of the target tissue can be calculated from the position information of the target tissue, so that whether the target object belongs to the target class can subsequently be judged based on these index parameters. In addition, the open or closed state of the anterior chamber angle can be judged roughly.
S380, inputting each index parameter into the trained class identification network model, determining the probability value of the target object belonging to the target class, and determining whether the target object belongs to the target class based on the probability value.
According to the technical scheme of this embodiment, the trabecular iris angle is determined from the position information of the target tissue and the externally input angle opening distance, and the trabecular iris space area is determined from the position information of the target tissue and the angle opening distance, so that the index parameters of the target tissue can be calculated from its position information and whether the target object belongs to the target class can subsequently be judged based on these index parameters.
Example IV
Fig. 9 is a flowchart of an image processing method according to a fourth embodiment of the present invention, and the embodiment of the present invention may be combined with each of the alternatives in the foregoing embodiments. In an embodiment of the present invention, optionally, the training method of the class identification network model includes: acquiring multiple sets of history parameter information, wherein each set of history parameter information comprises: a historical index parameter, and a history label of the target object corresponding to the historical index parameter, wherein the history label is one of: the target object belongs to the target category, and the target object does not belong to the target category; inputting each set of history parameter information into a class identification network model to be trained, and determining a calculated probability value that the target object corresponding to each historical index parameter belongs to the target class; determining a calculated history label corresponding to each historical index parameter based on the calculated probability value; determining a second loss function based on the calculated history labels and the history labels corresponding to the historical index parameters; and adjusting the parameters of the class identification network model based on the second loss function until the second loss function of any iteration is smaller than a second preset loss function threshold, at which point training of the class identification network model is determined to be complete.
As shown in fig. 9, the method in the embodiment of the present invention specifically includes the following steps:
s401, acquiring at least one group of historical images, wherein each group of historical images comprises: a history scan image of the target object, and history location information of the target tissue in the history scan image.
S402, inputting each group of historical scanning images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each historical scanning image according to the output result of the image recognition network model.
S403, determining a first loss function based on each piece of predicted position information and each piece of history position information corresponding to each history scanning image.
S404, parameter adjustment is carried out on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and the completion of training of the image recognition network model is determined.
S405, acquiring multiple sets of history parameter information, wherein each set of history parameter information comprises: a historical index parameter, and a history label of the target object corresponding to the historical index parameter, wherein the history label is one of: the target object belongs to the target class, and the target object does not belong to the target class.
For example, before determining the probability value that the target object belongs to the target class using the class identification network model, the class identification network model is first trained so that the trained class identification network model can be used to determine the probability value that the target object belongs to the target class.
The history parameter information may be information related to previously obtained index parameters of the target tissue.
Specifically, taking any set of history parameter information as an example, each set of history parameter information includes: a historical index parameter, and the history label of the target object corresponding to that historical index parameter.
The historical index parameters may be previously acquired index parameters of the target tissue, obtained, for example, from the position information of the target tissue in a history scan image.
The history label may be the label of the target object corresponding to the historical index parameter, and is one of two labels: the target object belongs to the target category, or the target object does not belong to the target category.
S406, inputting each set of history parameter information into the class identification network model to be trained, and determining the calculated probability value that the target object corresponding to each historical index parameter belongs to the target class.
The calculated probability value may be the probability, predicted by the class identification network model, that the target object corresponding to the historical index parameter belongs to the target category.
The acquired sets of history parameter information are input into the class identification network model to be trained, and the calculated probability value that the target object corresponding to each historical index parameter belongs to the target class is determined based on the class identification network model.
S407, determining the calculated history label corresponding to each historical index parameter based on the calculated probability value.
For example, the calculated history label may be the label corresponding to a historical index parameter, determined from the calculated probability value.
From the obtained calculated probability value, the calculated history label corresponding to each historical index parameter can be determined.
Specifically, a threshold for the calculated probability value is preset. When the obtained calculated probability value is greater than the threshold, the label corresponding to the historical index parameter is determined to be: the target object belongs to the target class. When the obtained calculated probability value is smaller than the threshold, the label corresponding to the historical index parameter is determined to be: the target object does not belong to the target class.
For example, suppose the calculated probability value of a certain historical index parameter obtained from the class identification network model to be trained is 0.8, the preset threshold of the calculated probability value is 0.5, the target object is an eyeball, the target tissue is the scleral spur, and the target class is PACD. Since 0.8 > 0.5, the target object (eyeball) corresponding to this historical index parameter is judged to suffer from PACD.
S408, determining a second loss function based on the calculated history labels and the history labels corresponding to the historical index parameters.
For example, the second loss function may be the loss function of the class identification network model to be trained, determined from the calculated history label and the history label corresponding to each historical index parameter.
After the calculated history labels corresponding to the historical index parameters are obtained, they are compared with the history labels corresponding to the historical index parameters, and the loss function of the class identification network model to be trained, i.e. the second loss function, can be determined from the comparison result.
S409, adjusting the parameters of the class identification network model based on the second loss function until the second loss function of any iteration is smaller than the second preset loss function threshold, at which point training of the class identification network model is determined to be complete.
The second preset loss function threshold may be, for example, a preset threshold for the second loss function. When the second loss function is smaller than this threshold, training of the class identification network model to be trained is considered complete.
The parameters of the class identification network model to be trained are adjusted according to the obtained second loss function until the second loss function of any iteration is smaller than the second preset loss function threshold, at which point training of the class identification network model to be trained is determined to be complete.
In this way, the index parameters of a subsequent target object can be processed based on the trained class identification network model, so as to obtain the probability value of the target object belonging to the target class quickly and accurately and to determine from that probability value whether the target object belongs to the target class.
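A sketch of this training procedure, pairing an assumed small multi-layer perceptron (one of the model types the embodiment names) with binary cross-entropy as an assumed concrete form of the second loss function:

```python
import torch
import torch.nn as nn

class CategoryNet(nn.Module):
    """Small multi-layer perceptron mapping index parameters to a class logit."""
    def __init__(self, n_params=6):        # e.g. AOD250/500/750 and TISA250/500/750
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 32), nn.ReLU(),
            nn.Linear(32, 1))              # raw logit; sigmoid is applied in the loss

    def forward(self, x):
        return self.net(x)

def train_classifier(model, loader, loss_threshold, lr=1e-3, max_epochs=200):
    """Adjust parameters until the second loss function falls below its threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()     # assumed form of the second loss function
    for _ in range(max_epochs):
        for params, labels in loader:      # historical index parameters, labels in {0, 1}
            loss = criterion(model(params), labels)   # labels shaped like the logits
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < loss_threshold:          # second preset threshold
                return model
    return model
```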
S410, acquiring an original image of the target object.
S411, inputting the original image into the trained image recognition network model to obtain the position information of the target tissue in the target object.
S412, determining the trabecular iris angle based on the position information of the target tissue and the externally input angle opening distance; and determining the trabecular iris space area based on the position information of the target tissue and the angle opening distance.
S413, inputting each index parameter into the trained class identification network model, determining the probability value of the target object belonging to the target class, and determining whether the target object belongs to the target class based on the probability value.
Optionally, the determining, based on the probability value, whether the target object belongs to the target category includes: when the probability value is greater than or equal to a preset probability threshold value, determining that the target object belongs to the target category; and when the probability value is smaller than a preset probability threshold value, determining that the target object does not belong to the target category.
The preset probability threshold may be a threshold of a preset probability value that the target object belongs to the target class, for example.
When the obtained probability value of the target object belonging to the target category is greater than or equal to a preset probability threshold value, determining that the target object belongs to the target category; and when the obtained probability value of the target object belonging to the target category is smaller than a preset probability threshold value, determining that the target object does not belong to the target category.
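In code, this decision rule reduces to a single comparison (the 0.5 default is an assumed threshold value):

```python
def belongs_to_target_category(probability, threshold=0.5):
    """True if the target object is judged to belong to the target category (e.g. PACD)."""
    return probability >= threshold
```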
According to the technical scheme of this embodiment, the class identification network model is trained using multiple sets of history parameter information to obtain a trained class identification network model, so that the index parameters of a subsequent target object can be processed based on it to obtain the probability value of the target object belonging to the target class quickly and accurately, and whether the target object belongs to the target class can be determined from that probability value. This saves the time needed to determine the target class and improves working efficiency; at the same time, because whether the target object belongs to the target class is determined by the class identification network model, no determination from a physician's empirical judgment is needed, avoiding misjudgment and the influence of subjective factors.
Example five
Fig. 10 is a schematic structural diagram of an image processing apparatus according to a fifth embodiment of the present invention, as shown in fig. 10, the apparatus includes: an original image acquisition module 31, a target position information determination module 32, an index parameter determination module 33, and a probability value determination module 34.
Wherein, the original image acquisition module 31 is used for acquiring an original image of the target object;
a target location information determining module 32, configured to input the original image into a trained image recognition network model, to obtain location information of a target tissue in the target object;
an index parameter determining module 33, configured to determine an index parameter of the target tissue in the original image based on the position information;
the probability value determining module 34 is configured to input each of the index parameters into a trained class identification network model, determine a probability value of the target object belonging to the target class, and determine whether the target object belongs to the target class based on the probability value.
On the basis of the technical scheme of the embodiment of the invention, the device further comprises:
a history image acquisition module, configured to acquire at least one set of history images, where each set of history images includes: a history scan image of the target object, and history position information of the target tissue in the history scan image;
The prediction position information determining module is used for inputting each group of historical scanning images into an image recognition network model to be trained, and determining the prediction position information of the target tissue in each historical scanning image according to the output result of the image recognition network model;
a first loss function determining module, configured to determine a first loss function based on each of the predicted position information and each of the historical position information corresponding to each of the historical scan images;
and the image recognition network model training completion determining module is used for carrying out parameter adjustment on the image recognition network model based on the first loss function until the first loss function of any iteration is smaller than a first preset loss function threshold value, and determining that the image recognition network model training is completed.
On the basis of the technical scheme of the embodiment of the invention, the device further comprises:
an image preprocessing module, configured to convert each historical scan image into a grayscale image, and resample each grayscale-converted historical scan image to obtain historical scan images of a target size.
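A short OpenCV sketch of this preprocessing step follows; the 256x256 target size is an assumed example value:

```python
import cv2

TARGET_SIZE = (256, 256)  # hypothetical target size (width, height)

def preprocess(scan_image, target_size=TARGET_SIZE):
    """Convert a historical scan image to grayscale, then resample it
    to the target size."""
    gray = cv2.cvtColor(scan_image, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, target_size, interpolation=cv2.INTER_AREA)
```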
On the basis of the technical scheme of the embodiment of the invention, the predicted position information determining module comprises:
an original position information determining unit, configured to determine, for the output result of any one of the historical scan images, original position information of the target tissue in the output result, wherein the output result is an output image including the original position information mark;
a cropped scan image determining unit, configured to crop the output image based on the original position information of the target tissue to obtain a cropped scan image, wherein the cropped scan image includes the target tissue;
and a predicted position information determining unit, configured to determine the predicted position information of the target tissue by mapping the intermediate position information of the target tissue in the cropped scan image back into the output image.
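The crop-then-map-back refinement can be sketched as follows; the bounding-box representation of the original position information and the locate_in_crop callable are hypothetical assumptions:

```python
import numpy as np

def refine_position(output_image: np.ndarray,
                    original_box: tuple,
                    locate_in_crop) -> tuple:
    """Crop around the original (coarse) position, re-locate the tissue
    inside the crop, then map the intermediate position back into
    output-image coordinates to obtain the predicted position."""
    x0, y0, x1, y1 = original_box            # original position information
    cropped = output_image[y0:y1, x0:x1]     # cropped scan image with the tissue
    cx, cy = locate_in_crop(cropped)         # intermediate position in the crop
    return x0 + cx, y0 + cy                  # predicted position in the output image
```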
Optionally, the index parameter includes at least one of the following: the trabecular meshwork iris angle and the area between the trabecular meshwork irises.
On the basis of the technical solution of the embodiment of the present invention, the index parameter determining module 33 is specifically configured to:
determining the trabecular meshwork iris angle based on the position information of the target tissue and the externally input angle opening distance; and determining the area between the trabecular meshwork irises based on the position information of the target tissue and the angle opening distance.
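As a simplified geometric illustration only (the disclosure does not fix a coordinate convention, so the landmark inputs below — the angle apex and the endpoints bounding the angle opening distance segment — are assumptions), the angle and area could be computed as follows:

```python
import numpy as np

def trabecular_iris_angle(apex, cornea_pt, iris_pt):
    """Angle in degrees at the angle apex between the corneal and iris arms."""
    u = np.asarray(cornea_pt, float) - np.asarray(apex, float)
    v = np.asarray(iris_pt, float) - np.asarray(apex, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def trabecular_iris_area(boundary_points):
    """Shoelace area of the polygon bounded by the corneal wall, the iris
    surface, and the angle opening distance segment."""
    pts = np.asarray(boundary_points, float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
```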
On the basis of the technical scheme of the embodiment of the invention, the device further comprises:
a historical parameter acquisition module, configured to acquire multiple sets of historical parameter information, wherein each set of historical parameter information comprises: a historical index parameter, and a history tag of the target object corresponding to the historical index parameter, wherein the history tag comprises: the target object belongs to the target category, and the target object does not belong to the target category;
a calculated probability value determining module, configured to input each set of historical parameter information into a class identification network model to be trained, and determine a calculated probability value that the target object corresponding to each historical index parameter belongs to the target category;
a calculated history tag determining module, configured to determine the calculated history tag corresponding to each historical index parameter based on the calculated probability value;
a second loss function determining module, configured to determine a second loss function based on each calculated history tag and the history tag corresponding to each historical index parameter;
and a class identification network model training completion determining module, configured to perform parameter adjustment on the class identification network model based on the second loss function until the second loss function of a certain iteration is smaller than a second preset loss function threshold, and determine that training of the class identification network model is complete.
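For the class identification network model, a hedged PyTorch sketch is given below, assuming a small multilayer perceptron over the index parameters and binary cross-entropy as the form of the second loss function; all of these choices are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ClassIdentificationNet(nn.Module):
    """Tiny MLP mapping index parameters to a target-category probability."""
    def __init__(self, n_params: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

def train_class_model(model, param_loader,
                      loss_threshold: float = 1e-2,
                      lr: float = 1e-3, max_epochs: int = 100):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()  # assumed form of the second loss function
    for _ in range(max_epochs):
        for params, labels in param_loader:  # labels: 1.0 = target category
            loss = criterion(model(params), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < loss_threshold:
                return model  # second loss below the preset threshold
    return model
```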
On the basis of the technical solution of the embodiment of the present invention, the probability value determining module 34 includes:
a first judging unit, configured to determine that the target object belongs to the target category when the probability value is greater than or equal to a preset probability threshold;
and a second judging unit, configured to determine that the target object does not belong to the target category when the probability value is smaller than the preset probability threshold.
The image processing apparatus provided by the embodiment of the present invention can execute the image processing method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method.
Example six
Fig. 11 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention. As shown in fig. 11, the electronic device includes a processor 70, a memory 71, an input device 72, and an output device 73. The number of processors 70 in the electronic device may be one or more; one processor 70 is taken as an example in fig. 11. The processor 70, the memory 71, the input device 72, and the output device 73 in the electronic device may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 11.
The memory 71, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the image processing method in the embodiment of the present invention (e.g., the original image acquisition module 31, the target position information determination module 32, the index parameter determination module 33, and the probability value determination module 34). The processor 70 runs the software programs, instructions, and modules stored in the memory 71 to execute the various functional applications and data processing of the electronic device, that is, to implement the above-described image processing method.
The memory 71 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal, and the like. In addition, the memory 71 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the memory 71 may further include memory remotely located relative to the processor 70, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output means 73 may comprise a display device such as a display screen.
Example seven
The seventh embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an image processing method.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform the related operations in the image processing method provided in any embodiment of the present invention.
From the above description of the embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software plus necessary general-purpose hardware, or by hardware alone, although in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and includes several instructions for causing a computer electronic device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the above embodiment of the image processing apparatus, the included units and modules are divided according to functional logic only; the division is not limited to the above as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only for distinguishing them from one another and are not intended to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, it is not limited to them, and may be embodied in many other equivalent forms without departing from its spirit, the scope of which is defined by the appended claims.

Claims (7)

1. An image processing method, comprising:
acquiring an original image of a target object, wherein the original image is a scanned image of an eyeball;
inputting the original image into a trained image recognition network model to obtain the position information of a target tissue in the target object;
determining index parameters of the target tissue in the original image based on the position information of the target tissue;
inputting each index parameter into a trained class identification network model, determining a probability value of the target object belonging to a target category, and determining whether the target object belongs to the target category based on the probability value, wherein the target category is the category to which the target object belongs;
wherein the index parameter comprises at least one of the following: the angle opening distance, the trabecular meshwork iris angle, and the area between the trabecular meshwork irises;
the determining, based on the location information of the target tissue, an index parameter of the target tissue in the original image includes:
determining the trabecular meshwork iris angle based on the position information of the target tissue and the position information of a reference tissue;
determining the angle opening distance based on the position information of the target tissue and the distance from the position information of the target tissue to the apex of the trabecular meshwork iris angle; and
determining the area between the trabecular meshwork irises based on the position information of the target tissue and the angle opening distance;
the image recognition network model is obtained based on training of a plurality of groups of historical images;
the training method of the image recognition network model comprises the following steps:
acquiring at least one set of history images, wherein each set of history images comprises: a history scan image of the target object, and history position information of the target tissue in the history scan image;
inputting each group of history scanning images into an image recognition network model to be trained, and determining the predicted position information of the target tissue in each history scanning image according to the output result of the image recognition network model;
determining a first loss function based on each of the predicted position information and each of the historical position information corresponding to each of the historical scan images;
performing parameter adjustment on the image recognition network model based on the first loss function until the first loss function of a certain iteration is smaller than a first preset loss function threshold, and determining that training of the image recognition network model is complete;
wherein the determining the predicted position information of the target tissue in each history scan image according to the output result of the image recognition network model includes:
for the output result of any one of the historical scan images, determining original position information of the target tissue in the output result, wherein the output result is an output image comprising an original position information mark;
cropping the output image based on the original position information of the target tissue to obtain a cropped scan image, wherein the cropped scan image comprises the target tissue;
and based on the intermediate position information of the target tissue in the cropped scan image, mapping the intermediate position information into the output image to determine the predicted position information of the target tissue.
2. The method of claim 1, wherein prior to said inputting each set of said historical scan images into an image recognition network model to be trained, said method further comprises:
converting each historical scan image into a grayscale image, and resampling each grayscale-converted historical scan image to obtain historical scan images of a target size.
3. The method of claim 1, wherein the class-identifying network model is trained based on a plurality of sets of historical parameter information;
The training method of the category identification network model comprises the following steps:
obtaining multiple sets of historical parameter information, wherein each set of historical parameter information comprises: a historical index parameter, and a history tag of the target object corresponding to the historical index parameter, wherein the history tag comprises: the target object belongs to the target category, and the target object does not belong to the target category;
inputting each set of historical parameter information into a class identification network model to be trained, and determining a calculated probability value that the target object corresponding to each historical index parameter belongs to the target category;
determining a calculated history tag corresponding to each historical index parameter based on the calculated probability value;
determining a second loss function based on each calculated history tag and the history tag corresponding to each historical index parameter;
and performing parameter adjustment on the class identification network model based on the second loss function until the second loss function of a certain iteration is smaller than a second preset loss function threshold, and determining that training of the class identification network model is complete.
4. The method of claim 1, wherein determining whether the target object belongs to the target category based on the probability value comprises:
when the probability value is greater than or equal to a preset probability threshold, determining that the target object belongs to the target category;
and when the probability value is smaller than the preset probability threshold, determining that the target object does not belong to the target category.
5. An image processing apparatus, comprising:
the original image acquisition module is used for acquiring an original image of a target object, wherein the original image is a scanned image of an eyeball;
the target position information determining module is used for inputting the original image into a trained image recognition network model to obtain the position information of target tissues in the target object;
an index parameter determining module, configured to determine an index parameter of the target tissue in the original image based on the location information;
a probability value determining module, configured to input each index parameter into a trained class identification network model, determine a probability value of the target object belonging to a target category, and determine whether the target object belongs to the target category based on the probability value, wherein the target category is the category to which the target object belongs;
wherein the index parameter comprises at least one of the following: the angle opening distance, the trabecular meshwork iris angle, and the area between the trabecular meshwork irises;
The index parameter determining module is specifically configured to:
determining the trabecular meshwork iris angle based on the position information of the target tissue and the position information of a reference tissue;
determining the angle opening distance based on the position information of the target tissue and the distance from the position information of the target tissue to the apex of the trabecular meshwork iris angle; and
determining the area between the trabecular meshwork irises based on the position information of the target tissue and the angle opening distance;
the image recognition network model is obtained based on training of a plurality of groups of historical images; the apparatus further comprises:
a history image acquisition module, configured to acquire at least one set of history images, where each set of history images includes: a history scan image of the target object, and history position information of the target tissue in the history scan image;
a predicted position information determining module, configured to input each set of historical scan images into an image recognition network model to be trained, and determine the predicted position information of the target tissue in each historical scan image according to the output result of the image recognition network model;
A first loss function determining module, configured to determine a first loss function based on each of the predicted position information and each of the historical position information corresponding to each of the historical scan images;
an image recognition network model training completion determining module, configured to perform parameter adjustment on the image recognition network model based on the first loss function until the first loss function of a certain iteration is smaller than a first preset loss function threshold, and determine that training of the image recognition network model is complete;
wherein the predicted position information determining module comprises:
an original position information determining unit, configured to determine, for the output result of any one of the historical scan images, original position information of the target tissue in the output result, wherein the output result is an output image including the original position information mark;
a cropped scan image determining unit, configured to crop the output image based on the original position information of the target tissue to obtain a cropped scan image, wherein the cropped scan image includes the target tissue;
and a predicted position information determining unit, configured to determine the predicted position information of the target tissue by mapping the intermediate position information of the target tissue in the cropped scan image back into the output image.
6. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any of claims 1-4.
7. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the image processing method of any of claims 1-4.
CN202110221152.4A 2021-02-26 2021-02-26 Image processing method, device, electronic equipment and storage medium Active CN112950577B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110221152.4A CN112950577B (en) 2021-02-26 2021-02-26 Image processing method, device, electronic equipment and storage medium
JP2021200306A JP7257645B2 (en) 2021-02-26 2021-12-09 Image processing method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110221152.4A CN112950577B (en) 2021-02-26 2021-02-26 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112950577A CN112950577A (en) 2021-06-11
CN112950577B true CN112950577B (en) 2024-01-16

Family

ID=76246686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110221152.4A Active CN112950577B (en) 2021-02-26 2021-02-26 Image processing method, device, electronic equipment and storage medium

Country Status (2)

Country Link
JP (1) JP7257645B2 (en)
CN (1) CN112950577B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116230144B (en) * 2023-05-04 2023-07-25 小米汽车科技有限公司 Model generation method, material information determination method, device, equipment and medium
CN117440172B (en) * 2023-12-20 2024-03-19 江苏金融租赁股份有限公司 Picture compression method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985214A (en) * 2018-07-09 2018-12-11 上海斐讯数据通信技术有限公司 The mask method and device of image data
CN110010219A (en) * 2019-03-13 2019-07-12 杭州电子科技大学 Optical coherence tomography image retinopathy intelligent checking system and detection method
CN110766659A (en) * 2019-09-24 2020-02-07 西人马帝言(北京)科技有限公司 Medical image recognition method, apparatus, device and medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7905599B2 (en) * 2009-04-30 2011-03-15 University Of Southern California Methods for diagnosing glaucoma utilizing combinations of FD-OCT measurements from three anatomical regions of the eye
US10123691B1 (en) 2016-03-15 2018-11-13 Carl Zeiss Meditec, Inc. Methods and systems for automatically identifying the Schwalbe's line
JP2019177032A (en) 2018-03-30 2019-10-17 株式会社ニデック Ophthalmologic image processing device and ophthalmologic image processing program
US11357479B2 (en) 2018-05-24 2022-06-14 Arcscan, Inc. Method for measuring behind the iris after locating the scleral spur
JP7341874B2 (en) 2018-12-26 2023-09-11 キヤノン株式会社 Image processing device, image processing method, and program
CN112233135A (en) 2020-11-11 2021-01-15 清华大学深圳国际研究生院 Retinal vessel segmentation method in fundus image and computer-readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985214A (en) * 2018-07-09 2018-12-11 上海斐讯数据通信技术有限公司 The mask method and device of image data
CN110010219A (en) * 2019-03-13 2019-07-12 杭州电子科技大学 Optical coherence tomography image retinopathy intelligent checking system and detection method
CN110766659A (en) * 2019-09-24 2020-02-07 西人马帝言(北京)科技有限公司 Medical image recognition method, apparatus, device and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Segmentation and Quantification for Angle-Closure Glaucoma Assessment in Anterior Segment OCT; Huazhu Fu et al.; IEEE Transactions on Medical Imaging; 2017-09-30; Vol. 36, No. 9; pp. 1930-1938 *
Research on detection of cerebral cortex blood vessels and human eye anterior chamber angle based on optical coherence tomography; Gao Yingzhe; China Master's Theses Full-text Database, Medicine and Health Sciences; 2020-03-15 (No. 3); p. E070-13 *

Also Published As

Publication number Publication date
JP2022132072A (en) 2022-09-07
CN112950577A (en) 2021-06-11
JP7257645B2 (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN110400289B (en) Fundus image recognition method, fundus image recognition device, fundus image recognition apparatus, and fundus image recognition storage medium
US11270169B2 (en) Image recognition method, storage medium and computer device
CN112950577B (en) Image processing method, device, electronic equipment and storage medium
JP6842481B2 (en) 3D quantitative analysis of the retinal layer using deep learning
CN108765392B (en) Digestive tract endoscope lesion detection and identification method based on sliding window
US20240074658A1 (en) Method and system for measuring lesion features of hypertensive retinopathy
CN112233087A (en) Artificial intelligence-based ophthalmic ultrasonic disease diagnosis method and system
Sun et al. Optic disc segmentation from retinal fundus images via deep object detection networks
CN114627067B (en) Wound area measurement and auxiliary diagnosis and treatment method based on image processing
CN114821189B (en) Focus image classification and identification method based on fundus image
CN113053517B (en) Facial paralysis grade evaluation method based on dynamic region quantitative indexes
CN110738643A (en) Method for analyzing cerebral hemorrhage, computer device and storage medium
US20240005494A1 (en) Methods and systems for image quality assessment
CN112580404A (en) Ultrasonic parameter intelligent control method, storage medium and ultrasonic diagnostic equipment
JPWO2019073962A1 (en) Image processing apparatus and program
CN111160431B (en) Method and device for identifying keratoconus based on multi-dimensional feature fusion
CN109816665B (en) Rapid segmentation method and device for optical coherence tomography image
CN116030042B (en) Diagnostic device, method, equipment and storage medium for doctor's diagnosis
US20240020839A1 (en) Medical image processing device, medical image processing program, and medical image processing method
KR20220138069A (en) Method, apparatus and software program for cervical cancer diagnosis using image analysis of artificial intelligence based technology
CN112634221A (en) Image and depth-based cornea level identification and lesion positioning method and system
Rashid et al. A Detectability Analysis of Retinitis Pigmetosa Using Novel SE-ResNet Based Deep Learning Model and Color Fundus Images
CN111640127A (en) Accurate clinical diagnosis navigation method for orthopedics department
CN112991289B (en) Processing method and device for standard section of image
CN114820537A (en) Dry eye FBUT detection method and system based on deep learning and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant