CN113850203B - Adhesion detection model training method, adhesion detection method and related device - Google Patents

Adhesion detection model training method, adhesion detection method and related device

Info

Publication number
CN113850203B
CN113850203B (application CN202111144365.8A)
Authority
CN
China
Prior art keywords
loss function
image
dark
initial model
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111144365.8A
Other languages
Chinese (zh)
Other versions
CN113850203A (en)
Inventor
方慧卉
许言午
刘军伟
黄艳
张秀兰
李飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Zhongshan Ophthalmic Center
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Zhongshan Ophthalmic Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd, Zhongshan Ophthalmic Center filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111144365.8A priority Critical patent/CN113850203B/en
Publication of CN113850203A publication Critical patent/CN113850203A/en
Priority to PCT/CN2022/121945 priority patent/WO2023051563A1/en
Application granted granted Critical
Publication of CN113850203B publication Critical patent/CN113850203B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F18/00 Pattern recognition > G06F18/20 Analysing > G06F18/24 Classification techniques
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T7/00 Image analysis > G06T7/0002 Inspection of images, e.g. flaw detection > G06T7/0012 Biomedical image inspection
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/10 Image acquisition modality > G06T2207/10072 Tomographic images > G06T2207/10104 Positron emission tomography [PET]
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20081 Training; Learning
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/30 Subject of image; Context of image processing > G06T2207/30004 Biomedical image processing > G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The disclosure provides an adhesion detection model training method, an adhesion detection method, a related device, an electronic device, a computer-readable storage medium and a computer program product, and relates to artificial intelligence technologies such as computer vision and deep learning. One embodiment of the method comprises: acquiring a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography in a bright environment and a dark environment, together with a labeling result indicating whether adhesion is present in the eye object; extracting bright features and dark features using a feature extraction module of an initial model; inputting the bright features and the dark features into a comparison module of the initial model; taking the labeling result as the output of the initial model; determining a contrast loss function for supervised training based on the difference features between the bright features and the dark features; training the initial model; and ending training when the contrast loss function satisfies a preset first jump-out condition, thereby obtaining an adhesion detection model. The adhesion detection model provided by this embodiment can be used to identify whether adhesion exists in an eye image.

Description

Adhesion detection model training method, adhesion detection method and related device
Technical Field
The present disclosure relates to the field of image processing, in particular to artificial intelligence technologies such as computer vision and deep learning, and more particularly to an adhesion detection model training method and an adhesion detection method, together with a corresponding apparatus, electronic device, computer-readable storage medium, and computer program product.
Background
Optical coherence tomography (OCT) is a new type of tomography with great development prospects, and a particularly attractive application prospect in the biopsy and imaging of biological tissue. It has already been applied on a trial basis to clinical diagnosis in ophthalmology, dentistry and dermatology, represents another technical breakthrough after X-ray and nuclear magnetic resonance imaging, and has developed rapidly in recent years.
On this basis, anterior segment optical coherence tomography (AS-OCT) was developed. It is non-contact and simple to operate, and because it images structures such as the iris and cornea well, it has become a new imaging modality for evaluating the anterior segment of the eye.
Disclosure of Invention
The embodiment of the disclosure provides an adhesion detection model training method, an adhesion detection device, electronic equipment, a computer readable storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides an adhesion detection model training method, including: acquiring a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography and a labeling result indicating whether adhesion is present in the eye object, where the bright image in the bright-dark image pair is captured in a bright environment and the dark image is captured in a dark environment; extracting the bright features of the bright image and the dark features of the dark image, respectively, using a feature extraction module of an initial model; inputting the bright features and the dark features into a comparison module of the initial model, taking the labeling result as the output of the initial model, determining a contrast loss function for supervised training of the initial model based on the difference features between the bright features and the dark features, and then training the initial model; and, in response to the contrast loss function satisfying a preset first jump-out condition, outputting the initial model trained to the current state as the adhesion detection model.
In a second aspect, an embodiment of the present disclosure provides an adhesion detection model training apparatus, including: a training sample acquisition unit configured to acquire a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography and a labeling result indicating whether adhesion is present in the eye object, where the bright image in the bright-dark image pair is captured in a bright environment and the dark image is captured in a dark environment; a feature extraction unit configured to extract the bright feature of the bright image and the dark feature of the dark image, respectively, using a feature extraction module of an initial model; a contrast feature training unit configured to input the bright feature and the dark feature into a comparison module of the initial model, take the labeling result as the output of the initial model, determine a contrast loss function for supervised training of the initial model based on the difference feature between the bright feature and the dark feature, and then train the initial model; and a model generation unit configured to output the initial model trained to the current state as the adhesion detection model in response to the contrast loss function satisfying a preset first jump-out condition.
In a third aspect, an embodiment of the present disclosure provides an adhesion detection method, including: acquiring an eye image to be detected, the eye image to be detected being an eye image obtained by anterior segment optical coherence tomography; and calling an adhesion detection model to carry out adhesion detection on the eye image to be detected, where the adhesion detection model is obtained according to the adhesion detection model training method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides an adhesion detection apparatus, including: an eye image to be detected acquisition unit configured to acquire an eye image to be detected; and a calling model detection unit configured to call an adhesion detection model to detect the eye image to be detected, wherein the adhesion detection model is obtained according to the adhesion detection model training device described in any one of the implementation manners of the second aspect.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; where the memory stores instructions executable by the at least one processor so that, when executed, the at least one processor can perform the adhesion detection model training method described in any implementation of the first aspect or the adhesion detection method described in any implementation of the third aspect.
In a sixth aspect, the disclosed embodiments provide a non-transitory computer-readable storage medium storing computer instructions for enabling a computer to implement the adhesion detection model training method as described in any implementation manner of the first aspect or the adhesion detection method as described in any implementation manner of the third aspect when executed.
In a seventh aspect, the present disclosure provides a computer program product including a computer program, which when executed by a processor is capable of implementing the adhesion detection model training method as described in any implementation manner of the first aspect or the adhesion detection method as described in any implementation manner of the third aspect.
The adhesion detection model training method and adhesion detection method provided by the embodiments of the disclosure acquire a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography in a bright environment and a dark environment, together with a labeling result indicating whether adhesion is present in the eye object; extract bright features and dark features using a feature extraction module of an initial model; input the bright features and the dark features into a comparison module of the initial model; take the labeling result as the output of the initial model; determine a contrast loss function for supervised training based on the difference features between the bright features and the dark features; train the initial model; and end training when the contrast loss function satisfies a preset first jump-out condition, thereby obtaining an adhesion detection model.
The initial model can thus be trained on the differences within bright-dark image pairs to obtain an adhesion detection model, and the adhesion detection model can detect whether adhesion exists in an eye image captured by anterior segment optical coherence tomography.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present disclosure may be applied;
FIG. 2 is a flow chart of a method for training an adhesion detection model according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another adhesion detection model training method provided by the embodiments of the present disclosure;
FIG. 4 is a flowchart of yet another adhesion detection model training method provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of a method for training an adhesion detection model in an application scenario according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a structure of an adhesion detection model training apparatus according to an embodiment of the present disclosure;
fig. 7 is a block diagram of an adhesion detection apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device suitable for executing an adhesion detection model training method and/or an adhesion detection method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict.
In the technical solution of the present disclosure, the acquisition, storage, application and other processing of the personal information of the users involved all comply with the relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the adhesion detection model training method, the adhesion detection method, and the corresponding apparatus, electronic device, and computer-readable storage medium of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 and the server 105 may be installed with various applications for implementing information communication therebetween, such as a model training application, an adhesion detection application, an image recognition application, and the like.
The terminal apparatuses 101, 102, 103 and the server 105 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like; when the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above, and they may be implemented as multiple software or software modules, or may be implemented as a single software or software module, and are not limited in this respect. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server; when the server is software, the server may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not limited herein.
The server 105 may provide various services through various built-in applications. Taking as an example an adhesion detection application that provides eye adhesion detection for a user, when the server 105 runs the adhesion detection application the following effect can be achieved: after obtaining an eye image to be detected, uploaded by the user and captured by anterior segment optical coherence tomography, an adhesion detection model is called to detect whether adhesion exists in the eye image to be detected.
The adhesion detection model can be obtained by training a model training application built in the server 105 according to the following steps: acquiring a bright-dark image pair shot for the same eye object by a front-segment optical coherence tomography technology and a labeling result of whether the eye object is adhered or not; wherein, the bright image in the bright-dark image pair is obtained by shooting in a bright environment, and the dark image is obtained by shooting in a dark environment; respectively extracting the light features of the light image and the dark features of the dark image by using a feature extraction module of the initial model; inputting the bright features and the dark features into a comparison module of the initial model, taking the labeling result as the output of the initial model, determining a contrast loss function for supervising and training the initial model based on the difference features between the bright features and the dark features, and then training the initial model; and responding to the fact that the comparison loss function meets a preset first jumping-out condition, and outputting the initial model trained to the current state as an adhesion detection model.
Since training the adhesion detection model requires considerable computing resources and computing power, the adhesion detection model training method provided in the following embodiments of the present application is generally executed by the server 105, which has stronger computing power and more computing resources; accordingly, the adhesion detection model training apparatus is generally disposed in the server 105. However, it should be noted that when the terminal devices 101, 102, and 103 also have sufficient computing power and computing resources, they may complete the above operations otherwise performed by the server 105 through the adhesion detection model training application installed on them, and output the same result as the server 105. Accordingly, the adhesion detection model training apparatus may also be provided in the terminal devices 101, 102, 103. In that case, the exemplary system architecture 100 may also omit the server 105 and the network 104.
Of course, the server used to train the adhesion detection model may differ from the server used to invoke the trained adhesion detection model. In particular, a lightweight adhesion detection model suitable for embedding in the terminal devices 101, 102, and 103 may also be derived from the adhesion detection model trained on the server 105 by model distillation; that is, depending on the recognition accuracy actually required, either the lightweight adhesion detection model on the terminal devices 101, 102, and 103 or the more complex adhesion detection model on the server 105 may be flexibly selected and used.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring to fig. 2, fig. 2 is a flowchart of an adhesion detection model training method according to an embodiment of the disclosure, where the process 200 includes the following steps:
step 201, acquiring a bright-dark image pair shot for the same eye object by the anterior optical coherence tomography technology, and a labeling result of whether the eye object is adhered or not.
In this embodiment, an executing body of the adhesion detection model training method (for example, the server 105 shown in fig. 1) acquires a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography, the pair consisting of at least one bright image captured of the eye object in a bright environment and at least one dark image captured of the eye object in a dark environment, and acquires an annotation result indicating whether iris adhesion is present in the eye object.
The bright environment and the dark environment can be determined from the brightness, lighting conditions, and so on of the environment, or from the pupil state of the eye object; in the latter case, whether an image of the eye object is a bright image or a dark image is determined from the relationship between the pupil dilation or contraction ratio of the eye object and a preset threshold ratio.
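As an illustration of the pupil-based criterion, the following sketch labels a capture as bright or dark by comparing the pupil dilation ratio with a preset threshold. The baseline diameter, the threshold value, and the function name are assumptions for illustration only; the text merely states that a preset threshold ratio is compared with the dilation or contraction ratio.

```python
def label_lighting(pupil_diameter_mm: float,
                   baseline_diameter_mm: float,
                   ratio_threshold: float = 1.2) -> str:
    """Label an AS-OCT capture as 'dark' when the pupil has dilated beyond a
    preset ratio of a baseline diameter, and as 'bright' otherwise.
    The 1.2 threshold and the baseline are illustrative assumptions."""
    ratio = pupil_diameter_mm / baseline_diameter_mm
    return "dark" if ratio > ratio_threshold else "bright"


# Example: a pupil dilated from 3.0 mm to 4.5 mm would be labelled 'dark'.
print(label_lighting(4.5, 3.0))
```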
It should be noted that the bright-dark image pair may be directly obtained from a local storage device by the execution subject, or may be obtained from a non-local storage device (for example, terminal devices 101, 102, 103 shown in fig. 1). The local storage device may be a data storage module arranged in the execution main body, such as a server hard disk, in which case the bright-dark image pair can be quickly read locally; the non-local storage device may also be any other electronic device arranged to store data, such as some user terminals, in which case the executing entity may obtain the desired bright-dark image pair by sending an acquisition command to the electronic device.
Step 202, extracting the bright features of the bright image and the dark features of the dark image, respectively, using the feature extraction module of the initial model.
In this embodiment, after a pair of bright and dark images captured of the same eye object by the anterior optical coherence tomography is acquired, a bright image and a dark image in the pair of bright and dark images are input to the feature extraction module of the initial model, respectively, and a bright feature corresponding to the bright image and a dark feature corresponding to the dark image are extracted by the feature extraction module.
The feature extraction module of the initial model can be constructed from any neural network or algorithm capable of extracting features from an image, such as a three-dimensional deep network or a convolutional neural network. It extracts bright features and dark features of the same dimensionality, so that the comparison module can subsequently compute the differences between them.
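A minimal sketch of such a feature extraction module is given below, assuming a PyTorch convolutional backbone whose weights are shared between the bright and dark images; the architecture, feature dimension, and names are illustrative and not specified by the patent.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Shared-weight convolutional backbone that maps an AS-OCT image to a
    fixed-dimensional feature vector (illustrative architecture)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x).flatten(1)   # (N, 64)
        return self.proj(h)               # (N, feat_dim)


# The same module (shared weights) produces bright and dark features of
# identical dimensionality, so they can be compared later.
extractor = FeatureExtractor()
bright_img = torch.randn(4, 1, 256, 256)  # placeholder bright images
dark_img = torch.randn(4, 1, 256, 256)    # placeholder dark images
bright_feat = extractor(bright_img)
dark_feat = extractor(dark_img)
```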
Step 203, inputting the bright features and the dark features into the comparison module of the initial model, taking the labeling result as the output of the initial model, determining a contrast loss function for supervised training of the initial model based on the difference features between the bright features and the dark features, and then training the initial model.
In this embodiment, after extraction of the bright features and the dark features is completed, they are input into the comparison module of the initial model, the labeling result obtained in step 201 (indicating whether adhesion is present in the eye object) is taken as the output of the initial model, a contrast loss function for supervised training of the initial model is determined based on the difference features between the bright features and the dark features, and the initial model is then trained, with the contrast loss function monitored to track the training progress of the initial model.
The comparison module includes a fully connected layer and a rectified linear unit (ReLU) activation, and performs difference comparison after down-sampling and non-linear processing of the input bright and dark features.
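A hedged sketch of a comparison module of this shape follows: a fully connected layer with ReLU down-samples each feature vector, the difference between the processed bright and dark features is taken, and a linear head maps the difference feature to the two annotation classes. The layer sizes and the use of a simple subtraction for the difference are assumptions.

```python
import torch
import torch.nn as nn

class ComparisonModule(nn.Module):
    """Down-samples bright and dark features with a fully connected layer +
    ReLU, then classifies their difference feature."""
    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.reduce = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, 2)  # adhesion / no adhesion

    def forward(self, bright_feat: torch.Tensor, dark_feat: torch.Tensor) -> torch.Tensor:
        diff = self.reduce(bright_feat) - self.reduce(dark_feat)  # difference feature
        return self.head(diff)


compare = ComparisonModule()
logits = compare(torch.randn(4, 128), torch.randn(4, 128))
```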
Step 204, in response to the contrast loss function satisfying a preset first jump-out condition, outputting the initial model trained to the current state as the adhesion detection model.
The adhesion detection model training method provided by the embodiments of the disclosure iteratively trains the initial model on different bright-dark image pairs. When the contrast loss function used to supervise training, which is determined from the difference features between the bright features and the dark features, comes to satisfy the first jump-out condition, training of the initial model is complete and the adhesion detection model is obtained. The adhesion detection model can detect whether adhesion exists in an eye image captured by anterior segment optical coherence tomography.
In some optional implementations of this embodiment, in order to improve the training quality of the adhesion detection model, the initial model may additionally be guided, during training, by the relationship between specific information in the bright image and specific information in the dark image, so that the model attends to the important information required. To this end, the acquired bright image and dark image may be further subjected to operations such as amplification and cropping, and after the local information in the bright image and the dark image is obtained, the adhesion detection model is trained on it synchronously, so that more high-value information is taken into account.
Specifically, the bright image and the dark image are input to an image amplification module of the initial model to generate an amplified bright image and an amplified dark image; the amplified bright feature of the amplified bright image and the amplified dark feature of the amplified dark image are extracted; the amplified bright feature and the amplified dark feature are input to an amplified comparison module of the initial model; the labeling result is taken as the output of the initial model; a local contrast loss function for supervised training of the initial model is determined based on the difference feature between the amplified bright feature and the amplified dark feature; and the initial model is then trained. The exit condition is adjusted accordingly: the initial model trained to the current state is output as the adhesion detection model only when the current contrast loss function satisfies the preset first jump-out condition and the current local contrast loss function satisfies a preset third jump-out condition, so that the influence of both the high-value local features and the global features on the adhesion detection model is taken into account and the quality of the adhesion detection model is improved.
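A possible sketch of the image amplification step is shown below, assuming it is implemented as a centre crop followed by resizing back to the original resolution so that only local detail remains. The crop fraction and the choice of the centre region are assumptions; the patent only states that the images are amplified and cropped to obtain local information.

```python
import torch
import torch.nn.functional as F

def amplify(images: torch.Tensor, crop_frac: float = 0.5) -> torch.Tensor:
    """Crop a central region of each image and resize it back to the original
    resolution, keeping only local detail (e.g. an illustrative region of
    interest); the centre crop is an assumed implementation choice."""
    n, c, h, w = images.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    crops = images[:, :, top:top + ch, left:left + cw]
    return F.interpolate(crops, size=(h, w), mode="bilinear", align_corners=False)


amplified_bright = amplify(torch.randn(4, 1, 256, 256))
```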
Referring to fig. 3, fig. 3 shows another adhesion detection model training method provided by an embodiment of the present disclosure. On the basis of the process 200 shown in fig. 2, a spliced feature obtained by splicing the bright feature and the dark feature is additionally used to train the initial model, so as to strengthen the detection capability of the resulting adhesion detection model for full-scale, global features and obtain an adhesion detection model with stronger detection capability. The process 300 includes the following steps:
step 301, acquiring a bright-dark image pair shot for the same eye object by the anterior optical coherence tomography and a labeling result indicating whether the eye object is adhered.
Step 302, extracting the bright feature of the bright image and the dark feature of the dark image, respectively, using the feature extraction module of the initial model.
Step 303, inputting the bright features and the dark features into the comparison module of the initial model, taking the labeling result as the output of the initial model, determining a contrast loss function for supervised training of the initial model based on the difference features between the bright features and the dark features, and then training the initial model.
The above steps 301 to 303 are similar to the steps 201 to 203 in the embodiment shown in fig. 2, and reference may be made to the description of corresponding parts in the embodiment shown in fig. 2, which is not repeated herein.
Step 304, inputting the spliced features obtained by splicing the bright features and the dark features into the classification module of the initial model, taking the labeling result as the output of the initial model, determining a classification loss function for supervised training of the initial model based on the spliced features, and then training the initial model.
In this embodiment, after the bright features and the dark features are obtained, they are spliced to obtain spliced features (optionally, the feature sequences corresponding to the bright features and the dark features are concatenated end to end, so that the spliced features contain all of the information of both). The spliced features are input into the classification module of the initial model, the labeling result is taken as the output of the initial model, a classification loss function for supervised training of the initial model is determined based on the spliced features, and the initial model is then trained.
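A minimal sketch of the splicing and classification step, under the same illustrative feature dimension as the sketches above: the bright and dark feature vectors are concatenated end to end and passed through a small classification head supervised by a cross-entropy classification loss. The head architecture is an assumption.

```python
import torch
import torch.nn as nn

class ClassificationModule(nn.Module):
    """Classifies the spliced (concatenated) bright + dark feature vector;
    illustrative head, not specified by the patent."""
    def __init__(self, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, bright_feat: torch.Tensor, dark_feat: torch.Tensor) -> torch.Tensor:
        spliced = torch.cat([bright_feat, dark_feat], dim=1)  # end-to-end splice
        return self.head(spliced)


classify = ClassificationModule()
class_logits = classify(torch.randn(4, 128), torch.randn(4, 128))
labels = torch.tensor([0, 1, 0, 1])            # annotation results
cls_loss = nn.CrossEntropyLoss()(class_logits, labels)
```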
Step 305, in response to that the current comparison loss function meets a preset first jump-out condition and the current classification loss function meets a preset second jump-out condition, outputting the initial model trained to the current as the adhesion detection model.
In this embodiment, similarly to the embodiment corresponding to fig. 2, the initial model may be trained iteratively on different bright-dark image pairs. When the contrast loss function satisfies the first jump-out condition and the current classification loss function satisfies the preset second jump-out condition, training of the initial model is complete and the adhesion detection model is obtained. The resulting adhesion detection model judges whether adhesion exists in the eye image to be detected based both on the contrast between bright and dark features and on the global information in the spliced features, and can detect whether adhesion exists in an eye image captured by anterior segment optical coherence tomography.
It should be understood that, in this embodiment, the execution order of step 303 and step 304 is only an example for ease of understanding and does not limit the actual execution order; the order may be adaptively adjusted according to the actual situation (for example, step 304 may be executed first and step 303 afterwards). In some alternative embodiments, step 303 and step 304 may be combined and executed as a single step, so that the initial model is trained in both ways simultaneously and the efficiency and quality of model training are improved.
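A hedged sketch of such a combined training step follows, optimizing the contrast supervision on the difference features and the classification supervision on the spliced features in one pass and reusing the illustrative modules sketched above; the loss weights, the optimizer, and the use of cross-entropy for both branches are assumptions.

```python
import torch

def train_step(extractor, compare, classify, optimizer,
               bright_img, dark_img, labels,
               contrast_weight: float = 1.0, cls_weight: float = 1.0):
    """One combined update: contrast loss from the bright/dark difference
    features plus classification loss from the spliced features, both
    supervised by the annotation result (labels)."""
    bright_feat = extractor(bright_img)
    dark_feat = extractor(dark_img)

    contrast_logits = compare(bright_feat, dark_feat)   # difference branch
    cls_logits = classify(bright_feat, dark_feat)       # spliced branch

    ce = torch.nn.CrossEntropyLoss()
    contrast_loss = ce(contrast_logits, labels)
    cls_loss = ce(cls_logits, labels)

    loss = contrast_weight * contrast_loss + cls_weight * cls_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return contrast_loss.item(), cls_loss.item()
```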
In some embodiments, the attention of the obtained adhesion detection model may also be adjusted by setting an attenuation mechanism for the loss functions, so as to balance the weights of the contrast loss function and the classification loss function. For example, the contrast loss function l may be attenuated as decay(l), where decay(·) is the attenuation strategy, t is the current training iteration, and T_max is the maximum number of training iterations. This attenuation strategy gradually reduces the weight of the contrast loss function as training deepens, so as to raise the relative weight of the classification loss function during training of the adhesion detection model.
Referring to fig. 4, on the basis of the embodiments corresponding to fig. 2 or fig. 3, fig. 4 shows a further adhesion detection model training method according to an embodiment of the present disclosure. It addresses the sample imbalance, and the resulting low training quality of the adhesion detection model, that arises when the number of image pairs labeled as adhesion samples in the training image set differs greatly from the number of image pairs labeled as non-adhesion samples. The process 400 includes the following steps:
step 401, acquiring a bright-dark image pair shot for the same eye object by the anterior optical coherence tomography and a labeling result indicating whether the eye object is adhered.
Step 402, extracting the bright features of the bright image and the dark features of the dark image, respectively, using the feature extraction module of the initial model.
Step 403, inputting the bright features and the dark features into the comparison module of the initial model, taking the labeling result as the output of the initial model, determining a contrast loss function for supervised training of the initial model based on the difference features between the bright features and the dark features, and then training the initial model.
The above steps 401 to 403 are similar to the steps 201 to 203 in the embodiment shown in fig. 2, and reference may be made to the description of corresponding parts in the embodiment shown in fig. 2, which is not repeated herein.
Step 404, in response to that a difference between the number of pairs of images labeled as adhesion sample images and the number of pairs of images labeled as non-adhesion sample images in the training image set is greater than a preset threshold, generating a loss function weight between a first contrast loss function obtained based on the adhesion sample images and a second contrast loss function obtained based on the non-adhesion sample images.
In this embodiment, the adjustment of the loss function weight between the first contrast loss function, obtained from the adhesion sample image pairs, and the second contrast loss function, obtained from the non-adhesion sample image pairs, may be based on a re-weighted contrastive loss in which s is a sample-type indicator for the image pair, α controls the relative weight of the adhesion and non-adhesion samples, d denotes the Euclidean distance between the matched features of the same image pair (d_n and d_p correspond to non-adhesion samples and adhesion samples, respectively), and M is a distance margin that controls the similarity of normal samples. β_p and β_n are focusing factors for the adhesion samples and the non-adhesion samples, respectively, used to re-balance difficult samples, and are given by:
β_n = 1 - sigmoid(d_n);  β_p = sigmoid(d_p) - 0.5
The focusing factor β_* measures the amount of information contained in a data pair (β_n ∈ (0, 0.5], β_p ∈ [0, 0.5)). The sigmoid function first scales d_* into the range [0.5, 1), and the amount of information of each image pair is then measured by the difference between sigmoid(d_*) and the optimal value (the optimal value for a similar data pair is 0.5, and that for a dissimilar data pair is 1). Specifically: (1) when a similar pair (an adhesion sample) is a difficult sample, sigmoid(d_p) is close to 1, i.e. β_p is close to 0.5, which raises the loss weight of the adhesion sample; (2) when a dissimilar pair (a non-adhesion sample) is a difficult sample, sigmoid(d_n) is close to 0.5, and the corresponding β_n is then close to 0.5, which raises the loss weight of the non-adhesion sample.
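The focusing factors themselves follow directly from the definitions above; how they enter the final weighted loss is shown here only as a plausible margin-based combination, since the full formula appears in the original only as an image. The function names, the value of alpha, and the margin M are assumptions.

```python
import torch

def focusing_factors(d: torch.Tensor, is_adhesion: torch.Tensor) -> torch.Tensor:
    """beta_p = sigmoid(d_p) - 0.5 for adhesion (similar) pairs and
    beta_n = 1 - sigmoid(d_n) for non-adhesion (dissimilar) pairs."""
    s = torch.sigmoid(d)
    return torch.where(is_adhesion, s - 0.5, 1.0 - s)

def reweighted_contrastive_loss(d: torch.Tensor, is_adhesion: torch.Tensor,
                                alpha: float = 0.5, margin: float = 2.0) -> torch.Tensor:
    """Illustrative combination: adhesion pairs are pulled together, non-adhesion
    pairs are pushed beyond the margin M, each term re-weighted by its focusing
    factor and by alpha; the exact combination is an assumption."""
    beta = focusing_factors(d, is_adhesion)
    pos = alpha * beta * d.pow(2)
    neg = (1.0 - alpha) * beta * torch.clamp(margin - d, min=0.0).pow(2)
    return torch.where(is_adhesion, pos, neg).mean()


# Example with two image pairs: the first labelled adhesion, the second not.
d = torch.tensor([0.8, 1.5])
is_adhesion = torch.tensor([True, False])
print(reweighted_contrastive_loss(d, is_adhesion))
```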
Step 405 generates an optimized contrast loss function based on the first contrast loss function, the second contrast loss function, and the loss function weights.
In this embodiment, an optimized contrast loss function is generated based on the loss function weights determined in step 404 and the first and second contrast loss functions.
Step 406, in response to the current optimized contrast loss function satisfying a preset fourth jump-out condition, outputting the initial model trained to the current state as the adhesion detection model.
In this embodiment, the optimized contrast loss function generated in step 405 is monitored, and when it comes to satisfy a preset fourth jump-out condition, training of the initial model is completed and the adhesion detection model is obtained; the adhesion detection model can detect whether adhesion exists in an eye image captured by anterior segment optical coherence tomography.
In some optional implementations of this embodiment, when the initial model is trained based on the local contrast loss function, the local contrast loss function may also be optimized using the loss function weight, so as to improve the quality of the resulting adhesion detection model. In this case the adhesion detection model training method further includes: adjusting the local contrast loss function based on the loss function weight to generate an optimized local contrast loss function. The step of outputting the initial model trained to the current state as the adhesion detection model in response to the contrast loss function satisfying a preset first jump-out condition then becomes: in response to the current optimized contrast loss function satisfying the fourth jump-out condition and the current optimized local contrast loss function satisfying a preset fifth jump-out condition, outputting the initial model trained to the current state as the adhesion detection model.
Specifically, when the initial model is trained by using the local contrast loss function, the local contrast loss function can be adjusted based on the weight of the loss function, so as to improve the quality of the obtained adhesion detection model.
Further, in some embodiments, the optimized local contrast loss function and the optimized contrast loss function may be aggregated into a single supervision term, so that the locally optimized contrast loss function and the optimized contrast loss function are synchronized during training.
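The aggregation itself can be as simple as a sum of the two optimized terms; the plain sum below is an assumption, since the original expresses the aggregation only as an image.

```python
def total_optimized_loss(optimized_contrast_loss, optimized_local_contrast_loss):
    """Aggregate the optimized global and local contrast losses into a single
    supervision term; a plain sum is assumed here."""
    return optimized_contrast_loss + optimized_local_contrast_loss
```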
Each of the above embodiments explains, from a different aspect, how the adhesion detection model is trained. To illustrate the effect of the trained adhesion detection model in an actual use scenario, the present disclosure also provides a scheme for solving a practical problem with the trained model. The adhesion detection method includes the following steps: acquiring an eye image to be detected, the eye image to be detected being an eye image obtained by anterior segment optical coherence tomography; and calling the adhesion detection model to carry out adhesion detection on the eye image to be detected, where the adhesion detection model is obtained according to the adhesion detection model training method described in any one of the embodiments of fig. 2 to fig. 4.
For further understanding, the present disclosure further provides a specific implementation scheme in combination with a specific application scenario, and the process may specifically refer to fig. 5:
the method comprises the steps of obtaining a bright-dark image pair, extracting bright features of a bright image and dark features of a dark image in the bright-dark image pair, inputting the bright features and the dark features of the features into a contrast module and an amplification module of an initial model respectively, and inputting the spliced bright features and dark features into a classification module of the initial model.
With the labeling result as the output of the initial model, a contrast loss function for supervised training of the initial model is determined based on the difference feature between the bright feature and the dark feature, a classification loss function for supervised training of the initial model is determined based on the spliced feature, and a local contrast loss function for supervised training of the initial model is determined based on the difference feature between the amplified bright feature and the amplified dark feature; training of the initial model then begins.
When the contrast loss function satisfies the preset first jump-out condition, the classification loss function satisfies the preset second jump-out condition, and the local contrast loss function satisfies the preset third jump-out condition, training ends and the initial model trained to the current state is output as the adhesion detection model.
Subsequently, when the eye image to be detected is obtained, the adhesion detection model is called to carry out adhesion detection on the eye image to be detected.
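At inference time the flow is much simpler. The sketch below assumes the trained model is a PyTorch module whose second output class denotes adhesion and that the eye image to be detected has already been preprocessed into the tensor shape the model expects; these details and all names are assumptions.

```python
import torch

def detect_adhesion(model: torch.nn.Module, eye_image: torch.Tensor) -> bool:
    """Run the trained adhesion detection model on one AS-OCT eye image to be
    detected and report whether adhesion is predicted (class index 1 assumed)."""
    model.eval()
    with torch.no_grad():
        logits = model(eye_image.unsqueeze(0))          # add a batch dimension
        return bool(logits.argmax(dim=1).item() == 1)   # 1 = adhesion
```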
With further reference to fig. 6 and 7, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an adhesion detection model training apparatus and an embodiment of an adhesion detection apparatus, respectively, where the embodiment of the adhesion detection model training apparatus corresponds to the embodiment of the adhesion detection model training method shown in fig. 2, and the embodiment of the adhesion detection apparatus corresponds to the embodiment of the adhesion detection method. The device can be applied to various electronic equipment.
As shown in fig. 6, the adhesion detection model training apparatus 600 of the present embodiment may include: a training sample acquisition unit 601, a feature extraction unit 602, a contrast feature training unit 603, and a model generation unit 604. The training sample acquisition unit 601 is configured to acquire a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography and a labeling result indicating whether adhesion is present in the eye object, where the bright image in the bright-dark image pair is captured in a bright environment and the dark image is captured in a dark environment; the feature extraction unit 602 is configured to extract the bright feature of the bright image and the dark feature of the dark image, respectively, using the feature extraction module of the initial model; the contrast feature training unit 603 is configured to input the bright feature and the dark feature into the comparison module of the initial model, take the labeling result as the output of the initial model, determine a contrast loss function for supervised training of the initial model based on the difference feature between the bright feature and the dark feature, and then train the initial model; and the model generation unit 604 is configured to output the initial model trained to the current state as the adhesion detection model in response to the contrast loss function satisfying a preset first jump-out condition.
In this embodiment, for the specific processing of the training sample acquisition unit 601, the feature extraction unit 602, the contrast feature training unit 603, and the model generation unit 604 in the adhesion detection model training apparatus 600, and the technical effects thereof, reference may be made to the related descriptions of steps 201 to 204 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the adhesion detection model training apparatus 600 further includes: the classification characteristic training unit is configured to input splicing characteristics obtained after splicing the light characteristics and the dark characteristics into a classification module of the initial model, take the labeling result as the output of the initial model, determine a classification loss function for supervising and training the initial model based on the splicing characteristics, and train the initial model; the model generating unit 604 is further configured to output the initial model trained to be current as the adhesion detection model in response to the current comparison loss function satisfying a preset first jump-out condition and the current classification loss function satisfying a preset second jump-out condition.
In some optional implementations of this embodiment, the adhesion detection model training apparatus 600 further includes: an image magnification unit configured to input the bright image and the dark image to an image magnification module of the initial model, generating an enlarged bright image and an enlarged dark image; an enlarged feature extraction unit configured to extract an enlarged light feature of the enlarged light image and an enlarged dark feature of the enlarged dark image; the amplified characteristic training unit is configured to input the amplified light characteristic and the amplified dark characteristic to an amplified comparison module of the initial model, take the labeling result as the output of the initial model, and train the initial model after a local contrast loss function for supervising and training the initial model is determined based on the difference characteristic between the amplified light characteristic and the amplified dark characteristic; the model generating unit 604 is further configured to, in response to the current contrast loss function satisfying a preset first skip condition and the current local contrast loss function satisfying a preset third skip condition, output the initial model trained to the current as the adhesion detection model.
In some optional implementations of this embodiment, the adhesion detection model training apparatus 600 further includes: a loss function weight generating unit configured to generate a loss function weight between a first contrast loss function obtained based on the adhesion sample image pairs and a second contrast loss function obtained based on the non-adhesion sample image pairs, in response to the difference between the number of image pairs labeled as adhesion samples and the number of image pairs labeled as non-adhesion samples in the training image set being greater than a preset threshold, where a sample image pair includes at least one bright image and at least one dark image captured of the same eye object; a contrast loss function optimization unit configured to generate an optimized contrast loss function based on the first contrast loss function, the second contrast loss function, and the loss function weight; and the model generating unit 604 is further configured to, in response to the current optimized contrast loss function satisfying a preset fourth jump-out condition, output the initial model trained to the current state as the adhesion detection model.
In some optional implementations of this embodiment, the adhesion detection model training apparatus 600 further includes: a local contrast loss function optimization unit configured to adjust the local contrast loss function based on the loss function weight to generate an optimized local contrast loss function; and the model generating unit 604 is further configured to, in response to the current optimized contrast loss function satisfying the fourth skip condition and the current optimized local contrast loss function satisfying a preset fifth skip condition, output the initial model trained to the current as the adhesion detection model.
As shown in fig. 7, the adhesion detecting apparatus 700 of the present embodiment may include: an eye image acquisition unit to be detected 701 and a calling model detection unit 702. The eye image acquisition unit 701 to be detected is configured to acquire an eye image to be detected; a calling model detecting unit 702 configured to call an adhesion detection model to detect the eye image to be detected; wherein the adhesion detection model is obtained according to the adhesion detection model training device 600.
In the present embodiment, in the adhesion detection apparatus 700: the specific processing of the eye image to be detected acquisition unit 701 and the calling model detection unit 702 and the technical effects brought by the processing can respectively correspond to the related descriptions in the method embodiments, and are not described herein again.
The present embodiment is the apparatus embodiment corresponding to the above method embodiments. The adhesion detection model training apparatus and the adhesion detection apparatus provided by this embodiment can train an initial model based on the differences between bright and dark images to obtain an adhesion detection model, and the adhesion detection model can detect whether adhesion exists in an eye image captured by anterior segment optical coherence tomography.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to implement the adhesion detection model training method or the adhesion detection method described in any of the above embodiments when executed.
According to an embodiment of the present disclosure, a readable storage medium is further provided, where the readable storage medium stores computer instructions, and the computer instructions are configured to enable a computer to implement the adhesion detection model training method or the adhesion detection method described in any of the above embodiments when executed.
The embodiment of the disclosure provides a computer program product, and when being executed by a processor, the computer program can implement the adhesion detection model training method or the adhesion detection method described in any one of the above embodiments.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 801 executes the respective methods and processes described above, such as the adhesion detection model training method and/or the adhesion detection method. For example, in some embodiments, the adhesion detection model training method and/or the adhesion detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto device 800 via ROM 802 and/or communications unit 809. When loaded into RAM 803 and executed by computing unit 801, a computer program may perform one or more steps of the adhesion detection model training method and/or the adhesion detection method described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the adhesion detection model training method and/or the adhesion detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in a cloud computing service system and addresses the drawbacks of high management difficulty and weak service scalability in conventional physical hosts and Virtual Private Server (VPS) services.
According to the technical solution of the present disclosure, an initial model can be trained based on the differences between bright and dark image pairs to obtain an adhesion detection model, and the adhesion detection model can then be used to detect whether adhesion exists in an eye image captured by anterior segment optical coherence tomography.
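By way of illustration only, the following PyTorch-style sketch shows one possible way to realize this idea: a shared feature extraction module encodes the bright and dark images, and a contrastive head scores their difference features against the adhesion label. All module names, layer sizes, and the binary supervision are assumptions made for the sketch, not the disclosed implementation.

```python
# Minimal sketch: supervise adhesion detection with the difference between
# bright and dark AS-OCT features (hypothetical names, shapes, and layers).
import torch
import torch.nn as nn

class AdhesionInitialModel(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Shared feature extraction module for bright and dark images.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Contrastive head: maps difference features to an adhesion score.
        self.contrast_head = nn.Linear(feat_dim, 1)

    def forward(self, bright: torch.Tensor, dark: torch.Tensor) -> torch.Tensor:
        bright_feat = self.encoder(bright)
        dark_feat = self.encoder(dark)
        diff_feat = bright_feat - dark_feat          # difference features
        return self.contrast_head(diff_feat).squeeze(-1)

model = AdhesionInitialModel()
criterion = nn.BCEWithLogitsLoss()                   # supervision on the difference features
bright = torch.randn(4, 1, 128, 128)                 # dummy bright images
dark = torch.randn(4, 1, 128, 128)                   # dummy dark images
label = torch.tensor([1.0, 0.0, 1.0, 0.0])           # 1 = adhesion, 0 = no adhesion
loss = criterion(model(bright, dark), label)
loss.backward()
```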
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (10)

1. An adhesion detection model training method, comprising:
acquiring a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography and a labeling result indicating whether the eye object is adhered; wherein the bright image in the bright-dark image pair is captured in a bright environment, and the dark image is captured in a dark environment;
generating a loss function weight between a first contrastive loss function obtained from adhesive sample image pairs and a second contrastive loss function obtained from non-adhesive sample image pairs, in response to the difference between the number of image pairs labeled as adhesive sample image pairs and the number labeled as non-adhesive sample image pairs in a training image set being greater than a preset threshold; wherein a sample image pair comprises at least one bright image and at least one dark image captured of the same eye object;
generating an optimized contrastive loss function based on the first contrastive loss function, the second contrastive loss function, and the loss function weight;
respectively extracting the bright features of the bright image and the dark features of the dark image by using a feature extraction module of an initial model;
inputting the bright image and the dark image into an image magnification module of the initial model to generate a magnified bright image and a magnified dark image;
extracting the magnified bright features of the magnified bright image and the magnified dark features of the magnified dark image;
inputting the bright features and the dark features into a contrastive module of the initial model, taking the labeling result as the output of the initial model, and training the initial model after determining, based on the difference features between the bright features and the dark features, a contrastive loss function for supervising the training of the initial model;
inputting the magnified bright features and the magnified dark features into a magnified contrastive module of the initial model, taking the labeling result as the output of the initial model, and training the initial model after determining, based on the difference features between the magnified bright features and the magnified dark features, a local contrastive loss function for supervising the training of the initial model;
and outputting the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying a preset first stopping condition, the current local contrastive loss function satisfying a preset third stopping condition, and the current optimized contrastive loss function satisfying a preset fourth stopping condition.
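As a hedged illustration of the imbalance handling recited in claim 1, the sketch below shows one plausible way to derive a loss function weight from the counts of adhesive and non-adhesive sample image pairs and to combine the first and second contrastive loss functions into an optimized contrastive loss function. The claim does not fix a specific formula, so the weighting rule and the combination are assumptions.

```python
# Hypothetical sketch of the loss function weight and the optimized contrastive
# loss for imbalanced adhesive / non-adhesive sample image pairs.
def loss_function_weight(n_adhesive: int, n_non_adhesive: int, threshold: int) -> float:
    """Return a weight only when the count gap exceeds the preset threshold."""
    if abs(n_adhesive - n_non_adhesive) <= threshold:
        return 1.0
    # Up-weight the loss computed on the minority class of sample image pairs.
    return max(n_adhesive, n_non_adhesive) / max(1, min(n_adhesive, n_non_adhesive))

def optimized_contrastive_loss(first_loss: float, second_loss: float, weight: float) -> float:
    # first_loss: contrastive loss over adhesive pairs (assumed to be the minority here);
    # second_loss: contrastive loss over non-adhesive pairs.
    return weight * first_loss + second_loss  # one plausible weighted combination

w = loss_function_weight(n_adhesive=120, n_non_adhesive=480, threshold=50)   # -> 4.0
loss = optimized_contrastive_loss(first_loss=0.8, second_loss=0.3, weight=w)
```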
2. The method of claim 1, further comprising:
inputting the concatenated features obtained by concatenating the bright features and the dark features into a classification module of the initial model, taking the labeling result as the output of the initial model, and training the initial model after determining, based on the concatenated features, a classification loss function for supervising the training of the initial model; and
wherein outputting the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying the preset first stopping condition, the current local contrastive loss function satisfying the preset third stopping condition, and the current optimized contrastive loss function satisfying the preset fourth stopping condition comprises:
outputting the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying the preset first stopping condition, the current classification loss function satisfying a preset second stopping condition, the current local contrastive loss function satisfying the preset third stopping condition, and the current optimized contrastive loss function satisfying the preset fourth stopping condition.
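A minimal sketch of the classification branch described in claim 2 follows, assuming a PyTorch two-class head over the concatenated bright and dark features; the feature dimension, layer sizes, and loss choice are illustrative assumptions only.

```python
# Sketch of a classification branch over concatenated bright/dark features.
import torch
import torch.nn as nn

feat_dim = 128
classifier = nn.Sequential(
    nn.Linear(2 * feat_dim, 64), nn.ReLU(),
    nn.Linear(64, 2),                         # two classes: adhesion / no adhesion
)
bright_feat = torch.randn(4, feat_dim)        # dummy bright features
dark_feat = torch.randn(4, feat_dim)          # dummy dark features
spliced = torch.cat([bright_feat, dark_feat], dim=1)   # concatenated features
labels = torch.tensor([1, 0, 1, 0])           # labeling results
classification_loss = nn.CrossEntropyLoss()(classifier(spliced), labels)
```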
3. The method of claim 1, further comprising:
adjusting the local contrastive loss function based on the loss function weight to generate an optimized local contrastive loss function; and
wherein outputting the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying the preset first stopping condition, the current local contrastive loss function satisfying the preset third stopping condition, and the current optimized contrastive loss function satisfying the preset fourth stopping condition comprises:
outputting the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying the preset first stopping condition, the current optimized local contrastive loss function satisfying a preset fifth stopping condition, and the current optimized contrastive loss function satisfying the preset fourth stopping condition.
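One possible reading of claim 3 is to reuse the loss function weight on the local contrastive loss and then check all stopping conditions jointly. The sketch below models each stopping condition as a simple loss threshold, which is only an assumption; the claim does not specify the form of the conditions.

```python
# Hypothetical sketch: weight the local contrastive loss and model the
# stopping ("jump-out") conditions as loss thresholds.
def optimized_local_contrastive_loss(local_loss: float, weight: float) -> float:
    return weight * local_loss

def should_stop(contrastive_loss: float, optimized_local_loss: float,
                optimized_loss: float, eps1: float = 1e-3,
                eps5: float = 1e-3, eps4: float = 1e-3) -> bool:
    # First, fifth, and fourth stopping conditions with hypothetical thresholds.
    return (contrastive_loss < eps1
            and optimized_local_loss < eps5
            and optimized_loss < eps4)
```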
4. An adhesion detection method comprising:
acquiring an eye image to be detected; wherein the eye image to be detected is an eye image captured by anterior segment optical coherence tomography;
calling an adhesion detection model to perform adhesion detection on the eye image to be detected; wherein the adhesion detection model is obtained by the adhesion detection model training method of any one of claims 1 to 3.
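For the detection method of claim 4, a hypothetical inference sketch follows, reusing the AdhesionInitialModel class from the earlier sketch; the checkpoint name, the decision threshold, and the assumption that inference consumes a bright/dark pair (mirroring training) are all illustrative and not specified by the claim.

```python
# Hypothetical inference sketch for adhesion detection on an AS-OCT image pair.
import torch

model = AdhesionInitialModel()                                    # class from the earlier sketch
model.load_state_dict(torch.load("adhesion_detection_model.pt"))  # hypothetical checkpoint
model.eval()

def detect_adhesion(bright_img: torch.Tensor, dark_img: torch.Tensor) -> bool:
    """Return True if the trained model predicts adhesion for this pair."""
    with torch.no_grad():
        logit = model(bright_img.unsqueeze(0), dark_img.unsqueeze(0))
    return bool(torch.sigmoid(logit).item() > 0.5)
```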
5. An adhesion detection model training device, comprising:
a training sample acquisition unit configured to acquire a bright-dark image pair captured of the same eye object by anterior segment optical coherence tomography and a labeling result indicating whether the eye object is adhered; wherein the bright image in the bright-dark image pair is captured in a bright environment, and the dark image is captured in a dark environment;
a loss function weight generating unit configured to generate a loss function weight between a first contrastive loss function obtained from adhesive sample image pairs and a second contrastive loss function obtained from non-adhesive sample image pairs, in response to the difference between the number of image pairs labeled as adhesive sample image pairs and the number labeled as non-adhesive sample image pairs in the training image set being greater than a preset threshold; wherein a sample image pair comprises at least one bright image and at least one dark image captured of the same eye object;
a contrastive loss function optimization unit configured to generate an optimized contrastive loss function based on the first contrastive loss function, the second contrastive loss function, and the loss function weight;
a feature extraction unit configured to respectively extract, by using a feature extraction module of an initial model, the bright features of the bright image and the dark features of the dark image;
an image magnification unit configured to input the bright image and the dark image into an image magnification module of the initial model to generate a magnified bright image and a magnified dark image;
a magnified feature extraction unit configured to extract the magnified bright features of the magnified bright image and the magnified dark features of the magnified dark image;
a contrastive feature training unit configured to input the bright features and the dark features into a contrastive module of the initial model, take the labeling result as the output of the initial model, and train the initial model after determining, based on the difference features between the bright features and the dark features, a contrastive loss function for supervising the training of the initial model;
a magnified feature training unit configured to input the magnified bright features and the magnified dark features into a magnified contrastive module of the initial model, take the labeling result as the output of the initial model, and train the initial model after determining, based on the difference features between the magnified bright features and the magnified dark features, a local contrastive loss function for supervising the training of the initial model;
and a model generation unit configured to output the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying a preset first stopping condition, the current local contrastive loss function satisfying a preset third stopping condition, and the current optimized contrastive loss function satisfying a preset fourth stopping condition.
6. The apparatus of claim 5, further comprising:
a classification feature training unit configured to input the concatenated features obtained by concatenating the bright features and the dark features into a classification module of the initial model, take the labeling result as the output of the initial model, and train the initial model after determining, based on the concatenated features, a classification loss function for supervising the training of the initial model;
wherein the model generation unit is further configured to output the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying the preset first stopping condition, the current classification loss function satisfying a preset second stopping condition, the current local contrastive loss function satisfying the preset third stopping condition, and the current optimized contrastive loss function satisfying the preset fourth stopping condition.
7. The apparatus of claim 5, further comprising:
a local contrastive loss function optimization unit configured to adjust the local contrastive loss function based on the loss function weight to generate an optimized local contrastive loss function; and
wherein the model generation unit is further configured to output the currently trained initial model as the adhesion detection model in response to the contrastive loss function satisfying the preset first stopping condition, the current optimized local contrastive loss function satisfying a preset fifth stopping condition, and the current optimized contrastive loss function satisfying the preset fourth stopping condition.
8. An adhesion detection device comprising:
an eye image acquisition unit configured to acquire an eye image to be detected;
a model calling detection unit configured to call an adhesion detection model to perform adhesion detection on the eye image to be detected; wherein the adhesion detection model is obtained by the adhesion detection model training apparatus of any one of claims 5 to 7.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the adhesion detection model training method of any one of claims 1-3 or the adhesion detection method of claim 4.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the adhesion detection model training method of any one of claims 1-3 or the adhesion detection method of claim 4.
CN202111144365.8A 2021-09-28 2021-09-28 Adhesion detection model training method, adhesion detection method and related device Active CN113850203B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111144365.8A CN113850203B (en) 2021-09-28 2021-09-28 Adhesion detection model training method, adhesion detection method and related device
PCT/CN2022/121945 WO2023051563A1 (en) 2021-09-28 2022-09-28 Adhesion detection model training method, adhesion detection method, and related apparatuses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111144365.8A CN113850203B (en) 2021-09-28 2021-09-28 Adhesion detection model training method, adhesion detection method and related device

Publications (2)

Publication Number Publication Date
CN113850203A CN113850203A (en) 2021-12-28
CN113850203B (en) 2023-01-03

Family

ID=78980403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111144365.8A Active CN113850203B (en) 2021-09-28 2021-09-28 Adhesion detection model training method, adhesion detection method and related device

Country Status (2)

Country Link
CN (1) CN113850203B (en)
WO (1) WO2023051563A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850203B (en) * 2021-09-28 2023-01-03 Beijing Baidu Netcom Science and Technology Co., Ltd. Adhesion detection model training method, adhesion detection method and related device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8687866B2 (en) * 2011-03-04 2014-04-01 Nanyang Technological University Methods and systems for processing images of the anterior chamber angle of an eye
US10860888B2 (en) * 2018-01-05 2020-12-08 Whirlpool Corporation Detecting objects in images
CN109543526B (en) * 2018-10-19 2022-11-08 Xie Fei True and false facial paralysis recognition system based on depth difference characteristics
US10997720B2 (en) * 2019-08-21 2021-05-04 Ping An Technology (Shenzhen) Co., Ltd. Medical image classification method and related device
CN111260665B (en) * 2020-01-17 2022-01-21 Beijing Dajia Internet Information Technology Co., Ltd. Image segmentation model training method and device
CN113850203B (en) * 2021-09-28 2023-01-03 Beijing Baidu Netcom Science and Technology Co., Ltd. Adhesion detection model training method, adhesion detection method and related device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111067477A (en) * 2019-12-28 2020-04-28 Shandong First Medical University (Shandong Academy of Medical Sciences) Ophthalmic anterior segment chamber angle image analysis system
CN111862020A (en) * 2020-07-13 2020-10-30 Southern University of Science and Technology Method, device, server and storage medium for predicting physiological age of anterior segment
CN112712531A (en) * 2020-12-29 2021-04-27 Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences Anterior chamber angle classification method for AS-OCT images based on a convolutional recurrent neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Study of static anterior segment parameters and their dynamic changes under bright and dark light using anterior segment OCT; Wu Ruoxin et al.; Ophthalmology (《眼科》); 2017-07-25 (No. 04); full text *

Also Published As

Publication number Publication date
CN113850203A (en) 2021-12-28
WO2023051563A1 (en) 2023-04-06

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20211223

Address after: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

Applicant after: ZHONGSHAN OPHTHALMIC CENTER, SUN YAT-SEN University

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant