CN117853424A - Training method, identification method, system, equipment and medium for coherent topology model - Google Patents


Info

Publication number: CN117853424A
Application number: CN202311749500.0A
Authority: CN (China)
Legal status: Pending
Prior art keywords: coherent, feature, topology, training set, image
Original language: Chinese (zh)
Inventors: 汪卫星, 陈洪, 林铭炜
Current assignees: Fujian Normal University; Guangdong Polytechnic Institute
Application filed by Fujian Normal University and Guangdong Polytechnic Institute


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a training method, an identification method, a system, a device and a medium for a coherent topology model. The training method acquires a corneal nerve data set; performs labeling preprocessing and topology preprocessing on the corneal nerve data set to obtain a labeled image training set and a topology image training set, respectively; performs segmentation feature extraction on the labeled image training set to obtain a labeling feature set, and coherent feature extraction on the topology image training set to obtain a coherent topology feature set; linearly fuses the labeling feature set according to the coherent topology feature set to obtain a feature training set; and updates the parameters of an initialized coherent topology model on the feature training set to obtain a trained coherent topology model. The training method effectively improves the accuracy and precision of corneal nerve image segmentation, effectively reduces the loss of fine corneal nerve structure, and preserves the topological consistency of the corneal nerves. The invention relates to the technical field of image recognition.

Description

Training method, identification method, system, equipment and medium for coherent topology model
Technical Field
The invention relates to the technical field of image recognition, and in particular to a training method, an identification method, a system, a device and a medium for a coherent topology model.
Background
The cornea contains abundant sensory nerves, which play an important role in regulating corneal sensation, epithelial integrity, cell proliferation, wound healing and the like. Normal human corneal nerve fibers are curvilinear structures with small curvature and consistent orientation, and these curvilinear structures usually show many branches. Clinically, corneal nerve fiber density, corneal nerve fiber length, corneal nerve branch density and corneal nerve tortuosity are important indexes for judging whether the corneal nerves are normal, but such a judgment is difficult to make directly from a raw corneal nerve image; the corneal nerves therefore need to be segmented to make the judgment easier.
At present, corneal nerves are usually segmented with machine-learning-based methods. However, because corneal nerve images have low contrast and contain a large amount of random noise, conventional machine-learning segmentation loses much of the fine corneal nerve structure, so that the diameter, bifurcation angle, tortuosity and their variations of the corneal nerves in the segmented images show large errors and the segmentation accuracy is low. In addition, conventional machine-learning segmentation cannot well reflect the curvilinear structure of the corneal nerves, so its characterization of the corneal nerves is strongly limited.
Accordingly, there is a need to solve the above problems in the prior art.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the related art to a certain extent.
Therefore, a first object of the embodiments of the present invention is to provide a training method for a coherent topology model, which can effectively improve the segmentation accuracy and precision of the coherent topology model on corneal nerve images, effectively reduce the loss of fine corneal nerve structure, and better maintain the topological consistency of the corneal nerves.
A second object of the embodiments of the present application is to provide a method for identifying a coherent topology model.
A third object of an embodiment of the present application is to provide a training system for a coherent topology model.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the application comprises the following steps:
in a first aspect, an embodiment of the present application provides a training method for a coherent topology model, including:
obtaining a corneal nerve dataset;
performing labeling preprocessing on the corneal nerve data set to obtain a labeled image training set, and performing topology preprocessing on the corneal nerve data set to obtain a topology image training set;
performing segmentation feature extraction processing on the labeled image training set to obtain a labeling feature set, and performing coherent feature extraction processing on the topology image training set to obtain a coherent topology feature set;
according to the coherent topological feature set, carrying out linear fusion processing on the labeling feature set to obtain a feature training set;
and updating, according to the feature training set, the parameters of an initialized coherent topology model to obtain a trained coherent topology model.
In addition, according to the training method of the above embodiment of the present application, the following additional technical features may be further provided:
further, in an embodiment of the present application, the labeling preprocessing is performed on the corneal nerve data set to obtain a labeled image training set, including:
labeling all the corneal nerve picture data in the corneal nerve data set to obtain a first intermediate data set;
carrying out noise adding processing on the first intermediate data set to obtain a second intermediate data set;
performing random clipping treatment on the second intermediate data set to obtain a third intermediate data set;
and carrying out random enhancement processing on the third intermediate data set to obtain the annotation image training set.
Further, in an embodiment of the present application, the performing topology preprocessing on the corneal nerve data set to obtain a topology image training set includes:
graying the cornea nerve data set to obtain a gray data set;
and carrying out binarization processing on the gray data set to obtain the topological image training set.
Further, in an embodiment of the present application, the performing segmentation feature extraction processing on the labeled image training set to obtain a labeling feature set includes:
performing downsampling processing on the labeled image training set to obtain a downsampled training set;
performing feature fusion processing on the downsampled training set to obtain a fusion training set;
and performing upsampling processing on the fusion training set to obtain the labeling feature set.
Further, in an embodiment of the present application, the performing coherent feature extraction processing on the topological image training set to obtain a coherent topological feature set includes:
performing multidimensional filtering processing on the topological image training set to obtain a continuous coherent image set;
and performing normalization and statistics processing on the continuous coherent image set to obtain the coherent topological feature set.
Further, in an embodiment of the present application, the updating parameters of the initialized coherent topology model according to the feature training set includes:
inputting the feature training set into the initialized coherent topology model for recognition to obtain a feature recognition result output by the initialized coherent topology model;
performing pixel loss processing on the feature identification result to obtain a pixel loss function;
performing similar loss processing on the feature identification result to obtain a similar loss function;
performing coherent topology loss processing on the feature identification result to obtain a coherent topology loss function;
performing gradient integration processing on the pixel loss function, the similar loss function and the coherent topology loss function to obtain an overall loss function;
and according to the integral loss function, updating parameters of the initialized coherent topology model.
In a second aspect, an embodiment of the present application provides a method for identifying a coherent topology model, including:
obtaining a cornea nerve image to be detected;
inputting the cornea nerve image to be detected into the trained coherent topology model according to the first aspect to obtain a cornea nerve segmentation result.
In a third aspect, an embodiment of the present application provides a training system for a coherent topology model, including:
the acquisition module is used for acquiring a corneal nerve data set;
the preprocessing module is used for carrying out labeling preprocessing on the cornea nerve data set to obtain a labeled image training set, and carrying out topology preprocessing on the cornea nerve data set to obtain a topology image training set;
the feature extraction module is used for carrying out segmentation feature extraction processing on the marked image training set to obtain a marked feature set, and carrying out coherent feature extraction processing on the topological image training set to obtain a coherent topological feature set;
the fusion module is used for carrying out linear fusion processing on the labeling feature set according to the coherent topology feature set to obtain a feature training set;
and the updating module is used for updating parameters of the initialized coherent topology model according to the characteristic training set to obtain a trained coherent topology model.
In a fourth aspect, embodiments of the present application further provide a computer device, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
In a fifth aspect, embodiments of the present application further provide a computer readable storage medium having stored therein a processor executable program for implementing the above-described method when executed by the processor.
The advantages and benefits of the present application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present application.
The embodiments of the present application disclose a training method, an identification method, a system, a device and a medium for a coherent topology model. The training method acquires a corneal nerve data set; performs labeling preprocessing on the corneal nerve data set to obtain a labeled image training set, and topology preprocessing to obtain a topology image training set; performs segmentation feature extraction on the labeled image training set to obtain a labeling feature set, and coherent feature extraction on the topology image training set to obtain a coherent topology feature set; linearly fuses the labeling feature set according to the coherent topology feature set to obtain a feature training set; and updates the parameters of an initialized coherent topology model according to the feature training set to obtain a trained coherent topology model. Because the coherent topology feature set and the labeling feature set are fed into the initialized coherent topology model together for training and updating, the trained coherent topology model can effectively capture multi-scale topological features of the corneal nerves. This effectively improves the accuracy and precision of corneal nerve image segmentation, effectively reduces the loss of fine corneal nerve structure, and better preserves the topological consistency of the corneal nerves, so that their curvilinear structure is well reflected.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description refers to the accompanying drawings. It should be understood that the drawings below show only some of the embodiments of the present application, and that those skilled in the art may obtain other drawings from them without inventive labor.
Fig. 1 is a schematic flow chart of a training method of a coherent topology model according to an embodiment of the present application;
fig. 2 is a network frame diagram of a neural network corresponding to a segmentation feature extraction process according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a training system for a coherent topology model according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
At present, corneal nerves are usually segmented with machine-learning-based methods. However, because corneal nerve images have low contrast and contain a large amount of random noise, conventional machine-learning segmentation loses much of the fine corneal nerve structure, so that the diameter, bifurcation angle, tortuosity and their variations of the corneal nerves in the segmented images show large errors and the segmentation accuracy is low. In addition, conventional machine-learning segmentation cannot well reflect the curvilinear structure of the corneal nerves, so its characterization of the corneal nerves is strongly limited.
Therefore, the embodiment of the invention provides a training method of a coherent topological model, which can effectively improve the segmentation accuracy and segmentation precision of the coherent topological model on the cornea neural image, effectively reduce the loss of the cornea neural detail structure and better maintain the topological consistency of the cornea nerves.
Referring to fig. 1, in an embodiment of the present application, a training method for a coherent topology model includes:
step 110, obtaining a corneal nerve data set;
in this step, the corneal nerve data set comprises several corneal nerve images. These may be acquired with a corneal confocal microscope (corneal confocal microscopy, CCM), or a public corneal basal epithelial image set may be obtained from the network and used as the corneal nerve data set; many specific acquisition methods exist and are not detailed here.
Step 120, labeling preprocessing is performed on the corneal nerve data set to obtain a labeled image training set, and topology preprocessing is performed on the corneal nerve data set to obtain a topology image training set;
the step 120 of labeling pretreatment on the corneal nerve data set to obtain a labeled image training set includes:
step 121, labeling all the corneal nerve image data in the corneal nerve data set to obtain a first intermediate data set;
step 122, performing noise adding processing on the first intermediate data set to obtain a second intermediate data set;
step 123, performing random cropping processing on the second intermediate data set to obtain a third intermediate data set;
and 124, performing random enhancement processing on the third intermediate data set to obtain the labeled image training set.
In the embodiment of the present application, all corneal nerve picture data in the corneal nerve data set may be labeled manually, by machine, or by a combination of such methods, and all labeled corneal nerve picture data (i.e., corneal nerve images) are then taken as the first intermediate data set. In practical applications, the corneal nerve images used for detection generally have low resolution and contain a large amount of noise, so random noise may be added to each corneal nerve image of the first intermediate data set, the added noise specifically being at least one of Gaussian noise, salt-and-pepper noise or Poisson noise, to obtain the second intermediate data set. For the second intermediate data set, the random cropping processing may apply random rotation and random cropping to each corneal nerve image, and all cropped corneal nerve images are then taken as the third intermediate data set. For the third intermediate data set, the random enhancement processing may filter each corneal nerve image, specifically with any one of median, mean, Gaussian, bilateral or Laplacian filtering, which will not be detailed here.
It can further be understood that, during label annotation, the embodiment of the present application may also dilate all corneal nerve picture data with a 3×3 operator, which makes training of the coherent topology model more stable; this is not described further here.
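The labeling-preprocessing chain described above (noise addition, random cropping, enhancement would follow) can be sketched as below. The noise levels, crop size and NumPy implementation are illustrative assumptions, not part of the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=10.0):
    """Add zero-mean Gaussian noise, clipped back to the 8-bit range."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img, amount=0.01):
    """Flip a random fraction of pixels to pure black or pure white."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0
    out[mask > 1 - amount / 2] = 255
    return out

def random_crop(img, size):
    """Cut a random size x size window out of the image."""
    h, w = img.shape
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return img[top:top + size, left:left + size]

# Stand-in for one labeled corneal nerve image from the first intermediate set.
image = rng.integers(0, 256, (384, 384), dtype=np.uint8)
augmented = random_crop(add_salt_and_pepper(add_gaussian_noise(image)), 256)
```

The median/mean/Gaussian filtering step of the random enhancement processing would be applied to `augmented` in the same style.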
Step 120, performing topology preprocessing on the corneal nerve dataset to obtain a topological image training set, including:
126, carrying out graying treatment on the cornea nerve data set to obtain a gray data set;
and 127, performing binarization processing on the gray data set to obtain the topological image training set.
In this embodiment, consider a corneal nerve image in the corneal nerve data set with a resolution of 384 pixels × 384 pixels. The image may first be grayed, converting it into a grayscale image, and a 384×384 array is constructed from the grayscale values of all its pixels. The array is then thresholded with a preset binarization threshold to obtain a binarized array, which is taken as one item of topology image training data in the topology image training set. It can be understood that the specific binarization threshold may be set according to the actual situation and is not detailed here. The remaining corneal nerve images are processed in the same way, and all resulting topology image training data are then combined into the topology image training set.
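A minimal sketch of this graying-plus-thresholding step, assuming standard luminance weights and an arbitrary threshold of 128 (the patent leaves the threshold to be set in practice):

```python
import numpy as np

def to_gray(rgb):
    """Standard luminance-weighted grayscale conversion."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return gray.astype(np.uint8)

def binarize(gray, threshold=128):
    """Threshold the grayscale array into a 0/1 topology array."""
    return (gray >= threshold).astype(np.uint8)

# Stand-in for one 384x384 corneal nerve image.
rgb = np.random.default_rng(1).integers(0, 256, (384, 384, 3), dtype=np.uint8)
topo = binarize(to_gray(rgb))
```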
Step 130, performing segmentation feature extraction processing on the labeled image training set to obtain a labeling feature set, and performing coherent feature extraction processing on the topology image training set to obtain a coherent topology feature set;
step 130, performing segmentation feature extraction processing on the labeled image training set to obtain a labeled feature set, including:
step 131, performing downsampling processing on the labeled image training set to obtain a downsampled training set;
step 132, performing feature fusion processing on the downsampled training set to obtain a fusion training set;
and 133, performing up-sampling processing on the fusion training set to obtain the labeling feature set.
Referring to fig. 2, in the embodiment of the present application, the input corneal nerve data set may be fed to a neural network for feature extraction. The neural network may specifically include a pooling layer, a downsampling layer, a feature fusion layer, an upsampling layer and a convolution layer, extracting features of the corneal nerve image at different scales, which reduces information loss. This embodiment uses a 4-layer encoding and decoding network. The encoder uses convolutional layers, each comprising two consecutive 3×3 convolutions and a Sigmoid activation function. Each encoder layer downsamples once and doubles the channel number; after the 4 downsampling steps the output channel numbers are 64, 128, 256 and 512, respectively. The feature fusion layer applies multi-layer perceptron (multilayer perceptron, MLP), pooling (Pooling) and channel attention (Attention) operations to the downsampled features, fuses the features output by these three operations, and passes the fused features to the decoding layer. The decoding layer mirrors the structure of the encoding layer, upsamples 4 times, and outputs the features the network extracts from the corneal nerve image.
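The channel and resolution schedule of the encoder described above can be tracked with a small bookkeeping sketch; the function below is illustrative only and reproduces the stated 64/128/256/512 channel progression under the assumption that each downsampling halves the spatial resolution.

```python
def encoder_shapes(h, w, base_ch=64, levels=4):
    """List (channels, height, width) at each encoder level: channels double
    and the spatial resolution halves with every downsampling step."""
    shapes = []
    ch = base_ch
    for _ in range(levels):
        shapes.append((ch, h, w))
        h, w, ch = h // 2, w // 2, ch * 2
    return shapes

stages = encoder_shapes(384, 384)
```

For a 384×384 input this yields levels at 384, 192, 96 and 48 pixels, with the channel counts named in the text; the symmetric decoder would walk the same list in reverse.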
Step 130, performing coherent feature extraction processing on the topological image training set to obtain a coherent topological feature set, including:
step 134, performing multidimensional filtering processing on the topological image training set to obtain a continuous coherent image set;
and 135, performing normalization and statistics on the continuous coherent image set to obtain the coherent topology feature set.
In the embodiment of the present application, the coherent topological features in the coherent topological feature set may be obtained through persistent homology on a topological complex; this embodiment takes the cubical (cubic) complex, one of several topological complexes, as an example. Specifically, for a given item of topology image training data in the topology image training set, multidimensional filtration, in particular radial filtration and height filtration, may be applied to its algebraic structure in group form to give the cubical persistence diagram, i.e., the continuous coherent image, corresponding to the cubical complex. The cubical persistence diagram is then normalized, and the coherent topological features are given by persistence statistics computed from the birth and death point pairs of the normalized diagram, where the persistence statistics specifically include the Bottleneck distance, the Wasserstein distance, Landscape statistics, Betti curves and the like. The remaining topology image training data are processed similarly and are not described again here.
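As a toy illustration of the filtration idea behind these statistics, the Betti-0 value (number of connected components) can be tracked while a threshold sweeps over a small grayscale array. The 4-connectivity BFS below is a deliberate simplification of the cubical complex construction; a real system would compute the full persistence diagram with a TDA library.

```python
import numpy as np
from collections import deque

def betti0(binary):
    """Count connected components (Betti-0) of a 0/1 image, 4-connectivity."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                count += 1
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

gray = np.array([[9, 1, 8],
                 [1, 1, 1],
                 [7, 1, 9]])
# Components are "born" one by one as the threshold descends through 9, 8, 7.
curve = [betti0(gray >= t) for t in (9, 8, 7)]
```

The resulting sequence of component counts is exactly a (coarse) Betti curve of the superlevel-set filtration.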
Step 140, performing linear fusion processing on the labeling feature set according to the coherent topology feature set to obtain a feature training set;
in the embodiment of the present application, after the coherent topology feature set and the labeling feature set are obtained, each coherent topological feature may be linearly fused with its corresponding labeling feature to obtain a segmentation likelihood map with the same pixel size as the input image, and all segmentation likelihood maps are then taken as the feature training set. Specifically, the linear fusion may be implemented by a fully connected network layer; many specific implementations exist and are not detailed here.
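A minimal sketch of the linear fusion, under the assumption that the topological features are a small vector of global persistence statistics blended into the per-pixel segmentation features; the fixed weights and the 0.8/0.2 blend are illustrative stand-ins for the learned fully connected layer.

```python
import numpy as np

rng = np.random.default_rng(2)
seg_feat = rng.random((384, 384))   # per-pixel segmentation (labeling) features
topo_feat = rng.random(4)           # global coherent topology statistics

# Stand-ins for learned fusion weights: project the topology statistics to a
# scalar, then blend that scalar linearly with the per-pixel feature map.
w_topo = np.full(4, 0.25)
alpha, beta = 0.8, 0.2
likelihood = alpha * seg_feat + beta * float(w_topo @ topo_feat)
```

The result has the same pixel size as the input image, as the text requires of the segmentation likelihood map.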
And 150, updating parameters of the initialized coherent topology model according to the characteristic training set to obtain a trained coherent topology model.
Step 150, performing parameter update on the initialized coherent topology model according to the feature training set, including:
step 151, inputting the feature training set to the initialized coherent topology model for recognition, and obtaining a feature recognition result output by the initialized coherent topology model;
step 152, performing pixel loss processing on the feature recognition result to obtain a pixel loss function;
step 153, performing similar loss processing on the feature identification result to obtain a similar loss function;
154, performing coherent topology loss processing on the feature recognition result to obtain a coherent topology loss function;
step 155, performing gradient integration processing on the pixel loss function, the similar loss function and the coherent topology loss function to obtain an overall loss function;
and step 156, updating parameters of the initialized coherent topology model according to the integral loss function.
In the embodiment of the present application, the pixel loss function may be obtained by computing, with a least mean square error (LMSE) algorithm, the square of the pixel difference between the original corneal nerve image and the feature image corresponding to the coherent topological features; many specific implementations exist and are not detailed here. The similar loss function represents the class similarity of the pixels, and an equivalent formula of the similar loss function may be expressed as:

$$L_{Dice} = 1 - \sum_{m=1}^{M} \lambda_m \frac{2\sum_{n=1}^{N} K_{m,n}\,\zeta_{m,n}}{\sum_{n=1}^{N} K_{m,n} + \sum_{n=1}^{N} \zeta_{m,n}}$$

where $L_{Dice}$ is the similar loss function, $N$ is the total number of pixels, $M$ is the total number of categories, $\lambda_m$ is the weight of the $m$-th category, $K_{m,n}$ is the predicted probability of the $m$-th category at the $n$-th pixel, $\zeta_{m,n}$ is the true label of the $m$-th category at the $n$-th pixel, and both $K_{m,n}$ and $\zeta_{m,n}$ lie in the range $[0,1]$.
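Since the patent's exact formula is not reproduced in the text, the following NumPy sketch of a weighted Dice-style loss is built only from the symbol definitions above (per-class weights assumed to sum to 1) and should be read as an assumption, not the patented formula.

```python
import numpy as np

def dice_loss(pred, target, weights, eps=1e-7):
    """Weighted multi-class Dice-style loss.
    pred, target: (M, N) per-class, per-pixel values in [0, 1].
    weights: (M,) per-class weights lambda_m (assumed to sum to 1)."""
    inter = (pred * target).sum(axis=1)
    denom = pred.sum(axis=1) + target.sum(axis=1)
    per_class = 1.0 - 2.0 * inter / (denom + eps)
    return float((weights * per_class).sum())

pred = np.array([[0.9, 0.1],
                 [0.1, 0.9]])
target = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
loss = dice_loss(pred, target, np.array([0.5, 0.5]))
```

A perfect prediction drives the loss to (near) zero, while disagreement between `pred` and `target` raises it toward 1.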
It can be understood that the coherent topology loss function may be determined by computing the continuous coherent map (persistence diagram) of the segmentation likelihood map and that of the label corresponding to the original corneal nerve image, and then measuring the topological difference between the two continuous coherent maps. Specifically, an equivalent formula of the coherent topology loss function may be expressed as:

$$L_{Topo} = \min_{\gamma} \sum_{p \in Dgm(f)} \lVert p - \gamma(p) \rVert^{2}$$

where $L_{Topo}$ is the coherent topology loss function, $Dgm(f)$ is the persistence diagram, i.e., the set of birth-death point pairs, of the segmentation likelihood map $f$, and $\gamma$ ranges over the matchings between the birth and death point sets of the two persistence diagrams produced by persistent homology.
It can further be understood that after the coherent topology loss function, the similar loss function and the pixel loss function are obtained, each loss function may be given a coefficient, and the coefficient-weighted loss functions are summed to obtain the overall loss function; the specific coefficients may be set according to the actual situation and are not detailed here. After the overall loss function is obtained, the gradient descent direction may be computed by minimizing the loss function, and back-propagation training is performed, thereby completing the parameter update of the initialized coherent topology model.
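The coefficient-weighted accumulation and one gradient-descent update can be sketched as follows; the coefficients 1.0, 1.0, 0.1 and the learning rate are arbitrary illustrative values, since the patent leaves them to be set in practice.

```python
def overall_loss(l_pixel, l_dice, l_topo, a=1.0, b=1.0, c=0.1):
    """Accumulate the three losses, each scaled by its coefficient."""
    return a * l_pixel + b * l_dice + c * l_topo

def sgd_step(params, grads, lr=0.01):
    """One back-propagation update: move each parameter against its gradient."""
    return [p - lr * g for p, g in zip(params, grads)]

total = overall_loss(0.2, 0.1, 0.5)
updated = sgd_step([1.0, -2.0], [0.5, -0.5])
```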
It should be noted that during training of the coherent topology model, the original corneal nerve data may be divided according to a preset ratio, specifically 7:3, into a training set and a validation set, and the model is then updated with K-fold cross-validation to achieve a better training effect.
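The 7:3 split and K-fold partitioning above can be sketched with the standard library; the shuffle seed and K=5 are illustrative choices.

```python
import random

def split_train_val(items, ratio=0.7, seed=0):
    """Shuffle and partition into a 7:3 train/validation split."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]

def k_fold(items, k=5):
    """Yield (train, val) lists for K-fold cross-validation: each fold
    serves as the validation set exactly once."""
    folds = [list(items)[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val

data = list(range(100))
train, val = split_train_val(data)
```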
In addition, the embodiment of the application provides a method for identifying a coherent topology model, which comprises the following steps:
step 210, obtaining a cornea nerve image to be detected;
step 220, inputting the corneal nerve image to be detected into the trained coherent topology model to obtain a corneal nerve segmentation result.
It can be understood that a corneal nerve image to be identified may be input into the trained coherent topology model, and the model's segmentation result for that image obtained. The model captures multi-scale topological features, which improves its ability to segment corneal nerve images, and even when the corneal nerve image is heavily noisy the coherent topology model retains high segmentation precision and topological consistency.
Referring to fig. 3, the embodiment of the application further provides a training system for a coherent topology model, including:
an acquisition module 101 for acquiring a corneal nerve dataset;
the preprocessing module 102 is used for performing labeling preprocessing on the corneal nerve data set to obtain a labeled image training set, and performing topology preprocessing on the corneal nerve data set to obtain a topology image training set;
the feature extraction module 103 is configured to perform segmentation feature extraction processing on the labeled image training set to obtain a labeled feature set, and perform coherent feature extraction processing on the topological image training set to obtain a coherent topological feature set;
the fusion module 104 is configured to perform linear fusion processing on the labeling feature set according to the coherent topology feature set to obtain a feature training set;
and the updating module 105 is configured to update parameters of the initialized coherent topology model according to the feature training set, so as to obtain a trained coherent topology model.
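The five modules above can be wired together as a simple pipeline. The class and method names below are illustrative assumptions, not the embodiment's actual implementation:

```python
class CoherentTopologyTrainingSystem:
    # Each module is injected as a callable, mirroring modules 101-105.
    def __init__(self, acquire, preprocess, extract, fuse, update):
        self.acquire = acquire        # acquisition module 101
        self.preprocess = preprocess  # preprocessing module 102
        self.extract = extract        # feature extraction module 103
        self.fuse = fuse              # fusion module 104
        self.update = update          # updating module 105

    def train(self):
        dataset = self.acquire()
        labeled_set, topology_set = self.preprocess(dataset)
        labeled_feats, topo_feats = self.extract(labeled_set, topology_set)
        feature_train_set = self.fuse(labeled_feats, topo_feats)
        return self.update(feature_train_set)

# Toy wiring to show the data flow through the five modules.
system = CoherentTopologyTrainingSystem(
    acquire=lambda: [1, 2, 3],
    preprocess=lambda d: (d, [x * 10 for x in d]),
    extract=lambda lab, topo: (lab, topo),
    fuse=lambda lf, tf: [a + b for a, b in zip(lf, tf)],
    update=lambda feats: {"trained_on": feats},
)
result = system.train()
```

Injecting the modules as callables keeps the pipeline's data flow (dataset, two training sets, two feature sets, fused feature training set, updated model) explicit and testable in isolation.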
It can be understood that the content of the above method embodiment is applicable to the present system embodiment; the functions specifically implemented and the beneficial effects achieved by the system embodiment are the same as those of the above method embodiment.
Referring to fig. 4, an embodiment of the present application further provides a computer device, including:
at least one processor 201;
at least one memory 202 for storing at least one program;
the at least one program, when executed by the at least one processor 201, causes the at least one processor 201 to implement the method embodiments described above.
Similarly, it can be understood that the content in the above method embodiment is applicable to the embodiment of the present apparatus, and the functions specifically implemented by the embodiment of the present apparatus are the same as those of the embodiment of the foregoing method, and the achieved beneficial effects are the same as those achieved by the embodiment of the foregoing method.
The present embodiment also provides a computer-readable storage medium in which a program executable by the processor 201 is stored; when executed by the processor 201, the program implements the method embodiments described above.
Similarly, the content of the above method embodiment is applicable to the present computer-readable storage medium embodiment; the functions specifically implemented and the beneficial effects achieved are the same as those of the above method embodiment.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of this application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the present application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or one or more of the functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Thus, those of ordinary skill in the art will be able to implement the present application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application (in essence, the part contributing to the prior art, or a part of the technical solution) may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the foregoing description of the present specification, descriptions of the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like, are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present application have been described in detail, the present application is not limited to the embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (10)

1. A method for training a coherent topology model, comprising:
obtaining a corneal nerve dataset;
performing labeling preprocessing on the corneal nerve dataset to obtain a labeled image training set, and performing topology preprocessing on the corneal nerve dataset to obtain a topology image training set;
performing segmentation feature extraction processing on the labeled image training set to obtain a labeled feature set, and performing coherent feature extraction processing on the topology image training set to obtain a coherent topology feature set;
performing linear fusion processing on the labeled feature set according to the coherent topology feature set to obtain a feature training set;
and updating parameters of the initialized coherent topology model according to the feature training set to obtain a trained coherent topology model.
2. The method for training a coherent topology model according to claim 1, wherein said performing labeling preprocessing on the corneal nerve dataset to obtain a labeled image training set comprises:
labeling all corneal nerve image data in the corneal nerve dataset to obtain a first intermediate dataset;
performing noise addition processing on the first intermediate dataset to obtain a second intermediate dataset;
performing random cropping processing on the second intermediate dataset to obtain a third intermediate dataset;
and performing random enhancement processing on the third intermediate dataset to obtain the labeled image training set.
3. The method for training a coherent topology model according to claim 1, wherein said performing topology preprocessing on the corneal nerve dataset to obtain a topology image training set comprises:
performing graying processing on the corneal nerve dataset to obtain a grayscale dataset;
and performing binarization processing on the grayscale dataset to obtain the topology image training set.
4. The method for training a coherent topology model according to claim 1, wherein the performing segmentation feature extraction processing on the labeled image training set to obtain a labeled feature set comprises:
performing downsampling processing on the labeled image training set to obtain a downsampled training set;
performing feature fusion processing on the downsampled training set to obtain a fused training set;
and performing upsampling processing on the fused training set to obtain the labeled feature set.
5. The method for training a coherent topology model according to claim 1, wherein the performing coherent feature extraction processing on the training set of topology images to obtain a coherent topology feature set includes:
performing multidimensional filtering processing on the topological image training set to obtain a continuous coherent image set;
and performing normalization and statistics processing on the continuous coherent image set to obtain the coherent topological feature set.
6. The method for training the coherent topology model according to claim 1, wherein said performing parameter update on the initialized coherent topology model according to the feature training set comprises:
inputting the feature training set into the initialized coherent topology model for recognition to obtain a feature recognition result output by the initialized coherent topology model;
performing pixel loss processing on the feature recognition result to obtain a pixel loss function;
performing similarity loss processing on the feature recognition result to obtain a similarity loss function;
performing coherent topology loss processing on the feature recognition result to obtain a coherent topology loss function;
performing gradient integration processing on the pixel loss function, the similarity loss function and the coherent topology loss function to obtain an overall loss function;
and updating parameters of the initialized coherent topology model according to the overall loss function.
7. A method for identifying a coherent topology model, comprising:
obtaining a corneal nerve image to be detected;
and inputting the corneal nerve image to be detected into a coherent topology model trained by the method according to any one of claims 1-6 to obtain a corneal nerve segmentation result.
8. A training system for a coherent topology model, comprising:
the acquisition module is used for acquiring a corneal nerve dataset;
the preprocessing module is used for performing labeling preprocessing on the corneal nerve dataset to obtain a labeled image training set, and performing topology preprocessing on the corneal nerve dataset to obtain a topology image training set;
the feature extraction module is used for performing segmentation feature extraction processing on the labeled image training set to obtain a labeled feature set, and performing coherent feature extraction processing on the topology image training set to obtain a coherent topology feature set;
the fusion module is used for performing linear fusion processing on the labeled feature set according to the coherent topology feature set to obtain a feature training set;
and the updating module is used for updating parameters of the initialized coherent topology model according to the feature training set to obtain a trained coherent topology model.
9. A computer device, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any of claims 1-7.
10. A computer-readable storage medium in which a processor-executable program is stored, characterized in that, when executed by a processor, the processor-executable program implements the method according to any one of claims 1-7.
CN202311749500.0A 2023-12-18 2023-12-18 Training method, identification method, system, equipment and medium for coherent topology model Pending CN117853424A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311749500.0A CN117853424A (en) 2023-12-18 2023-12-18 Training method, identification method, system, equipment and medium for coherent topology model


Publications (1)

Publication Number Publication Date
CN117853424A true CN117853424A (en) 2024-04-09

Family

ID=90527977


Country Status (1)

Country Link
CN (1) CN117853424A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination