CN110176007A - Crystalline lens segmentation method, device and storage medium - Google Patents

Crystalline lens segmentation method, device and storage medium

Info

Publication number
CN110176007A
CN110176007A (application CN201910412464.6A)
Authority
CN
China
Prior art keywords
crystalline lens
shape template
initial
lens structure
lens
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910412464.6A
Other languages
Chinese (zh)
Inventor
童云飞
刘江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Cixi Institute of Biomedical Engineering CIBE of CAS
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Cixi Institute of Biomedical Engineering CIBE of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Cixi Institute of Biomedical Engineering CIBE of CAS filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201910412464.6A priority Critical patent/CN110176007A/en
Publication of CN110176007A publication Critical patent/CN110176007A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present invention provides a lens segmentation method, device and storage medium. The method comprises: extracting a lens image region from an original image; obtaining an initial lens structure in the lens image region by means of a preset neural network model; and performing edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training on lens samples. Automatic segmentation of the lens structure can be achieved through the preset neural network model and the shape template, thereby improving the accuracy of lens structure segmentation while reducing labor cost.

Description

Crystalline lens segmentation method, device and storage medium
Technical field
Embodiments of the present invention relate to the field of image analysis, and in particular to a lens segmentation method, device and storage medium.
Background technique
As a common blinding eye disease, a cataract is a clouding of the crystalline lens that affects retinal imaging and impairs the patient's vision. Anterior segment optical coherence tomography (AS-OCT) can be used to assist in diagnosing a variety of ophthalmic diseases, including cataracts. Specifically, AS-OCT is a non-invasive, radiation-free diagnostic modality that uses the density of different lens structures to measure the severity of ophthalmic diseases such as cataracts.
At present, lens structure segmentation based on AS-OCT images is mostly performed manually. However, manual segmentation of the lens structure suffers from low accuracy and high labor cost.
Summary of the invention
Embodiments of the present invention provide a lens segmentation method, device and storage medium, which improve the accuracy of lens structure segmentation while reducing labor cost by segmenting the lens structure automatically.
In a first aspect, an embodiment of the present invention provides a lens segmentation method, comprising:

extracting a lens image region from an original image;

obtaining an initial lens structure in the lens image region by means of a preset neural network model;

performing edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training on lens samples.
In a kind of possible embodiment, the shape template includes first shape template and the second shape template, wherein The first shape template is used to carry out the crystalline lens pronucleus of the initial lens structure edge-smoothing processing, and described second Shape template is used to carry out edge-smoothing processing to core after the crystalline lens of the initial lens structure.
In a possible embodiment, there are multiple shape templates, and performing edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure comprises: computing the similarity between each of the multiple shape templates and the initial lens structure; and selecting the shape template with the greatest similarity to perform edge-smoothing processing on the initial lens structure, thereby obtaining the segmented lens structure.
In a possible embodiment, computing the similarity between the multiple shape templates and the initial lens structure comprises:

for each shape template, obtaining the similarity between the shape template and the initial lens structure through the following steps:

computing the product of the shape template and the normalization parameter of the initial lens structure as a first intermediate value, where the normalization parameter is the minimum of all the distances obtained as the symmetry axis of the initial lens structure rotates;

computing the sum of the first intermediate value and a preset offset as a second intermediate value;

performing boundary encoding on the initial lens structure according to formula (1) to obtain a target value;

determining the similarity according to the target value and the second intermediate value;

f(c, θ) = ||c - P_θ||    formula (1)

where c denotes the center point coordinate of the initial lens structure; P_θ denotes the coordinate of the intersection of the symmetry axis of the initial lens structure with the boundary of the initial lens structure, θ denotes the angle of the symmetry axis relative to a reference line, the symmetry axis starting from the reference line and rotating by a preset angle step; || · || denotes the norm; and {f(c, θ)} denotes the target value.
In a possible embodiment, the shape template is obtained through training in the following manner:

obtaining the center point coordinate of the lens sample according to the boundary point coordinates of the lens sample;

obtaining the coordinates of the intersections of the symmetry axis of the lens sample with the boundary of the lens sample, wherein the symmetry axis rotates from a reference line by a preset angle step;

obtaining the distance from the center point of the lens sample to each intersection according to the center point coordinate and the intersection coordinates;

normalizing the distances using the normalization parameter of the lens sample to obtain the nuclear boundary of the lens sample, the normalization parameter being the minimum of the corresponding distances obtained as the symmetry axis rotates;

extracting the middle region of the nuclear boundary;

clustering the middle regions corresponding to M lens samples using a preset clustering algorithm to obtain N shape templates, wherein M and N are positive integers and M is greater than N.
In a possible embodiment, the preset clustering algorithm includes any one of the following clustering algorithms: the K-means algorithm, the fuzzy C-means (FCM) clustering algorithm.
In a possible embodiment, extracting the lens image region from the original image comprises: extracting the lens image region from the original image using the Canny edge detection technique.
In a second aspect, an embodiment of the present invention provides a lens segmentation device, comprising:

an extraction module, configured to extract a lens image region from an original image;

a processing module, configured to obtain an initial lens structure in the lens image region by means of a preset neural network model, and to perform edge-smoothing processing on the initial lens structure using a preset algorithm to obtain the segmented lens structure.
In a third aspect, an embodiment of the present invention provides a lens segmentation device, comprising a memory, a processor, and a computer program stored on the memory for execution by the processor; the processor executes the computer program to perform the following operations:

extracting a lens image region from an original image;

obtaining an initial lens structure in the lens image region by means of a preset neural network model;

performing edge-smoothing processing on the initial lens structure using a preset algorithm to obtain the segmented lens structure.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, including computer-readable instructions which, when read and executed by a processor, cause the processor to perform the following operations:

extracting a lens image region from an original image;

obtaining an initial lens structure in the lens image region by means of a preset neural network model;

performing edge-smoothing processing on the initial lens structure using a preset algorithm to obtain the segmented lens structure.
With the lens segmentation method, device and storage medium provided by the embodiments of the present invention, a lens image region is first extracted from the original image; then the initial lens structure in the lens image region is obtained by means of a preset neural network model, and edge-smoothing processing is performed on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training on lens samples. Automatic segmentation of the lens structure can be achieved through the preset neural network model and the shape template, thereby improving the accuracy of lens structure segmentation while reducing labor cost.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a lens segmentation method provided by an embodiment of the present invention;

Fig. 2 is an application example of the lens segmentation method provided by an embodiment of the present invention;

Fig. 3 shows a lens nuclear structure;

Fig. 4 is an example of the shape template before normalization provided by an embodiment of the present invention;

Fig. 5(a) is an example of the first shape template provided by an embodiment of the present invention;

Fig. 5(b) is an example of the second shape template provided by an embodiment of the present invention;

Fig. 6 is a schematic structural diagram of a lens segmentation device provided by an embodiment of the present invention;

Fig. 7 is a schematic structural diagram of a lens segmentation device provided by another embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second" and the like in the specification, claims and above-mentioned drawings of the embodiments of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product or device.
At present, cataract grading internationally uses the LOCS II lens opacity classification standard. This classification standard involves substantial human intervention, and doctors with different levels of experience may grade the different structures somewhat differently. Therefore, accurately segmenting the lens structure and automatically computing its opacity is particularly important.
Since manually annotating a large number of medical images is a tedious and error-prone task, embodiments of the present invention provide a lens segmentation method, device and storage medium that achieve automatic segmentation of the lens structure through a preset neural network model and a shape template, thereby improving the accuracy of lens structure segmentation while reducing labor cost. In particular, the shape template can refine the rough boundary of the initial lens structure produced by the preset neural network, so as to obtain a more accurate lens structure.
Fig. 1 is a flowchart of the lens segmentation method provided by an embodiment of the present invention. This embodiment provides a lens segmentation method that can be executed by a lens segmentation device, which can be implemented in software and/or hardware. Illustratively, the lens segmentation device may include, but is not limited to, electronic equipment such as a computer or a server, where the server may be a single server, a server cluster composed of several servers, or a cloud computing service center.
As shown in Fig. 1, the lens segmentation method provided by this embodiment comprises the following steps.

S101: extract a lens image region from an original image.

The original image may be an actually acquired target image that contains not only the lens image region but also other parts of the eyeball, such as the cornea and the vitreous body. The "lens image region" here is the region in which the lens structure to be segmented actually lies. It should be added that the lens image region is smaller than the original image.
Optionally, S101 may include: extracting the lens image region from the original image using the Canny edge detection technique. Extracting the lens image region by Canny edge detection reduces redundant interference information in the original image.
Canny edge detection is a multi-stage edge detection algorithm whose goal is to find an optimal edge detector. Specifically, optimal edge detection means:

(1) Optimal detection: the algorithm should identify as many of the actual edges in the image as possible, and the probabilities of missing true edges and of falsely detecting non-edges should both be as small as possible;

(2) Optimal localization: the detected edge points should be as close as possible to the actual edge points, or the deviation of the detected edges from the true edges of the object caused by noise should be minimal;

(3) One-to-one correspondence between detected points and edge points: the edge points detected by the operator should correspond one-to-one to the actual edge points.
It should be noted that Canny edge detection is only an example used to illustrate how the lens image region can be extracted from the original image; embodiments of the present invention are not limited to it, and other techniques may also be used to extract the lens image region from the original image. A sketch of this step is given below.
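The sketch below illustrates one possible implementation of this step with OpenCV. The Canny thresholds, the Gaussian blur, the bounding-box heuristic and the function name extract_lens_roi are assumptions made for illustration and are not specified in the patent.

```python
import cv2
import numpy as np

def extract_lens_roi(original_image: np.ndarray) -> np.ndarray:
    """Extract a lens image region from an AS-OCT frame using Canny edges.

    A minimal sketch: the patent only states that Canny edge detection is one
    option for this step; the thresholds and the bounding-box heuristic below
    are illustrative assumptions, not values from the patent.
    """
    gray = original_image if original_image.ndim == 2 else cv2.cvtColor(
        original_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress speckle noise
    edges = cv2.Canny(blurred, 50, 150)                # multi-stage edge detector
    # Keep the largest connected edge component and crop its bounding box,
    # assuming the lens produces the dominant edge structure in the frame.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return gray                                    # fall back to the full frame
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return gray[y:y + h, x:x + w]
```

In practice the crop could additionally be resized to the fixed input size expected by the preset neural network model (for example 1024x1024, as mentioned further below).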
S102: obtain an initial lens structure in the lens image region by means of a preset neural network model.

The preset neural network model may be a pre-trained U-shaped fully convolutional neural network model. The U-shaped fully convolutional network gives the preset neural network model good stability and scalability, and the deep-learning model has a strong ability to learn features.

Illustratively, S102 may specifically be: input the lens image region into the preset neural network model, and take the output of the preset neural network model as the initial lens structure.

This embodiment mainly describes the application of the preset neural network model, that is, how the preset neural network model is used; for its training process, reference may be made to the related descriptions, which are not repeated here.
S103: perform edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure.

The shape template is obtained by training on lens samples.

It can be understood that the boundary of the initial lens structure produced by the preset neural network model is irregular, especially for the segmentation of the lens nucleus. Therefore, the embodiment of the present invention uses the shape template to refine the rough boundary of the lens nucleus region and obtain the final segmentation result.
With reference to Fig. 2, the segmentation flow of an original image processed through the above S101 to S103 is shown, where 1 denotes the AS-OCT image; 2 denotes the region of interest (ROI), i.e. the lens, which in this example is divided into three regions: the cortex region, the nucleus region and the pupil region; 3 denotes the U-shaped fully convolutional neural network used to predict the segmented regions; and 4 denotes the shape template used to refine the rough boundary of the lens nucleus region. As shown in Fig. 2, part of the nucleus region is misclassified as the cortex region; to solve this problem, a shape template is designed to refine the coarse segmentation around the lens nucleus.
In the following, the preset neural network model is described with reference to Fig. 2. The preset neural network model comprises an encoding part (left) and a decoding part (right).

In the decoding part, each deconvolution layer includes a concatenation whose input combines the global information output by the previous deconvolution layer with the information of the corresponding convolution layer in the encoding part. The deconvolution layer extracts information from the corresponding convolution layer and fuses it with the local features, so as to handle the information of the object to be segmented more effectively and avoid local interference.

The encoding part comprises six convolution layers, each convolution layer comprising two or three sub-convolution layers, and each sub-convolution layer uses an activation function (ReLU) and 2x2 max pooling (MaxPooling). In order to effectively restore the image and extract features, the decoding part also comprises six deconvolution layers; each deconvolution layer includes a concatenation, performs spatial upsampling from the corresponding feature layer, and is followed by two convolutions and one activation function (ReLU), its input coming from the global information of the previous layer and the information of the corresponding layer in the encoding part of the network. It should also be noted that a convolution layer may include a preset number of sub-convolution layers cascaded with one another, each sub-convolution layer using the ReLU activation function and max pooling.

The preset neural network model uses a six-level network structure. When the input lens image region is relatively large, a deeper network can extract global information more effectively and thus segment the lens structure accurately, which makes it more suitable than existing network structures.

For example, the size of the lens image region is 1024x1024; "Conv<3x3> with ReLU" denotes a 3x3 convolution kernel followed by the ReLU activation function; the loss function used is the cross-entropy loss (Cross Entropy Loss); and each side-output layer uses a <1x1> convolution kernel, so that the side-output layers can exploit features of different levels and further improve the segmentation result.

Predicting the pupil, cortex and nucleus regions of the lens with the U-shaped fully convolutional neural network model allows good training on small data sets and avoids over-fitting; moreover, the U-shaped fully convolutional neural network model can obtain segmented regions with clear boundaries through skip connections.
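For concreteness, the following PyTorch sketch shows a U-shaped fully convolutional network of the kind described above: Conv<3x3>+ReLU blocks with 2x2 max pooling in the encoder, and a decoder that upsamples and concatenates the corresponding encoder features through skip connections, ending in a 1x1 output convolution. It is a simplified illustration under assumptions: it uses four levels instead of the six mentioned above, omits the side-output branches, and the channel widths are placeholders rather than values from the patent.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions, each followed by ReLU (the "sub-convolution layers")."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class UShapedFCN(nn.Module):
    """Simplified U-shaped fully convolutional network (four levels, no side outputs)."""
    def __init__(self, in_ch=1, num_classes=4, widths=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.encoders.append(ConvBlock(prev, w))
            prev = w
        self.pool = nn.MaxPool2d(2)                         # 2x2 max pooling
        self.bottleneck = ConvBlock(widths[-1], widths[-1] * 2)
        self.upconvs, self.decoders = nn.ModuleList(), nn.ModuleList()
        prev = widths[-1] * 2
        for w in reversed(widths):
            self.upconvs.append(nn.ConvTranspose2d(prev, w, kernel_size=2, stride=2))
            self.decoders.append(ConvBlock(prev, w))        # input = upsampled + skip
            prev = w
        self.head = nn.Conv2d(prev, num_classes, kernel_size=1)  # 1x1 output conv

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.upconvs, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))            # skip connection by concatenation
        return self.head(x)                                 # per-pixel class scores

# Minimal usage with the cross-entropy loss mentioned above (labels: background,
# cortex, nucleus, pupil):
# model = UShapedFCN(in_ch=1, num_classes=4)
# logits = model(torch.randn(1, 1, 256, 256))               # (1, 4, 256, 256)
# loss = nn.CrossEntropyLoss()(logits, torch.zeros(1, 256, 256, dtype=torch.long))
```

The torch.cat concatenations in the decoder play the role of the skip connections mentioned above.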
In this embodiment, a lens image region is first extracted from the original image; then the initial lens structure in the lens image region is obtained by means of the preset neural network model, and edge-smoothing processing is performed on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training on lens samples. Automatic segmentation of the lens structure can be achieved through the preset neural network model and the shape template, which improves the accuracy of lens structure segmentation while reducing labor cost.
In the above embodiment, in a possible implementation, the shape template may include a first shape template and a second shape template, where the first shape template is used to perform edge-smoothing processing on the anterior lens nucleus of the initial lens structure, and the second shape template is used to perform edge-smoothing processing on the posterior lens nucleus of the initial lens structure.

It can be understood that there may be multiple shape templates. Optionally, S103, performing edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure, may include: computing the similarity between each of the multiple shape templates and the initial lens structure, and selecting the shape template with the greatest similarity to perform edge-smoothing processing on the initial lens structure, thereby obtaining the segmented lens structure. Illustratively, the similarities between the multiple first shape templates and the anterior lens nucleus of the initial lens structure are computed, and the first shape template with the greatest similarity is selected to perform edge-smoothing processing on the anterior lens nucleus; the similarities between the multiple second shape templates and the posterior lens nucleus of the initial lens structure are computed, and the second shape template with the greatest similarity is selected to perform edge-smoothing processing on the posterior lens nucleus, thereby obtaining the segmented lens structure.
Further, computing the similarity between the multiple shape templates and the initial lens structure may include:

for each shape template, obtaining the similarity between the shape template and the initial lens structure through the following steps:

computing the product of the shape template and the normalization parameter of the initial lens structure as a first intermediate value, where the normalization parameter is the minimum of all the distances obtained as the symmetry axis of the initial lens structure rotates;

computing the sum of the first intermediate value and a preset offset as a second intermediate value;

performing boundary encoding on the initial lens structure according to formula (1) to obtain a target value;

determining the similarity according to the target value and the second intermediate value;

f(c, θ) = ||c - P_θ||    formula (1)

where c denotes the center point coordinate of the initial lens structure; P_θ denotes the coordinate of the intersection of the symmetry axis of the initial lens structure with the boundary of the initial lens structure, θ denotes the angle of the symmetry axis relative to the reference line, the symmetry axis starting from the reference line and rotating by a preset angle step; || · || denotes the norm; and {f(c, θ)} denotes the target value.
For example, the number of shape templates is N (N being a positive integer), denoted {f(c_n, θ)}, n = 1, 2, 3, ..., N, where c_n denotes the center point coordinate of shape template n and θ denotes the rotation angle of the symmetry axis. The initial lens structure is denoted S_t = {x_j, y_j}, its center point coordinate c is computed from its L edge sampling points (L denotes the number of edge sampling points of the initial lens structure), and its normalization parameter is denoted z_t, where z_t = ||c - p_1||, with p_1 being the coordinate of the intersection of the symmetry axis with the boundary at which the corresponding distance is minimal during the rotation of the symmetry axis of the initial lens structure. Then, for each shape template, the similarity between the shape template and the initial lens structure is obtained through the following steps (a code sketch of these steps follows the list):

Step 1: compute the first intermediate value T_n: T_n = {f(c_n, θ)} × z_t.

Step 2: compute the second intermediate value T'_n: T'_n = T_n + offset, where offset denotes the preset offset and takes values in {-10, -9, ..., 9, 10}.

Step 3: compute the target value {f(c, θ)}.

Step 4: determine the similarity according to the target value and the second intermediate value. Specifically, compute the difference between the target value and the second intermediate value: D_n = f(c, θ) - T'_n. The smaller D_n is, the more similar the shape template is to the initial lens structure; in this way the similarity between each shape template and the initial lens structure is obtained.
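The following NumPy sketch walks through steps 1 to 4. It assumes each shape template is stored as a normalized radial profile sampled at the same angles as the test structure, approximates P_θ by the boundary sample nearest in angle, and aggregates D_n over angles and offsets by a mean absolute difference; these choices, like the function names, are illustrative assumptions rather than details fixed by the patent.

```python
import numpy as np

def encode_boundary(center, boundary_points, angle_step_deg=5):
    """Boundary encoding f(c, θ) = ||c - P_θ||: distance from the center to the
    boundary point lying on the symmetry axis at each rotation angle θ.

    P_θ is approximated by the boundary sample whose polar angle (relative to
    the reference line) is closest to θ; this discretization is an assumption
    of the sketch.
    """
    c = np.asarray(center, dtype=float)
    pts = np.asarray(boundary_points, dtype=float)
    angles = np.degrees(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])) % 360.0
    thetas = np.arange(0.0, 360.0, angle_step_deg)
    profile = np.empty_like(thetas)
    for i, theta in enumerate(thetas):
        nearest = np.argmin(np.abs((angles - theta + 180.0) % 360.0 - 180.0))
        profile[i] = np.linalg.norm(c - pts[nearest])      # ||c - P_θ||
    return profile

def best_matching_template(templates, center, boundary_points,
                           offsets=range(-10, 11)):
    """Steps 1-4 above: T_n = template x z_t, T'_n = T_n + offset,
    D_n = f(c, θ) - T'_n; the template with the smallest |D_n| is selected."""
    target = encode_boundary(center, boundary_points)      # {f(c, θ)} of the test structure
    z_t = target.min()                                     # normalization parameter z_t
    best_n, best_score = None, np.inf
    for n, template in enumerate(templates):               # template = normalized profile {f(c_n, θ)}
        t_n = np.asarray(template) * z_t                   # first intermediate value
        score = min(np.abs(target - (t_n + off)).mean() for off in offsets)
        if score < best_score:
            best_n, best_score = n, score
    return best_n
```

Selecting the shape template with the greatest similarity corresponds here to minimizing the returned score.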
The above embodiments describe how the shape template is used; the following describes how the shape template is obtained through training. Specifically, the shape template is obtained through training in the following manner:

obtaining the center point coordinate of the lens sample according to the boundary point coordinates of the lens sample; obtaining the coordinates of the intersections of the symmetry axis of the lens sample with the boundary of the lens sample, where the symmetry axis rotates from the reference line by a preset angle step; obtaining the distance from the center point of the lens sample to each intersection according to the center point coordinate and the intersection coordinates; normalizing the distances using the normalization parameter of the lens sample to obtain the nuclear boundary of the lens sample, the normalization parameter being the minimum of the corresponding distances obtained as the symmetry axis rotates; extracting the middle region of the nuclear boundary; and clustering the middle regions corresponding to M lens samples using a preset clustering algorithm to obtain N shape templates, where M and N are positive integers and M is greater than N.
The lens has an onion-like structure, and the lens nucleus is a smooth curved surface. Inspired by this, the lens nuclear structure shown in Fig. 3 is designed herein and encoded by the center point, the intersections of the symmetry axis with the boundary, and the distances between them. Different layers share the same center point and can be distinguished from one another by their distance from the center point.
Referring to Fig. 3, the boundary of lens sample m is denoted S_m = {x_i, y_i}, where (x_i, y_i) is the coordinate of the i-th sample point chosen on lens sample m, and the center point coordinate of lens sample m, obtained from these boundary points, is denoted c_m. The shape template corresponding to lens sample m is defined by the following formula:

f(c_m, θ) = ||c_m - p_θ||

where p_θ denotes the coordinate of the intersection of the symmetry axis of lens sample m with the boundary of lens sample m, and θ starts from the reference line (the dotted line in Fig. 3) and rotates in steps of a preset angle of 5 degrees. In this way, the boundaries in different images can be encoded as shown in Fig. 4, whose horizontal axis denotes θ and whose vertical axis denotes ||c_m - p_θ||. The shape template shown in Fig. 4 is normalized using the normalization parameter z_m = ||c_m - p_m1|| of lens sample m, yielding the shape templates shown in Fig. 5(a) and Fig. 5(b), where Fig. 5(a) shows the first shape template (symmetry axis rotating counterclockwise, 0 to 180 degrees) and Fig. 5(b) shows the second shape template (symmetry axis rotating counterclockwise, 180 to 360 degrees).
The middle region of the nuclear boundary is the thick-line portion in Fig. 3. The middle regions corresponding to the M lens samples are clustered using the preset clustering algorithm to obtain N shape templates, where M and N are positive integers and M is greater than N.

Optionally, the preset clustering algorithm may include any one of the following clustering algorithms: the K-means algorithm, the fuzzy C-means (FCM) clustering algorithm, etc. For detailed descriptions of the K-means and FCM clustering algorithms, reference may be made to the related art, which is not repeated here.
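As an illustration of this training step, the sketch below clusters M normalized boundary profiles into N shape templates with scikit-learn's K-means, standing in for the preset clustering algorithm (an FCM variant would require a separate implementation, for example from the scikit-fuzzy package). The array layout, the value of N and the synthetic example data are assumptions of the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_shape_templates(normalized_profiles, num_templates=8, random_state=0):
    """Cluster M normalized boundary profiles into N shape templates (M > N).

    `normalized_profiles` is an (M, K) array: one row per lens sample, each row
    the middle region of its nuclear boundary encoded as distances from the
    center and divided by the sample's normalization parameter z_m. The
    cluster centers returned by K-means serve as the N shape templates; the
    choice of N and of K-means (rather than FCM) is an assumption of this sketch.
    """
    profiles = np.asarray(normalized_profiles, dtype=float)
    kmeans = KMeans(n_clusters=num_templates, n_init=10,
                    random_state=random_state).fit(profiles)
    return kmeans.cluster_centers_            # shape (N, K): one template per row

# Example with synthetic data: 200 samples, 36 angular bins (5-degree steps over 180 degrees).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_profiles = 1.0 + 0.1 * rng.standard_normal((200, 36))
    templates = build_shape_templates(fake_profiles, num_templates=8)
    print(templates.shape)                    # (8, 36)
```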
Compared with current lens segmentation methods, the present invention has at least the following advantages:

(1) The present invention designs a fully automatic lens structure segmentation method based on deep learning. Since data acquisition is relatively difficult, existing lens segmentation schemes all use manual segmentation, whose consistency and accuracy depend on the experience of the person performing the segmentation; an automatic segmentation scheme is therefore highly significant for effective and stable segmentation.

(2) A shape-template matching method is designed according to the structural features of the lens nucleus, which makes the segmentation result close to the true physical structure. Because the features of the internal structure of the lens are taken into account, a shape template is designed to learn the shapes in the training samples and then correct the test data, so that the structure can be segmented effectively.

(3) The lens structure is segmented using a U-shaped fully convolutional network, which can be trained well and learn the features in the data.

(4) The method has strong anti-interference capability and good generalization ability.
Fig. 6 is a schematic structural diagram of the lens segmentation device provided by an embodiment of the present invention. As shown in Fig. 6, the lens segmentation device 60 comprises an extraction module 61 and a processing module 62, where:

the extraction module 61 is configured to extract a lens image region from an original image; and

the processing module 62 is connected to the extraction module 61 and is configured to obtain, by means of a preset neural network model, the initial lens structure in the lens image region obtained by the extraction module 61, and to perform edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training on lens samples.
Optionally, the shape template may include a first shape template and a second shape template, where the first shape template is used to perform edge-smoothing processing on the anterior lens nucleus of the initial lens structure, and the second shape template is used to perform edge-smoothing processing on the posterior lens nucleus of the initial lens structure.

In the above embodiment, there are multiple shape templates, and when the processing module 62 performs edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure, it is specifically configured to: compute the similarity between each of the multiple shape templates and the initial lens structure; and select the shape template with the greatest similarity to perform edge-smoothing processing on the initial lens structure, thereby obtaining the segmented lens structure.
Optionally, when computing the similarity between the multiple shape templates and the initial lens structure, the processing module 62 is specifically configured to:

for each shape template, obtain the similarity between the shape template and the initial lens structure through the following steps:

computing the product of the shape template and the normalization parameter of the initial lens structure as a first intermediate value, where the normalization parameter is the minimum of all the distances obtained as the symmetry axis of the initial lens structure rotates;

computing the sum of the first intermediate value and a preset offset as a second intermediate value;

performing boundary encoding on the initial lens structure according to formula (1) to obtain a target value;

determining the similarity according to the target value and the second intermediate value;

f(c, θ) = ||c - P_θ||    formula (1)

where c denotes the center point coordinate of the initial lens structure; P_θ denotes the coordinate of the intersection of the symmetry axis of the initial lens structure with the boundary of the initial lens structure, θ denotes the angle of the symmetry axis relative to the reference line, the symmetry axis starting from the reference line and rotating by a preset angle step; || · || denotes the norm; and {f(c, θ)} denotes the target value.
Further, the shape template may be obtained through training in the following manner:

obtaining the center point coordinate of the lens sample according to the boundary point coordinates of the lens sample;

obtaining the coordinates of the intersections of the symmetry axis of the lens sample with the boundary of the lens sample, where the symmetry axis rotates from the reference line by a preset angle step;

obtaining the distance from the center point of the lens sample to each intersection according to the center point coordinate and the intersection coordinates;

normalizing the distances using the normalization parameter of the lens sample to obtain the nuclear boundary of the lens sample, the normalization parameter being the minimum of the corresponding distances obtained as the symmetry axis rotates;

extracting the middle region of the nuclear boundary;

clustering the middle regions corresponding to M lens samples using a preset clustering algorithm to obtain N shape templates, where M and N are positive integers and M is greater than N.

The preset clustering algorithm includes any one of the following clustering algorithms: the K-means algorithm, the FCM clustering algorithm, etc.
In addition, the extraction module 61 may be specifically configured to extract the lens image region from the original image using the Canny edge detection technique.

The lens segmentation device provided by this embodiment first extracts a lens image region from the original image; then it obtains the initial lens structure in the lens image region by means of the preset neural network model, and performs edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training on lens samples. Automatic segmentation of the lens structure can be achieved through the preset neural network model and the shape template, thereby improving the accuracy of lens structure segmentation while reducing labor cost.
Fig. 7 is a schematic structural diagram of the lens segmentation device provided by another embodiment of the present invention. As shown in Fig. 7, the lens segmentation device 70 comprises a memory 71, a processor 72, and a computer program stored on the memory 71 for execution by the processor 72. The processor 72 executes the computer program so that the lens segmentation device 70 performs the following operations:

extracting a lens image region from an original image;

obtaining an initial lens structure in the lens image region by means of a preset neural network model;

performing edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training on lens samples.
It should be noted that the embodiment of the present invention does not limit the numbers of memories 71 and processors 72; there may be one or more of each, and Fig. 7 takes one of each as an example. The memory 71 and the processor 72 may be connected in a wired or wireless manner in various ways.
In some embodiments, the shape template includes a first shape template and a second shape template, where the first shape template is used to perform edge-smoothing processing on the anterior lens nucleus of the initial lens structure, and the second shape template is used to perform edge-smoothing processing on the posterior lens nucleus of the initial lens structure.

Optionally, there are multiple shape templates, and performing edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure may include: computing the similarity between each of the multiple shape templates and the initial lens structure; and selecting the shape template with the greatest similarity to perform edge-smoothing processing on the initial lens structure, thereby obtaining the segmented lens structure.

Further, computing the similarity between the multiple shape templates and the initial lens structure may include: for each shape template, obtaining the similarity between the shape template and the initial lens structure through the following steps: computing the product of the shape template and the normalization parameter of the initial lens structure as a first intermediate value, where the normalization parameter is the minimum of all the distances obtained as the symmetry axis of the initial lens structure rotates; computing the sum of the first intermediate value and a preset offset as a second intermediate value; performing boundary encoding on the initial lens structure according to formula (1) to obtain a target value; and determining the similarity according to the target value and the second intermediate value.

Optionally, the shape template may be obtained through training in the following manner: obtaining the center point coordinate of the lens sample according to the boundary point coordinates of the lens sample; obtaining the coordinates of the intersections of the symmetry axis of the lens sample with the boundary of the lens sample, where the symmetry axis rotates from the reference line by a preset angle step; obtaining the distance from the center point of the lens sample to each intersection according to the center point coordinate and the intersection coordinates; normalizing the distances using the normalization parameter of the lens sample to obtain the nuclear boundary of the lens sample, the normalization parameter being the minimum of the corresponding distances obtained as the symmetry axis rotates; extracting the middle region of the nuclear boundary; and clustering the middle regions corresponding to M lens samples using a preset clustering algorithm to obtain N shape templates, where M and N are positive integers and M is greater than N.

The preset clustering algorithm may include any one of the following clustering algorithms: the K-means algorithm or the fuzzy C-means (FCM) clustering algorithm.

In addition, extracting the lens image region from the original image may include: extracting the lens image region from the original image using the Canny edge detection technique.
On this basis, the lens segmentation device 70 may further output the segmented lens structure. Therefore, the lens segmentation device 70 may further include a display screen 73 for outputting the segmented lens structure.

The display screen 73 may be a capacitive screen, an electromagnetic screen or an infrared screen. In general, the display screen 73 is used to display data according to instructions from the processor 72, and is also used to receive touch operations acting on the display screen 73 and send the corresponding signals to the processor 72 or other components of the lens segmentation device 70. Optionally, when the display screen 73 is an infrared screen, it further includes an infrared touch frame arranged around the display screen 73, which may also be used to receive infrared signals and send them to the processor 72 or other components of the lens segmentation device 70.
An embodiment of the present invention also provides a computer-readable storage medium, including computer-readable instructions which, when read and executed by a processor, cause the processor to perform the steps in any of the above embodiments.

Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some or all of the technical features with equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A lens segmentation method, characterized by comprising:

extracting a lens image region from an original image;

obtaining an initial lens structure in the lens image region by means of a preset neural network model;

performing edge-smoothing processing on the initial lens structure using a shape template to obtain a segmented lens structure, the shape template being obtained by training on lens samples.
2. The method according to claim 1, characterized in that the shape template comprises a first shape template and a second shape template, wherein the first shape template is used to perform edge-smoothing processing on the anterior lens nucleus of the initial lens structure, and the second shape template is used to perform edge-smoothing processing on the posterior lens nucleus of the initial lens structure.
3. The method according to claim 1, characterized in that there are multiple shape templates, and performing edge-smoothing processing on the initial lens structure using a shape template to obtain the segmented lens structure comprises:

computing the similarity between each of the multiple shape templates and the initial lens structure;

selecting the shape template with the greatest similarity to perform edge-smoothing processing on the initial lens structure, thereby obtaining the segmented lens structure.
4. The method according to claim 3, characterized in that computing the similarity between the multiple shape templates and the initial lens structure comprises:

for each shape template, obtaining the similarity between the shape template and the initial lens structure through the following steps:

computing the product of the shape template and a normalization parameter of the initial lens structure as a first intermediate value, wherein the normalization parameter is the minimum of all the distances obtained as the symmetry axis of the initial lens structure rotates;

computing the sum of the first intermediate value and a preset offset as a second intermediate value;

performing boundary encoding on the initial lens structure according to formula (1) to obtain a target value;

determining the similarity according to the target value and the second intermediate value;

f(c, θ) = ||c - P_θ||    formula (1)

wherein c denotes the center point coordinate of the initial lens structure; P_θ denotes the coordinate of the intersection of the symmetry axis of the initial lens structure with the boundary of the initial lens structure, θ denotes the angle of the symmetry axis relative to a reference line, the symmetry axis starting from the reference line and rotating by a preset angle step; || · || denotes the norm; and {f(c, θ)} denotes the target value.
5. The method according to claim 1, characterized in that the shape template is obtained through training in the following manner:

obtaining the center point coordinate of the lens sample according to the boundary point coordinates of the lens sample;

obtaining the coordinates of the intersections of the symmetry axis of the lens sample with the boundary of the lens sample, wherein the symmetry axis rotates from a reference line by a preset angle step;

obtaining the distance from the center point of the lens sample to each intersection according to the center point coordinate and the intersection coordinates;

normalizing the distances using the normalization parameter of the lens sample to obtain the nuclear boundary of the lens sample, the normalization parameter being the minimum of the corresponding distances obtained as the symmetry axis rotates;

extracting the middle region of the nuclear boundary;

clustering the middle regions corresponding to M lens samples using a preset clustering algorithm to obtain N shape templates, wherein M and N are positive integers and M is greater than N.
6. The method according to claim 5, characterized in that the preset clustering algorithm comprises any one of the following clustering algorithms:

the K-means algorithm, the fuzzy C-means (FCM) clustering algorithm.
7. The method according to any one of claims 1-5, characterized in that extracting the lens image region from the original image comprises:

extracting the lens image region from the original image using the Canny edge detection technique.
8. A lens segmentation device, characterized by comprising:

an extraction module, configured to extract a lens image region from an original image;

a processing module, configured to obtain an initial lens structure in the lens image region by means of a preset neural network model, and to perform edge-smoothing processing on the initial lens structure using a shape template to obtain a segmented lens structure, the shape template being obtained by training on lens samples.
9. A lens segmentation device, characterized by comprising a memory, a processor, and a computer program stored on the memory for execution by the processor;

the processor executes the computer program to perform the following operations:

extracting a lens image region from an original image;

obtaining an initial lens structure in the lens image region by means of a preset neural network model;

performing edge-smoothing processing on the initial lens structure using a shape template to obtain a segmented lens structure, the shape template being obtained by training on lens samples.
10. A computer-readable storage medium, characterized by comprising computer-readable instructions which, when read and executed by a processor, cause the processor to perform the following operations:

extracting a lens image region from an original image;

obtaining an initial lens structure in the lens image region by means of a preset neural network model;

performing edge-smoothing processing on the initial lens structure using a shape template to obtain a segmented lens structure, the shape template being obtained by training on lens samples.
CN201910412464.6A 2019-05-17 2019-05-17 Crystalline lens segmentation method, device and storage medium Pending CN110176007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910412464.6A CN110176007A (en) Crystalline lens segmentation method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910412464.6A CN110176007A (en) Crystalline lens segmentation method, device and storage medium

Publications (1)

Publication Number Publication Date
CN110176007A 2019-08-27

Family

ID=67691473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910412464.6A Pending CN110176007A (en) Crystalline lens segmentation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110176007A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210404A (en) * 2019-12-24 2020-05-29 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Method and device for classifying lens segmentation difficulty
CN115712363A (en) * 2022-11-21 2023-02-24 北京中科睿医信息科技有限公司 Interface color display method, device, equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1776717A (en) * 2005-12-01 2006-05-24 上海交通大学 Method for identifying shoes print at criminal scene
CN101256631A (en) * 2007-02-26 2008-09-03 富士通株式会社 Method, apparatus, program and readable storage medium for character recognition
CN101719272A (en) * 2009-11-26 2010-06-02 上海大学 Three-dimensional image segmentation method based on three-dimensional improved pulse coupled neural network
CN102254172A (en) * 2011-06-16 2011-11-23 电子科技大学 Method for segmenting fingerprint image based on cellular neural network and morphology
CN103984416A (en) * 2014-06-10 2014-08-13 北京邮电大学 Gesture recognition method based on acceleration sensor
CN106023220A (en) * 2016-05-26 2016-10-12 史方 Vehicle exterior part image segmentation method based on deep learning
CN106780454A (en) * 2016-12-08 2017-05-31 苏州汉特士视觉科技有限公司 Vision positioning method and automatic feed dividing feeding device based on edge back projection

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1776717A (en) * 2005-12-01 2006-05-24 上海交通大学 Method for identifying shoes print at criminal scene
CN101256631A (en) * 2007-02-26 2008-09-03 富士通株式会社 Method, apparatus, program and readable storage medium for character recognition
CN101719272A (en) * 2009-11-26 2010-06-02 上海大学 Three-dimensional image segmentation method based on three-dimensional improved pulse coupled neural network
CN102254172A (en) * 2011-06-16 2011-11-23 电子科技大学 Method for segmenting fingerprint image based on cellular neural network and morphology
CN103984416A (en) * 2014-06-10 2014-08-13 北京邮电大学 Gesture recognition method based on acceleration sensor
CN106023220A (en) * 2016-05-26 2016-10-12 史方 Vehicle exterior part image segmentation method based on deep learning
CN106780454A (en) * 2016-12-08 2017-05-31 苏州汉特士视觉科技有限公司 Vision positioning method and automatic feed dividing feeding device based on edge back projection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
看准资讯: "What has the golden ratio turned celebrity faces into" (黄金比例把明星脸都变成了什么鬼), Sohu.com (搜狐网) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210404A (en) * 2019-12-24 2020-05-29 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Method and device for classifying lens segmentation difficulty
CN115712363A (en) * 2022-11-21 2023-02-24 北京中科睿医信息科技有限公司 Interface color display method, device, equipment and medium

Similar Documents

Publication Publication Date Title
Li et al. Computer‐assisted diagnosis for diabetic retinopathy based on fundus images using deep convolutional neural network
TWI715117B (en) Method, device and electronic apparatus for medical image processing and storage mdeium thereof
Gao et al. Classification of CT brain images based on deep learning networks
US10223610B1 (en) System and method for detection and classification of findings in images
US20220198230A1 (en) Auxiliary detection method and image recognition method for rib fractures based on deep learning
Banar et al. Towards fully automated third molar development staging in panoramic radiographs
CN104573309B (en) Device and method for computer-aided diagnosis
Van Rikxoort et al. Automatic lung segmentation from thoracic computed tomography scans using a hybrid approach with error detection
Shao et al. Brain ventricle parcellation using a deep neural network: Application to patients with ventriculomegaly
US20210236080A1 (en) Cta large vessel occlusion model
CN109872325B (en) Full-automatic liver tumor segmentation method based on two-way three-dimensional convolutional neural network
CN106682435A (en) System and method for automatically detecting lesions in medical image through multi-model fusion
JP2021002338A (en) Method and system for image segmentation and identification
CN107977952A (en) Medical image cutting method and device
Sreelakshmy et al. [Retracted] An Automated Deep Learning Model for the Cerebellum Segmentation from Fetal Brain Images
Almotiri et al. A multi-anatomical retinal structure segmentation system for automatic eye screening using morphological adaptive fuzzy thresholding
US20240185428A1 (en) Medical Image Analysis Using Neural Networks
Yang et al. A deep learning segmentation approach in free‐breathing real‐time cardiac magnetic resonance imaging
David et al. Retinal Blood Vessels and Optic Disc Segmentation Using U‐Net
US11475568B2 (en) Method for controlling display of abnormality in chest x-ray image, storage medium, abnormality display control apparatus, and server apparatus
CN110176007A (en) Crystalline lens dividing method, device and storage medium
CN110009641A (en) Lens segmentation method, device and storage medium
Li et al. A deep-learning method for the end-to-end prediction of intracranial aneurysm rupture risk
Arzhaeva et al. Computer‐aided detection of interstitial abnormalities in chest radiographs using a reference standard based on computed tomography
Zheng et al. Adaptive segmentation of vertebral bodies from sagittal MR images based on local spatial information and Gaussian weighted chi-square distance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190827)