Summary of the Invention
To address the problems described above, the present invention proposes a tissue classification method and device based on cardiovascular IVOCT images, of which specific embodiments are as follows.
An embodiment of the present invention provides a tissue classification method based on cardiovascular IVOCT (intravascular optical coherence tomography) images, wherein the method includes:
Step 1: obtaining multiple labeled IVOCT images;
Step 2: establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a test sample set;
Step 3: constructing a convolutional neural network structure;
Step 4: training the convolutional neural network with the training sample set to obtain a CNN model;
Step 5: inputting the test sample set into the CNN model to obtain the tissue type maps corresponding to different tissues.
In one embodiment of the invention, Step 2 includes:
Step 21: applying transformations of various forms to each labeled IVOCT image to obtain multiple transformed images, and setting each of the transformed images as one sample, wherein the transformations include one of, or a combination of, cropping, translation, flipping, rotation, deformation, and gray-value variation;
Step 22: setting the multiple samples as the IVOCT image sample set.
In one embodiment of the invention, the following is further included before Step 4:
setting training labels according to the tissue types contained in diseased vessels, the training labels including segmentation labels and boundary labels, where the segmentation labels and the boundary labels are used to train the convolutional neural network.
In one embodiment of the invention, the diseased vessels contain N tissue types; correspondingly, setting the training labels includes:
setting N kinds of segmentation labels and N kinds of boundary labels, where N is a positive integer greater than 1.
In one embodiment of the invention, the CNN model includes an input terminal, a first output terminal, and a second output terminal;
the first output terminal is used to output N segmentation maps;
the second output terminal is used to output N boundary maps;
each segmentation map is used to show the structure of a tissue type contained in the cardiovascular IVOCT image fed to the input terminal, and each boundary map is used to show the contour of a tissue type contained in that image;
wherein the segmentation maps and the boundary maps are binary maps.
In one embodiment of the invention, the following is further included after Step 5:
Step 6: superimposing the segmentation map and the boundary map corresponding to each tissue type to obtain the structure chart of that tissue type;
Step 7: combining the structure charts of the different tissue types to obtain the cardiovascular tissue classification chart.
Another embodiment of the present invention provides a tissue classification device based on cardiovascular IVOCT images, including a digital signal processing unit and a storage unit, where the storage unit is used to store processing instructions, and the processing instructions are executed by the digital signal processing unit to implement the steps of any of the methods above.
The beneficial effects of the present invention are:
1. By establishing a convolutional neural network structure having one contracting path and two expanding paths, a lateral Y-shaped network structure is formed, and the CNN model is then optimized through network training. In the embodiment of the present invention, the CNN model has two output terminals that display the contours and the internal structures of tissues separately, which solves the prior-art technical problem that the resolution of the output image is slightly reduced and part of the image information is lost, causing tissue boundaries to be displayed unclearly.
2. The present invention targets tissue classification of cardiovascular IVOCT images and establishes a newly optimized CNN model. The first output terminal outputs the structures of the different tissue types in the cardiovascular IVOCT image, and the second output terminal outputs the boundary contours of those tissue types, so that the boundary and contour of each tissue are displayed separately; the boundary maps and segmentation maps are then superimposed to form a complete tissue chart. This compensates for the image information loss caused by the reduced resolution of the output image, yields a high degree of structural restoration, and produces a better image display effect.
Embodiment one
As shown in Fig. 1 to Fig. 5: Fig. 1 is a flowchart of the tissue classification method provided by an embodiment of the present invention; Fig. 2 is a structure chart of the convolutional neural network provided by the embodiment; Fig. 3(a) is a schematic diagram of the CNN model provided by the embodiment; Fig. 3(b) is a diagram of the CNN model provided by the embodiment in use; Fig. 4(a) is a cardiovascular IVOCT image fed to the input terminal of the CNN model; Fig. 4(b) is a tissue segmentation map output by the first output terminal of the CNN model; Fig. 4(c) is a tissue boundary map output by the second output terminal of the CNN model; Fig. 5 is the combined cardiovascular tissue classification chart provided by the embodiment. The embodiment of the present invention provides a tissue classification method based on cardiovascular IVOCT images, wherein the method includes:
Step 1: obtaining multiple labeled IVOCT images.
Specifically, in the embodiment of the present invention, multiple labeled IVOCT images are obtained. In clinical practice, medical personnel manually label and classify the various tissue types in IVOCT images that have already been acquired. In practice, each IVOCT image may contain only some of the tissue types rather than all of them, so multiple IVOCT images need to be collected in order to obtain structure charts of all the tissue types.
Step 2: establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a test sample set.
Further, the labeled IVOCT images are set as one image sample set. Because the number of manually labeled images is limited, and subsequent network training requires a large number of training and test samples, the labeled IVOCT images need to be augmented to increase the number of samples and facilitate network training. Specifically, the sample augmentation method is:
Step 21: applying transformations of various forms to each labeled IVOCT image to obtain multiple transformed images, and setting each of the transformed images as one sample.
Specifically, the transformations applied to each labeled IVOCT image include cropping, translation, flipping, rotation, deformation, gray-value variation, and the like, as well as multiple combinations of these, for example: cropping and translation; cropping and flipping; cropping and rotation; cropping and deformation; cropping and gray-value variation; cropping, translation, and flipping; cropping, translation, and rotation; cropping, translation, and deformation; cropping, translation, and gray-value variation; cropping, translation, flipping, and rotation; cropping, translation, flipping, and deformation; cropping, translation, flipping, and gray-value variation; cropping, translation, flipping, rotation, and deformation; cropping, translation, flipping, rotation, and gray-value variation; cropping, translation, flipping, rotation, deformation, and gray-value variation; translation and flipping; translation and rotation; and so on. It should be noted that cropping further includes different modes such as beveled, trimmed, or diamond-shaped crops, and flipping further includes different cases such as flipping by 30 degrees or by 90 degrees. It follows that, in the embodiment of the present invention, one IVOCT image can be expanded into multiple images. Expanding one IVOCT image into multiple images in this way compensates in advance for the loss in the output image, solving the prior-art technical problem that the small resolution of the output image causes image information to be lost; by feeding the various deformations of one image as input, it is ensured that the output image can be reproduced completely.
Step 22: setting the multiple samples as the IVOCT image sample set.
Each deformed IVOCT image is set as one sample, so a large number of samples can be obtained, and these samples form the augmented IVOCT image sample set. A portion of the image sample set is then selected as training samples, and the remainder serves as test samples, where the training samples are used to train the classification network and the test samples are used to test the trained network and judge the accuracy of the classification network.
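For illustration only, the augmentation of Step 21 and the split of Step 22 can be sketched as follows. This is a minimal NumPy example assuming grayscale images normalized to [0, 1]; the helper names and the particular subset of transformations are hypothetical, not the exact pipeline of this embodiment.

```python
import numpy as np

def augment(image):
    """Generate several transformed copies of one labeled IVOCT image.

    Illustrative subset of the transformations named above: flip,
    rotation, translation, crop, and gray-value variation.
    """
    h, w = image.shape
    samples = [image]
    samples.append(np.fliplr(image))                 # horizontal flip
    samples.append(np.rot90(image))                  # 90-degree rotation
    samples.append(np.roll(image, shift=8, axis=1))  # translation
    samples.append(image[4:h - 4, 4:w - 4])          # central crop
    samples.append(np.clip(image * 1.2, 0.0, 1.0))   # gray-value variation
    return samples

def split_samples(samples, train_fraction=0.8, seed=0):
    """Divide the augmented sample set into training and test subsets."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(samples))
    cut = int(len(samples) * train_fraction)
    train = [samples[i] for i in order[:cut]]
    test = [samples[i] for i in order[cut:]]
    return train, test

# One 64x64 image expands to six samples, then is split roughly 80/20.
image = np.random.default_rng(1).random((64, 64))
samples = augment(image)
train, test = split_samples(samples)
```

In a real pipeline each transformed image would keep its manual labels transformed in the same way, so that every sample remains a labeled sample.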
Step 3: constructing a convolutional neural network structure.
Further, as shown in Fig. 2, the embodiment of the present invention needs to establish a network for classifying IVOCT images. Specifically, a new convolutional neural network structure is established, which has one contracting path and two expanding paths.
The contracting path consists of a typical convolutional network and is a repeated structure: two repeated convolutions (the image is padded before each convolution), each convolution followed by a rectified linear unit (ReLU); the contracting path also includes max-pooling downsampling with a stride of 2. With each downsampling, the number of feature channels is doubled. Each step of an expanding path performs a deconvolution (upsampling) operation on the feature map, halving the number of feature channels. In the expanding network, the contracting output corresponding to each level of the expanding path is combined with the expanding path, followed by two convolution operations, each convolution followed by a rectified linear unit. Applying every contracting-layer output to the expanding path is necessary: since upsampling is realized by deconvolution, if four layers of deconvolution were simply applied to the output of the fifth layer, an image of the same size as the original could be obtained, but the result would not be fine enough, i.e., the classification of detailed regions would not be accurate enough. Therefore, in the present invention, the output of every downsampling layer also undergoes the deconvolution operation. Since different tissue types in IVOCT images are often close together, the second output exists to prevent adjacent tissues from being connected together and causing misjudgment.
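The structure above, one contracting path whose skip features feed two expanding paths, can be sketched as follows. This is a shallow two-level PyTorch example for illustration only; the depth, channel counts, and class name are assumptions, not the network of Fig. 2.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two padded convolutions, each followed by a ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class YNet(nn.Module):
    """One contracting path; two expanding paths sharing its skip features."""
    def __init__(self, n_types=11, base=16):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)      # channels double per level
        self.bottom = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)                 # stride-2 downsampling
        # Two identical decoders: one for segmentation, one for boundaries.
        self.dec, self.heads = nn.ModuleList(), nn.ModuleList()
        for _ in range(2):
            self.dec.append(nn.ModuleDict({
                "up2": nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2),
                "conv2": conv_block(base * 4, base * 2),
                "up1": nn.ConvTranspose2d(base * 2, base, 2, stride=2),
                "conv1": conv_block(base * 2, base)}))
            self.heads.append(nn.Conv2d(base, n_types, 1))

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        b = self.bottom(self.pool(s2))
        outs = []
        for dec, head in zip(self.dec, self.heads):
            # Deconvolution halves channels; skip features are concatenated.
            y = dec["conv2"](torch.cat([dec["up2"](b), s2], dim=1))
            y = dec["conv1"](torch.cat([dec["up1"](y), s1], dim=1))
            outs.append(torch.sigmoid(head(y)))     # per-type binary-map channels
        return outs[0], outs[1]                     # segmentation, boundary

seg, boundary = YNet()(torch.rand(1, 1, 64, 64))
```

Note how every contracting-level output (s1, s2) is applied to both expanding paths, rather than only deconvolving the deepest feature map, which matches the fineness argument made above.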
Step 4: training the convolutional neural network with the training sample set to obtain a CNN model.
Specifically, the images in the training sample set are input in sequence into the convolutional neural network established above, and cyclic training is performed on the parameter data of the convolutional layers and/or fully connected layers in the convolutional neural network model to obtain image data of a preset format, thereby establishing the CNN model, as shown in Fig. 3(a). However, the preset-format image data output by the initial CNN model differs considerably from the target files, so a large number of images from the training sample set need to be input cyclically into the convolutional neural network to optimize the CNN model. It should be noted that the target files in the embodiment of the present invention are the training labels, which are obtained by analyzing the different tissue types contained in diseased vessels.
Specifically, the training labels are essentially images of all the tissue types in the labeled IVOCT images. In the embodiment of the present invention, the training labels are divided into segmentation labels and boundary labels: a segmentation label represents the structure of a tissue type, and a boundary label represents the contour of a tissue type. When the convolutional neural network is trained, the segmentation labels and boundary labels are used to make the displayed output of the convolutional neural network continually approach, and eventually equal, the segmentation labels and boundary labels.
The diseased vessels contain N tissue types, and each tissue type is assigned one segmentation label and one boundary label, so N kinds of segmentation labels and N kinds of boundary labels need to be set in total; in other words, there are N tissue types to be classified.
As far as the tissue types in diseased cardiovascular vessels that can be confirmed by current medical knowledge are concerned, there are 11 kinds, namely: bifurcation, fibrous plaque, calcified plaque, lipid plaque, fibrocalcific plaque, fibroatheromatous plaque, red/white thrombus, guidewire, catheter, thin-cap fibroatheroma, and vessel wall; that is, N takes the value 11.
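For reference, one possible index assignment for these 11 tissue types, consistent with the numbering used later in this embodiment (No. 1 bifurcation, No. 2 fibrous plaque, No. 3 calcified plaque, ..., No. 11 vessel wall), can be written as a simple table. The ordering of the intermediate entries follows the list above and is illustrative only.

```python
# Hypothetical index table; numbering matches the examples in this embodiment.
TISSUE_TYPES = {
    1: "bifurcation",
    2: "fibrous plaque",
    3: "calcified plaque",
    4: "lipid plaque",
    5: "fibrocalcific plaque",
    6: "fibroatheromatous plaque",
    7: "red/white thrombus",
    8: "guidewire",
    9: "catheter",
    10: "thin-cap fibroatheroma",
    11: "vessel wall",
}
N = len(TISSUE_TYPES)
```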
It should be noted that, in light of actual application situations, the value of N is not limited to 11. With the development of medical technology, newly discovered pathological tissues may increase the number of tissue kinds; alternatively, in practical applications only a part of the currently known pathological tissues may be selected for classification, for example 4 or 5 kinds, with an equal number of training labels set correspondingly to train the convolutional neural network. All of these fall within the protection scope of the embodiment of the present invention.
In the embodiment of the present invention, taking N = 11 as an example, 11 tissue types in total are confirmed by analysis of cardiovascular OCT images; the structure charts of the 11 tissue types are generated as the 11 kinds of segmentation labels, and the boundary charts of the 11 tissue types are generated as the 11 kinds of boundary labels. The pictures in the training sample set are then input for training: each time, one training image is input through the contracting path of the convolutional neural network, and 11 segmentation maps and 11 boundary maps are output from the two expanding paths respectively; the 11 segmentation maps correspond to the 11 kinds of segmentation labels, and the 11 boundary maps correspond to the 11 kinds of boundary labels. Specifically, for example: segmentation label No. 1 represents the structure of a bifurcation, segmentation label No. 2 represents the structure of fibrous plaque, segmentation label No. 3 represents the structure of calcified plaque, and so on, with segmentation label No. 11 representing the structure of the vessel wall; boundary label No. 1 represents the contour of a bifurcation, boundary label No. 2 represents the contour of fibrous plaque, boundary label No. 3 represents the contour of calcified plaque, and so on, with boundary label No. 11 representing the contour of the vessel wall. Among the 11 segmentation maps and 11 boundary maps thus output, map No. 1 represents the structure and contour of the bifurcation in the input image, map No. 2 represents the structure and contour of fibrous plaque, map No. 3 represents the structure and contour of calcified plaque, and so on, with map No. 11 representing the structure and contour of the vessel wall. The purpose of network training is to make any input image be output as an image of the preset format, where the image of the preset format is determined by the 11 kinds of segmentation labels and 11 kinds of boundary labels described above. Through a large amount of cyclic training, the preset-format image output continually approaches, and even becomes identical to, the set training labels; at this point the optimized CNN model is formed.
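The cyclic training described above, driving the two outputs toward the segmentation labels and boundary labels respectively, can be sketched as follows. For brevity this PyTorch example uses a stand-in two-output network (a single shared convolution plus two heads) rather than the full Y-network, and random binary label maps; the loss choice (per-channel binary cross-entropy) is an assumption, as this embodiment does not name a loss function.

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Stand-in for the two-output CNN model: one shared trunk, two heads."""
    def __init__(self, n_types=11):
        super().__init__()
        self.shared = nn.Conv2d(1, 8, 3, padding=1)
        self.seg_head = nn.Conv2d(8, n_types, 1)   # first output terminal
        self.bnd_head = nn.Conv2d(8, n_types, 1)   # second output terminal

    def forward(self, x):
        f = torch.relu(self.shared(x))
        return torch.sigmoid(self.seg_head(f)), torch.sigmoid(self.bnd_head(f))

model = TwoHeadNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCELoss()

# Hypothetical mini-batch: one image with its 11 segmentation labels and
# 11 boundary labels (binary maps), stacked along the channel dimension.
image = torch.rand(1, 1, 32, 32)
seg_labels = (torch.rand(1, 11, 32, 32) > 0.5).float()
bnd_labels = (torch.rand(1, 11, 32, 32) > 0.5).float()

for step in range(20):                              # cyclic training
    optimizer.zero_grad()
    seg_out, bnd_out = model(image)
    # Each output terminal is driven toward its own label set.
    loss = bce(seg_out, seg_labels) + bce(bnd_out, bnd_labels)
    loss.backward()
    optimizer.step()
```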
Further, in practical applications, as shown in Fig. 3(b), corresponding to the one contracting path of the convolutional neural network structure, the CNN model includes one input terminal; corresponding to the two expanding paths of the convolutional neural network structure, the CNN model includes a first output terminal and a second output terminal. The first output terminal is used to output N segmentation maps, and the second output terminal is used to output N boundary maps. For example, when the cardiovascular IVOCT image input from the input terminal contains only 2 kinds of diseased tissue, for example No. 4 lipid plaque and No. 10 thin-cap fibroatheroma, the first output terminal of the corresponding CNN model still outputs 11 segmentation maps and the second output terminal still outputs 11 boundary maps, but only maps No. 4 and No. 10 display tissue structure and tissue contour respectively, while the remaining segmentation maps and boundary maps display no content. This is because the cardiovascular IVOCT image fed to the input terminal does not contain the other diseased tissues, so the segmentation maps and boundary maps corresponding to those tissues display nothing.
In the embodiment of the present invention, the segmentation maps and boundary maps are binary maps.
Step 5: inputting the test sample set into the CNN model to obtain the tissue type maps corresponding to different tissues.
The optimized CNN model needs its accuracy tested with the test samples. During testing, a cardiovascular IVOCT image is input at the input terminal of the CNN model, N segmentation maps and N boundary maps are then output from the first and second output terminals of the CNN model, and the output segmentation maps and boundary maps are compared with standard images to determine the reliability and robustness of the CNN model.
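The comparison of an output map against its standard image can be quantified, for example, with an overlap score. This embodiment does not prescribe a metric; the Dice coefficient below is one common choice for binary maps, shown here as an illustrative NumPy sketch.

```python
import numpy as np

def dice(pred, label, eps=1e-7):
    """Dice overlap between a predicted binary map and its standard image."""
    inter = np.logical_and(pred, label).sum()
    return (2.0 * inter + eps) / (pred.sum() + label.sum() + eps)

# Toy standard image: a 4x4 filled region inside an 8x8 map.
label = np.zeros((8, 8), dtype=bool)
label[2:6, 2:6] = True

perfect = dice(label, label)      # perfect agreement -> score near 1
half = label.copy()
half[2:6, 2:4] = False            # prediction misses half the region
partial = dice(half, label)       # 2*8 / (8 + 16) = 2/3
```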
Once the CNN model is confirmed to be in good condition, it can be put into use. As shown in Fig. 4(a) to Fig. 4(c), a target image is input and the output results are obtained: the output segmentation maps and boundary maps are the structure charts and contour charts of the corresponding tissue types, thereby achieving artificial-intelligence classification of tissue types. Moreover, because the embodiment of the present invention uses two kinds of output, the different tissue types are distinguished on the one hand, and the contours of the different tissues are determined on the other, avoiding misjudgment of tissue structure when two adjacent tissue types are so close together that their boundary is blurred and hard to distinguish; the tissue classification effect is therefore better.
It should be noted that the tissue classification method provided by the embodiment of the present invention not only classifies diseased tissues, but can also distinguish all the other tissues present in the cardiovascular IVOCT image and display them through different tissue type maps.
It should be noted that, after the classification maps and boundary maps are obtained, in order to present the state of each tissue type in the cardiovascular vessel to the user more intuitively, the embodiment of the present invention further includes the following:
Step 6: superimposing the segmentation map and the boundary map corresponding to each tissue type to obtain the structure chart of that tissue type.
Specifically, segmentation map No. 1 and boundary map No. 1 are superimposed so that both the boundary and the structure of the vessel-bifurcation tissue are clear; segmentation map No. 2 and boundary map No. 2 are then superimposed, and so on, obtaining the structure chart of each tissue type.
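Since both maps are binary, the superposition of Step 6 can be read as a pixelwise union, sketched below with a toy region and its contour. The helper name is illustrative; this embodiment does not specify the exact superposition operator.

```python
import numpy as np

def overlay(segmentation, boundary):
    """Superimpose one tissue type's binary segmentation map and binary
    boundary map into a single structure chart (pixelwise union)."""
    return np.logical_or(segmentation, boundary)

# Toy 6x6 example: a 2x2 filled region plus a surrounding one-pixel ring.
seg = np.zeros((6, 6), dtype=bool)
seg[2:4, 2:4] = True                      # structure (segmentation map)
bnd = np.zeros((6, 6), dtype=bool)
bnd[1:5, 1] = bnd[1:5, 4] = True          # contour (boundary map)
bnd[1, 1:5] = bnd[4, 1:5] = True
chart = overlay(seg, bnd)                 # boundary and structure together
```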
Step 7: combining the structure charts of the different tissue types to obtain the cardiovascular tissue classification chart.
After the structure chart of each tissue type is obtained, since the structure charts of these tissue types are all classified out of the same cardiovascular IVOCT image fed from the input terminal, the structure charts of these tissue types are recombined into one image to obtain the cardiovascular tissue classification chart. From the cardiovascular tissue classification chart one can thus observe which tissue types are present in the cardiovascular IVOCT image, how those tissue types are distributed, where the respective boundaries of two adjacent tissues lie, how they connect, and other information, so as to provide the user with a specific, detailed, and intuitive experience.
It should be noted that, since the segmentation maps and boundary maps are binary maps displayed in black and white, the classification results of the tissue types cannot be shown after the structure charts of the different tissue types are simply superimposed. Therefore, in the embodiment of the present invention, before the structure charts of the different tissue types are combined, the structure chart of each tissue is colored, as shown in Fig. 5, to display the different tissue types.
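The coloring-then-combining of Step 7 can be sketched as follows: each binary structure chart is painted with its own color and merged into one RGB classification chart. The color table and the two-type example are hypothetical; Fig. 5 does not fix particular colors.

```python
import numpy as np

# Hypothetical color table: one RGB color per tissue type index.
COLORS = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255)}

def combine(structure_charts, colors):
    """Color each tissue type's binary structure chart, then merge all of
    them into one RGB cardiovascular tissue classification chart."""
    h, w = next(iter(structure_charts.values())).shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for idx, chart in structure_charts.items():
        out[chart] = colors[idx]          # paint this tissue's pixels
    return out

# Toy input: tissue type 1 occupies the top half, type 2 the bottom half.
charts = {
    1: np.zeros((8, 8), dtype=bool),
    2: np.zeros((8, 8), dtype=bool),
}
charts[1][0:4, :] = True
charts[2][4:8, :] = True
classification = combine(charts, COLORS)
```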
The embodiment of the present invention also provides a tissue classification device based on cardiovascular IVOCT images. The device includes a digital signal processing unit and a storage unit, where the storage unit is used to store processing instructions, and the processing instructions are executed by the digital signal processing unit to implement the steps involved in the above embodiments.
In conclusion specific case used herein is based on angiocarpy IVOCT to one kind provided in an embodiment of the present invention
The tissue classification procedure of image and the principle of device and embodiment are set forth, and the explanation of above example is only intended to side
Assistant solves the method and its core concept of the present invention;Meanwhile for those of ordinary skill in the art, the think of according to the present invention
Think, in specific embodiments and applications there will be changes, in conclusion this specification content should not be construed as pair
The limitation of the present invention, protection scope of the present invention should be subject to appended claims.