CN107993229A - Tissue classification method and device based on cardiovascular IVOCT images - Google Patents

Tissue classification method and device based on cardiovascular IVOCT images

Info

Publication number
CN107993229A
Authority
CN
China
Prior art keywords
ivoct
cardiovascular
images
tissue
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711354768.9A
Other languages
Chinese (zh)
Other versions
CN107993229B (en)
Inventor
朱锐
李嘉男
曹一挥
薛婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Zhongke Low Light Imaging Technology Co Ltd
Original Assignee
Xi'an Zhongke Low Light Imaging Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Zhongke Low Light Imaging Technology Co Ltd filed Critical Xi'an Zhongke Low Light Imaging Technology Co Ltd
Priority to CN201711354768.9A priority Critical patent/CN107993229B/en
Publication of CN107993229A publication Critical patent/CN107993229A/en
Application granted granted Critical
Publication of CN107993229B publication Critical patent/CN107993229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a tissue classification method and device based on cardiovascular IVOCT images, wherein the method includes: step 1, obtaining multiple labeled IVOCT images; step 2, establishing an IVOCT image sample set and dividing the IVOCT image sample set into a training sample set and a test sample set; step 3, building a convolutional neural network structure; step 4, training the convolutional neural network with the training sample set to obtain a CNN model; step 5, inputting the test sample set into the CNN model to obtain the tissue type maps corresponding to the different tissues. In the embodiment of the present invention, the CNN model has two output terminals which display, respectively, the contours of the tissues and their internal structure, thereby solving the prior-art technical problem that the resolution of the output image is reduced and part of the image information is lost, so that tissue boundaries are displayed unclearly.

Description

Tissue classification method and device based on cardiovascular IVOCT images
Technical field
The invention belongs to the field of biological tissue imaging technology, and in particular relates to a tissue classification method and device based on cardiovascular IVOCT images.
Background technology
Biopsy is a common medical test in which a pathologist observes, under a microscope, a tissue sample obtained from a subject in order to determine the nature or degree of a disease. The tissue usually has to be cut into very thin sections and stained before it can be observed under the microscope. Optical coherence tomography (OCT) is a non-destructive optical imaging alternative that can provide three-dimensional high-definition images of the biopsied tissue without the need for staining. Optical coherence microscopy (OCM) combines the advantages of OCT and confocal microscopy and provides high-resolution cellular images.
A normal artery has a homogeneous layered structure composed of the intima, the media and the adventitia, but when a vascular lesion develops, the vessel contains tissues of different types. Those different tissues therefore need to be detected and classified; up to now, however, the detection and classification of these tissues has relied mainly on manual work, which is very time-consuming.
A convolutional network for biomedical image segmentation has been proposed in the prior art. The method uses a contracting path to capture context and an expanding path for precise localization; the two paths form a U shape, and the network is known as U-Net. However, the resolution of the image finally output by U-Net is slightly smaller than that of the original image, which causes a loss of image information and makes it impossible to truly restore the tissue structure.
Therefore, designing a tissue classification method that can perform detection automatically, saving manpower, while reducing the loss of image information, is a hot topic in this field.
Summary of the invention
In view of the above problems, the present invention proposes a tissue classification method and device based on cardiovascular IVOCT images. Specific embodiments are as follows.
An embodiment of the present invention provides a tissue classification method based on cardiovascular IVOCT images, wherein the method includes:
Step 1: obtaining multiple labeled IVOCT images;
Step 2: establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a test sample set;
Step 3: building a convolutional neural network structure;
Step 4: training the convolutional neural network with the training sample set to obtain a CNN model;
Step 5: inputting the test sample set into the CNN model to obtain the tissue type maps corresponding to the different tissues.
In one embodiment of the invention, step 2 includes:
Step 21: applying transformations of multiple forms to each labeled IVOCT image to obtain multiple transformed images, and setting each of the multiple transformed images as one sample; wherein
the transformations of multiple forms include one of, or a combination of, cropping, translation, flipping, rotation, deformation and grey-value variation;
Step 22: setting the multiple samples as the IVOCT image sample set.
In one embodiment of the invention, the following is further included before step 4:
setting training labels according to the tissue types contained in the lesioned vessel, the training labels including segmentation labels and boundary labels, the segmentation labels and the boundary labels being used to train the convolutional neural network.
In one embodiment of the invention, the tissue types contained in the lesioned vessel include N types; correspondingly, setting the training labels includes:
setting N kinds of segmentation labels and N kinds of boundary labels, where N is a positive integer greater than 1.
In one embodiment of the invention, the CNN model includes an input terminal, a first output terminal and a second output terminal;
the first output terminal is used to output N segmentation maps;
the second output terminal is used to output N boundary maps;
the segmentation maps are used to display the structure of the tissue types contained in the cardiovascular IVOCT image input through the input terminal; the boundary maps are used to display the contours of the tissue types contained in the cardiovascular IVOCT image input through the input terminal;
wherein the segmentation maps and the boundary maps are binary maps.
In one embodiment of the invention, the following is further included after step 5:
Step 6: superimposing the segmentation map and the boundary map corresponding to each tissue type, to obtain the structure map of that tissue type;
Step 7: combining the structure maps of the different tissue types, to obtain the cardiovascular tissue classification map.
Another embodiment of the present invention provides a tissue classification device based on cardiovascular IVOCT images, including a digital signal processing unit and a storage unit, the storage unit being used to store processing instructions, wherein the processing instructions are executed by the digital signal processing unit to realize the steps of any of the above methods.
Beneficial effects of the present invention are:
1. By establishing a convolutional neural network structure having one contracting path and two expanding paths, forming a transverse Y-shaped network structure, and then optimizing the CNN model through network training, the CNN model of the embodiment of the present invention has two output terminals which display, respectively, the contours of the tissues and their internal structure. This solves the prior-art technical problem that the resolution of the output image is reduced and part of the image information is lost, so that tissue boundaries are displayed unclearly.
2. The present invention establishes a new, optimized CNN model for the tissue classification of cardiovascular IVOCT images. The first output terminal outputs the structures of the different types of tissue in the cardiovascular IVOCT image, and the second output terminal outputs the boundary contours of the different types of tissue in the image, so that the boundary and the contour of each tissue are displayed separately. The boundary maps and the segmentation maps are then superimposed to form a complete tissue map, which compensates for the image information loss caused by the reduced resolution of the output image; the tissue structure is restored with high fidelity, and the image display effect is better.
Brief description of the drawings
Fig. 1 is a flow chart of the tissue classification method provided by an embodiment of the present invention;
Fig. 2 is a structural diagram of the convolutional neural network provided by an embodiment of the present invention;
Fig. 3(a) is a schematic diagram of the CNN model provided by an embodiment of the present invention;
Fig. 3(b) is a schematic diagram of the CNN model in use, provided by an embodiment of the present invention;
Fig. 4(a) is a cardiovascular IVOCT image input at the input terminal of the CNN model provided by an embodiment of the present invention;
Fig. 4(b) is a tissue segmentation map output by the first output terminal of the CNN model provided by an embodiment of the present invention;
Fig. 4(c) is a tissue boundary map output by the second output terminal of the CNN model provided by an embodiment of the present invention;
Fig. 5 is the combined cardiovascular tissue classification map provided by an embodiment of the present invention.
Embodiment
In order to make the above objectives, features and advantages of the present invention clearer and easier to understand, the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment one
As shown in Fig. 1 to Fig. 5: Fig. 1 is a flow chart of the tissue classification method provided by an embodiment of the present invention; Fig. 2 is a structural diagram of the convolutional neural network provided by an embodiment of the present invention; Fig. 3(a) is a schematic diagram of the CNN model; Fig. 3(b) is a schematic diagram of the CNN model in use; Fig. 4(a) is a cardiovascular IVOCT image input at the input terminal of the CNN model; Fig. 4(b) is a tissue segmentation map output by the first output terminal of the CNN model; Fig. 4(c) is a tissue boundary map output by the second output terminal of the CNN model; Fig. 5 is the combined cardiovascular tissue classification map. An embodiment of the present invention provides a tissue classification method based on cardiovascular IVOCT images, wherein the method includes:
Step 1: obtaining multiple labeled IVOCT images;
Specifically, in the embodiment of the present invention, multiple labeled IVOCT images are obtained. In the clinical examination process, medical staff manually label and classify the various tissue types in the IVOCT images that have already been acquired. In practice, each IVOCT image may contain only some of the tissue types rather than all of them, so multiple IVOCT images need to be collected in order to obtain the structure maps of all the tissue types.
Step 2: establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a test sample set;
Further, the labeled IVOCT images are set as an image sample set. Since the number of images that medical staff can label manually is limited, while the subsequent network training requires a large number of training samples and test samples, the labeled IVOCT images need to be expanded to increase the number of samples, which benefits the network training. Specifically, the method of sample expansion is:
Step 21: applying transformations of multiple forms to each labeled IVOCT image to obtain multiple transformed images, and setting each of the multiple transformed images as one sample;
Specifically, the transformation modes applied to each labeled IVOCT image include cropping, translation, flipping, rotation, deformation, grey-value variation and the like, and further include various combinations of these modes, for example: cropping and translation; cropping and flipping; cropping and rotation; cropping and deformation; cropping and grey-value variation; cropping, translation and flipping; cropping, translation and rotation; cropping, translation and deformation; cropping, translation and grey-value variation; cropping, translation, flipping and rotation; cropping, translation, flipping and deformation; cropping, translation, flipping and grey-value variation; cropping, translation, flipping, rotation and deformation; cropping, translation, flipping, rotation and grey-value variation; cropping, translation, flipping, rotation, deformation and grey-value variation; translation and flipping; translation and rotation; and so on. It should be noted that the cropping modes further include different forms such as bevel cropping, edge trimming and diamond cropping, and flipping likewise includes different cases such as flipping by 30 degrees and flipping by 90 degrees. It follows that, in the embodiment of the present invention, one IVOCT image can be expanded into multiple images. Expanding one IVOCT image into multiple images in this way compensates in advance for the loss in the output image, and solves the prior-art technical problem that the small resolution of the output image causes image information to be lost. By inputting the various deformations of one image, it is ensured that the output image can be reproduced completely.
Step 22: setting the multiple samples as the IVOCT image sample set.
Each transformed IVOCT image is set as one sample, so a large number of samples can be obtained, and those samples form the expanded IVOCT image sample set. A part of the image sample set is then selected as training samples, and the remaining part serves as test samples, wherein the training samples are used to train the classification network and the test samples are used to test the trained network and judge the accuracy of the classification network.
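The sample-expansion step described above can be sketched in Python as follows. This is only an illustrative sketch, assuming the labeled greyscale IVOCT images are available as 2-D NumPy arrays in a list named labeled_images (a hypothetical variable); it shows only a few of the transformation modes and is not the implementation claimed by the patent.

    import numpy as np

    def augment(image, rng):
        # produce several transformed copies of one labeled IVOCT image; in practice
        # the same transform must also be applied to its segmentation/boundary labels
        samples = []
        h, w = image.shape
        # cropping: keep a random window covering about 90% of the image
        y, x = rng.integers(0, max(h // 10, 1)), rng.integers(0, max(w // 10, 1))
        samples.append(image[y:y + 9 * h // 10, x:x + 9 * w // 10])
        # translation: shift the image by a random offset
        samples.append(np.roll(image, shift=(rng.integers(-20, 21), rng.integers(-20, 21)), axis=(0, 1)))
        # flipping
        samples.append(np.flipud(image))
        samples.append(np.fliplr(image))
        # rotation by a random multiple of 90 degrees
        samples.append(np.rot90(image, k=int(rng.integers(1, 4))))
        # grey-value variation: scale the intensities
        samples.append(np.clip(image * rng.uniform(0.8, 1.2), 0, 255))
        return samples

    rng = np.random.default_rng(0)
    sample_set = []
    for img in labeled_images:          # labeled_images: assumed list of 2-D arrays
        sample_set.extend(augment(img, rng))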
Step 3: building a convolutional neural network structure;
Further, as shown in Fig. 2, the embodiment of the present invention needs to establish a network for classifying IVOCT images. Specifically, a new convolutional neural network structure is established, which has one contracting path and two expanding paths.
The contracting path is composed of a typical convolutional network and has a repeated structure: two repeated convolutions (the image is padded before the convolutions), each convolution followed by a rectified linear unit (ReLU); the contracting path further includes a max-pooling operation, i.e. down-sampling with a stride of 2. With each down-sampling, the number of feature channels is doubled. Each step of an expanding path performs a deconvolution operation (up-sampling) on the feature map, and the number of feature channels obtained is halved. In the expanding part of the network, the contracting output corresponding to each level of the expanding path is combined with the expanding path, and two convolution operations are then performed, each convolution followed by a rectified linear unit. It is necessary to apply the contracting output of every layer to the expanding path: because up-sampling is realized by deconvolution, if only four layers of deconvolution were applied to the output of the fifth layer, an image of the same size as the original could indeed be obtained, but the result would not be fine enough, that is, the classification of the detailed parts would not be accurate enough. Therefore, in the present invention, the output of every down-sampling layer also undergoes the deconvolution operation. Since in IVOCT images different types of tissue are often close together, the second output is provided to prevent adjacent tissues from being connected together and causing misjudgment.
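A minimal PyTorch sketch of the network shape described above, assuming a single-channel input whose height and width are divisible by 8 and N tissue types: one contracting path whose skip outputs feed two separate expanding paths, so that a segmentation head and a boundary head each produce N full-resolution maps. The layer counts, channel widths and the class name YNet are illustrative assumptions, not the exact architecture of the patent.

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # two 3x3 convolutions, each followed by a rectified linear unit (ReLU)
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    class Decoder(nn.Module):
        # one expanding path: deconvolution (up-sampling), concatenation with the
        # corresponding contracting output, then two convolutions at each level
        def __init__(self, chs, n_out):
            super().__init__()
            self.ups = nn.ModuleList(
                [nn.ConvTranspose2d(chs[i], chs[i - 1], 2, stride=2) for i in range(len(chs) - 1, 0, -1)])
            self.convs = nn.ModuleList(
                [conv_block(2 * chs[i - 1], chs[i - 1]) for i in range(len(chs) - 1, 0, -1)])
            self.head = nn.Conv2d(chs[0], n_out, 1)          # N output maps

        def forward(self, bottom, skips):
            x = bottom
            for up, conv, skip in zip(self.ups, self.convs, reversed(skips)):
                x = conv(torch.cat([up(x), skip], dim=1))
            return torch.sigmoid(self.head(x))               # values in (0, 1), thresholded later to binary maps

    class YNet(nn.Module):
        # one contracting path shared by a segmentation decoder and a boundary decoder
        def __init__(self, n_types=11, chs=(64, 128, 256, 512)):
            super().__init__()
            self.encs = nn.ModuleList(
                [conv_block(1 if i == 0 else chs[i - 1], chs[i]) for i in range(len(chs))])
            self.pool = nn.MaxPool2d(2)                      # down-sampling with stride 2
            self.seg_dec = Decoder(chs, n_types)             # first output terminal: N segmentation maps
            self.bnd_dec = Decoder(chs, n_types)             # second output terminal: N boundary maps

        def forward(self, x):
            skips = []
            for i, enc in enumerate(self.encs):
                x = enc(x)
                if i < len(self.encs) - 1:
                    skips.append(x)                          # contracting output reused by both expanding paths
                    x = self.pool(x)
            return self.seg_dec(x, skips), self.bnd_dec(x, skips)

In this sketch the number of feature channels doubles at each down-sampling and halves at each up-sampling, and every contracting-path output is concatenated into both expanding paths, matching the description of the transverse Y-shaped structure.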
Step 4: training the convolutional neural network with the training sample set to obtain a CNN model;
Specifically, the images in the training sample set are input in turn into the convolutional neural network established above, and the parameter data of the convolutional layers and/or fully connected layers in the convolutional neural network model are trained cyclically to obtain image data in a preset format, thereby establishing the CNN model, as shown in Fig. 3(a). However, the image data in the preset format output by the initial CNN model differ considerably from the target file, so a large number of images from the training sample set need to be input cyclically into the convolutional neural network to optimize the CNN model. It should be noted that the target file in the embodiment of the present invention is the training labels, and the training labels are obtained by analysing the different types of tissue contained in lesioned vessels.
Specifically, the training labels are essentially the tissue-type images in all of the labeled IVOCT images. In the embodiment of the present invention, the training labels are divided into segmentation labels and boundary labels: the segmentation labels represent the structure of each tissue type, and the boundary labels represent the contour of each tissue type. When the convolutional neural network is trained, the segmentation labels and the boundary labels are used to make the output of the convolutional neural network continuously approach, and eventually equal, the contents displayed by the segmentation labels and the boundary labels.
The tissue types contained in the lesioned vessel include N types, and one segmentation label and one boundary label are set for each corresponding tissue type; therefore N kinds of segmentation labels and N kinds of boundary labels need to be set in total, that is, there are N tissue types to be classified.
As far as is known to medicine, 11 kinds of tissue types in lesioned cardiovascular vessels can be confirmed and classified, namely: bifurcation, fibrous plaque, calcified plaque, lipid plaque, fibro-calcific plaque, fibro-atheromatous plaque, red/white thrombus, guide wire, catheter, thin-cap fibrous plaque, and vessel wall; that is, the value of N is 11.
It should be noted that, in practical situations, the value of N is not limited to 11. With the development of medical technology, new pathological tissues may be discovered, so the number of pathological tissue types can increase; alternatively, with reference to the practical application, only a part of the currently known pathological tissues may be selected for classification, for example 4 or 5 kinds of pathological tissues, with an equal number of training labels set correspondingly to train the convolutional neural network. All of these fall within the protection scope of the embodiment of the present invention.
In the embodiment of the present invention, taking N = 11 as an example, 11 tissue types in total are analysed and confirmed from the cardiovascular OCT images; the structure maps of the 11 tissue types are generated as 11 kinds of segmentation labels, and the boundary maps of the 11 tissue types are generated as 11 kinds of boundary labels. Pictures from the training sample set are then input for training: each time, one training image is input into the contracting path of the convolutional neural network, and 11 segmentation maps and 11 boundary maps are output from the two expanding paths respectively; the 11 segmentation maps correspond to the 11 kinds of segmentation labels, and the 11 boundary maps correspond to the 11 kinds of boundary labels. Specifically, for example: segmentation label No. 1 represents the structure of a bifurcation, segmentation label No. 2 represents the structure of a fibrous plaque, label No. 3 represents the structure of a calcified plaque, and so on, and segmentation label No. 11 represents the structure of the vessel wall. Boundary label No. 1 represents the contour of a bifurcation, boundary label No. 2 represents the contour of a fibrous plaque, boundary label No. 3 represents the contour of a calcified plaque, ..., and boundary label No. 11 represents the contour of the vessel wall. Of the 11 segmentation maps and 11 boundary maps output in this way, map No. 1 represents the structure and contour of the bifurcation in the input image, map No. 2 represents the structure and contour of the fibrous plaque, map No. 3 represents the structure and contour of the calcified plaque, ..., and map No. 11 represents the contour and structure of the vessel wall. The purpose of network training is to be able to output, for any input image, an image in the preset format, and the image in the preset format is determined by the above 11 kinds of segmentation labels and 11 kinds of boundary labels. Through a large number of training cycles, the output image in the preset format continuously approaches, and eventually becomes identical to, the training labels that have been set; at this point the optimized CNN model is formed.
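Under the assumptions of the YNet sketch above, the training loop described in this step can be outlined as follows; each training image is assumed to come with an 11-channel segmentation label and an 11-channel boundary label, and the two output terminals are supervised jointly with a binary cross-entropy loss. The optimiser, learning rate and loader names are illustrative assumptions, not details specified by the patent.

    import torch
    import torch.nn as nn

    model = YNet(n_types=11)                    # from the sketch above (assumed)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    bce = nn.BCELoss()                          # the outputs are already sigmoid-activated

    for epoch in range(num_epochs):             # num_epochs: assumed hyper-parameter
        for image, seg_label, bnd_label in train_loader:   # shapes (B,1,H,W), (B,11,H,W), (B,11,H,W)
            seg_pred, bnd_pred = model(image)
            # joint loss over the two output terminals: structure maps and contour maps
            loss = bce(seg_pred, seg_label.float()) + bce(bnd_pred, bnd_label.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()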
Further, in practical applications, as shown in Fig. 3(b), corresponding to the one contracting path of the convolutional neural network structure, the CNN model includes one input terminal; corresponding to the two expanding paths of the convolutional neural network structure, the CNN model includes a first output terminal and a second output terminal. The first output terminal is used to output N segmentation maps; the second output terminal is used to output N boundary maps. For example, when the cardiovascular IVOCT image input from the input terminal contains only 2 kinds of pathological tissues, for instance lipid plaque (No. 4) and thin-cap fibrous plaque (No. 10), the first output terminal of the corresponding CNN model still outputs 11 segmentation maps and the second output terminal outputs 11 boundary maps, but only map No. 4 and map No. 10 display the tissue structure and tissue contour respectively, while the remaining segmentation maps and boundary maps display no content. This is because the cardiovascular IVOCT image input at the input terminal does not contain other pathological tissues, so the segmentation maps and boundary maps corresponding to those tissues do not display any content.
In the embodiment of the present invention, the segmentation maps and the boundary maps are binary maps.
Step 5: inputting the test sample set into the CNN model to obtain the tissue type maps corresponding to the different tissues.
The accuracy of the optimized CNN model needs to be tested with the test samples. During testing, a cardiovascular IVOCT image is input at the input terminal of the CNN model, N segmentation maps and N boundary maps are then output from the first and second output terminals of the CNN model, and the output segmentation maps and boundary maps are compared with the standard images to determine the reliability and robustness of the CNN model.
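One way to quantify the comparison of the output maps with the standard images mentioned above is a per-type Dice overlap; the 0.5 threshold and the Dice metric itself are an assumed choice for illustration, not a measure specified by the patent, and test_loader is a hypothetical data loader.

    import torch

    def dice(pred, target, eps=1e-6):
        # Dice overlap between a predicted map and a reference binary map
        pred = (pred > 0.5).float()             # threshold the sigmoid output to a binary map
        inter = (pred * target).sum()
        return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

    model.eval()
    with torch.no_grad():
        for image, seg_label, bnd_label in test_loader:
            seg_pred, bnd_pred = model(image)
            # one Dice score per tissue type for the segmentation maps
            scores = [dice(seg_pred[:, k], seg_label[:, k].float()) for k in range(seg_pred.shape[1])]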
When the CNN model is confirmed to be in good condition, it can be put into use. As shown in Fig. 4(a) to Fig. 4(c), the target image is input and the output result is obtained; the output segmentation maps and boundary maps are the structure maps and contour maps of the corresponding tissue types, thereby realizing artificial-intelligence classification of tissue types. Moreover, since the embodiment of the present invention adopts two kinds of outputs, the different tissue types are distinguished on the one hand, and the contours of the different tissues are determined on the other hand, which avoids misjudgment of the tissue structure when two adjacent tissue types are so close together that their boundary is blurred and hard to distinguish; the tissue classification effect is therefore better.
It should be noted that the tissue classification method provided by the embodiment of the present invention not only classifies pathological tissues, but can also distinguish all the other tissues present in the cardiovascular IVOCT image and display them through different tissue type maps.
It should be noted that, after the classification maps and boundary maps are obtained, in order to show the user the state of each tissue type in the cardiovascular vessel more intuitively, the embodiment of the present invention further includes the following:
Step 6: superimposing the segmentation map and the boundary map corresponding to each tissue type, to obtain the structure map of that tissue type;
Specifically, segmentation map No. 1 and boundary map No. 1 are superimposed so that the boundary and the structure of the vascular-bifurcation tissue are clear; then segmentation map No. 2 and boundary map No. 2 are superimposed, and so on, to obtain the structure map of each tissue type.
Step 7: combining the structure maps of the different tissue types, to obtain the cardiovascular tissue classification map. After the structure map of each tissue type is obtained, since the structure maps of those tissue types are all classified out of the same cardiovascular IVOCT image input from the input terminal, the structure maps of those tissue types are combined again into one image to obtain the cardiovascular tissue classification map. From the cardiovascular tissue classification map it can thus be observed which tissue types exist in the cardiovascular IVOCT image, how those tissue types are distributed, where the respective boundaries of two adjacent tissues are, how they are connected, and other information, so as to provide the user with a detailed and intuitive experience.
It should be noted that, since the segmentation maps and the boundary maps are binary maps displayed in black and white, the classification results of the tissue types cannot be shown if the structure maps of the different tissue types are simply superimposed. Therefore, in the embodiment of the present invention, before the structure maps of the different tissue types are combined, the structure maps of the different tissues are coloured, as shown in Fig. 5, to display the different tissue types.
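Steps 6 and 7 can be sketched as follows, assuming the N binary segmentation maps and N binary boundary maps are available as NumPy arrays of shape (N, H, W) and that colors is an assumed palette of one RGB triple per tissue type; this is an illustrative sketch, not the exact rendering used in Fig. 5.

    import numpy as np

    def combine(seg_maps, bnd_maps, colors):
        # seg_maps, bnd_maps: arrays of shape (N, H, W) with values in {0, 1}
        # colors: list of N RGB triples, one per tissue type (assumed palette)
        n, h, w = seg_maps.shape
        classification = np.zeros((h, w, 3), dtype=np.uint8)
        for k in range(n):
            # step 6: superimpose the structure map and the contour map of tissue type k
            structure = np.clip(seg_maps[k] + bnd_maps[k], 0, 1)
            # step 7: colour this tissue type and merge it into the single classification map
            classification[structure > 0] = colors[k]
        return classification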
The embodiment of the present invention also provides a tissue classification device based on cardiovascular IVOCT images. The device includes a digital signal processing unit and a storage unit, the storage unit is used to store processing instructions, and the processing instructions are executed by the digital signal processing unit to realize the steps involved in the above embodiment.
In conclusion, specific examples are used herein to explain the principles and embodiments of the tissue classification method and device based on cardiovascular IVOCT images provided by the embodiment of the present invention; the description of the above embodiment is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific embodiments and the scope of application according to the idea of the present invention. In conclusion, the contents of this specification should not be construed as limiting the present invention, and the protection scope of the present invention shall be subject to the appended claims.

Claims (7)

  1. A tissue classification method based on cardiovascular IVOCT images, characterized in that the method includes:
    Step 1: obtaining multiple labeled IVOCT images;
    Step 2: establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a test sample set;
    Step 3: building a convolutional neural network structure;
    Step 4: training the convolutional neural network with the training sample set to obtain a CNN model;
    Step 5: inputting the test sample set into the CNN model to obtain the tissue type maps corresponding to the different tissues.
  2. The tissue classification method based on cardiovascular IVOCT images according to claim 1, characterized in that step 2 includes:
    Step 21: applying transformations of multiple forms to each labeled IVOCT image to obtain multiple transformed images, and setting each of the multiple transformed images as one sample; wherein
    the transformations of multiple forms include one of, or a combination of, cropping, translation, flipping, rotation, deformation and grey-value variation;
    Step 22: setting the multiple samples as the IVOCT image sample set.
  3. The tissue classification method based on cardiovascular IVOCT images according to claim 2, characterized in that the following is further included before step 4:
    setting training labels according to the tissue types contained in the lesioned vessel, the training labels including segmentation labels and boundary labels, the segmentation labels and the boundary labels being used to train the convolutional neural network.
  4. The tissue classification method based on cardiovascular IVOCT images according to claim 3, characterized in that the tissue types contained in the lesioned vessel include N types; correspondingly, setting the training labels includes:
    setting N kinds of segmentation labels and N kinds of boundary labels, where N is a positive integer greater than 1.
  5. The tissue classification method based on cardiovascular IVOCT images according to claim 4, characterized in that the CNN model includes an input terminal, a first output terminal and a second output terminal;
    the first output terminal is used to output N segmentation maps;
    the second output terminal is used to output N boundary maps;
    the segmentation maps are used to display the structure of the tissue types contained in the cardiovascular IVOCT image input through the input terminal; the boundary maps are used to display the contours of the tissue types contained in the cardiovascular IVOCT image input through the input terminal;
    wherein the segmentation maps and the boundary maps are binary maps.
  6. The tissue classification method based on cardiovascular IVOCT images according to claim 5, characterized in that the following is further included after step 5:
    Step 6: superimposing the segmentation map and the boundary map corresponding to each tissue type, to obtain the structure map of that tissue type;
    Step 7: combining the structure maps of the different tissue types, to obtain the cardiovascular tissue classification map.
  7. A tissue classification device based on cardiovascular IVOCT images, including a digital signal processing unit and a storage unit, the storage unit being used to store processing instructions, characterized in that the processing instructions are executed by the digital signal processing unit to realize the steps of the method of any one of claims 1-6.
CN201711354768.9A 2017-12-15 2017-12-15 Tissue classification method and device based on cardiovascular IVOCT image Active CN107993229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711354768.9A CN107993229B (en) 2017-12-15 2017-12-15 Tissue classification method and device based on cardiovascular IVOCT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711354768.9A CN107993229B (en) 2017-12-15 2017-12-15 Tissue classification method and device based on cardiovascular IVOCT image

Publications (2)

Publication Number Publication Date
CN107993229A true CN107993229A (en) 2018-05-04
CN107993229B CN107993229B (en) 2021-11-19

Family

ID=62038744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711354768.9A Active CN107993229B (en) 2017-12-15 2017-12-15 Tissue classification method and device based on cardiovascular IVOCT image

Country Status (1)

Country Link
CN (1) CN107993229B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629773A (en) * 2018-05-10 2018-10-09 北京红云智胜科技有限公司 The method for establishing the convolutional neural networks data set of training identification cardiovascular type
CN108805874A (en) * 2018-06-11 2018-11-13 中国电子科技集团公司第三研究所 A kind of multispectral image semanteme cutting method based on convolutional neural networks
CN109063557A (en) * 2018-06-27 2018-12-21 北京红云智胜科技有限公司 The method of rapid build heart coronary artery blood vessel identification data set
CN109087284A (en) * 2018-07-10 2018-12-25 重庆康华众联心血管病医院有限公司 A kind of cardiovascular cannula Image-aided detection device and detection method
CN109568047A (en) * 2018-11-26 2019-04-05 焦建洪 A kind of Cardiological intelligence bed special, control system and control method
CN109741335A (en) * 2018-11-28 2019-05-10 北京理工大学 Blood vessel OCT image medium vessels wall and the dividing method and device of blood flow area
CN109919932A (en) * 2019-03-08 2019-06-21 广州视源电子科技股份有限公司 The recognition methods of target object and device
CN110148112A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 A method of it acquires and marks the progress data set foundation of tomoscan diagram data
CN111803104A (en) * 2020-07-20 2020-10-23 上海市第六人民医院 Medical image display method, medium and electronic equipment
CN114882017A (en) * 2022-06-30 2022-08-09 中国科学院大学 Method and device for detecting thin fiber cap plaque based on intracranial artery image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292856A1 (en) * 2015-04-06 2016-10-06 IDx, LLC Systems and methods for feature detection in retinal images
CN106780495A (en) * 2017-02-15 2017-05-31 深圳市中科微光医疗器械技术有限公司 Cardiovascular implantation support automatic detection and appraisal procedure and system based on OCT
CN107392909A (en) * 2017-06-22 2017-11-24 苏州大学 OCT image layer dividing method based on neutral net with constraint graph search algorithm

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292856A1 (en) * 2015-04-06 2016-10-06 IDx, LLC Systems and methods for feature detection in retinal images
CN106780495A (en) * 2017-02-15 2017-05-31 深圳市中科微光医疗器械技术有限公司 Cardiovascular implantation support automatic detection and appraisal procedure and system based on OCT
CN107392909A (en) * 2017-06-22 2017-11-24 苏州大学 OCT image layer dividing method based on neutral net with constraint graph search algorithm

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629773A (en) * 2018-05-10 2018-10-09 北京红云智胜科技有限公司 The method for establishing the convolutional neural networks data set of training identification cardiovascular type
CN108805874A (en) * 2018-06-11 2018-11-13 中国电子科技集团公司第三研究所 A kind of multispectral image semanteme cutting method based on convolutional neural networks
CN108805874B (en) * 2018-06-11 2022-04-22 中国电子科技集团公司第三研究所 Multispectral image semantic cutting method based on convolutional neural network
CN109063557A (en) * 2018-06-27 2018-12-21 北京红云智胜科技有限公司 The method of rapid build heart coronary artery blood vessel identification data set
CN109063557B (en) * 2018-06-27 2021-07-09 北京红云智胜科技有限公司 Method for quickly constructing heart coronary vessel identification data set
CN109087284A (en) * 2018-07-10 2018-12-25 重庆康华众联心血管病医院有限公司 A kind of cardiovascular cannula Image-aided detection device and detection method
CN109568047A (en) * 2018-11-26 2019-04-05 焦建洪 A kind of Cardiological intelligence bed special, control system and control method
CN109741335A (en) * 2018-11-28 2019-05-10 北京理工大学 Blood vessel OCT image medium vessels wall and the dividing method and device of blood flow area
CN109919932A (en) * 2019-03-08 2019-06-21 广州视源电子科技股份有限公司 The recognition methods of target object and device
CN110148112A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 A method of it acquires and marks the progress data set foundation of tomoscan diagram data
CN111803104A (en) * 2020-07-20 2020-10-23 上海市第六人民医院 Medical image display method, medium and electronic equipment
CN114882017A (en) * 2022-06-30 2022-08-09 中国科学院大学 Method and device for detecting thin fiber cap plaque based on intracranial artery image

Also Published As

Publication number Publication date
CN107993229B (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN107993229A (en) A kind of tissue classification procedure and device based on cardiovascular IVOCT images
CN109035255B (en) Method for segmenting aorta with interlayer in CT image based on convolutional neural network
US8391575B2 (en) Automatic image analysis and quantification for fluorescence in situ hybridization
CN108615236A (en) A kind of image processing method and electronic equipment
CA2492071A1 (en) Computerized image capture of structures of interest within a tissue sample
CN108319977B (en) Cervical biopsy region identification method and device based on channel information multi-mode network
Dercksen et al. The Filament Editor: an interactive software environment for visualization, proof-editing and analysis of 3D neuron morphology
WO2021038202A1 (en) Computerised tomography image processing
CN113888412B (en) Image super-resolution reconstruction method for diabetic retinopathy classification
CN113205524B (en) Blood vessel image segmentation method, device and equipment based on U-Net
Meyer et al. A deep neural network for vessel segmentation of scanning laser ophthalmoscopy images
Odstrčilík et al. Improvement of vessel segmentation by matched filtering in colour retinal images
CN107945176A (en) A kind of colour IVOCT imaging methods
CN110490843A (en) A kind of eye fundus image blood vessel segmentation method
CN109919932A (en) The recognition methods of target object and device
Noh et al. Combining fundus images and fluorescein angiography for artery/vein classification using the hierarchical vessel graph network
Appan K et al. Retinal image synthesis for cad development
Valarmathi et al. RETRACTED ARTICLE: Exudate characterization to diagnose diabetic retinopathy using generalized method
CN112200726A (en) Urinary sediment visible component detection method and system based on lens-free microscopic imaging
Wu et al. A state-of-the-art survey of U-Net in microscopic image analysis: From simple usage to structure mortification
MacKay et al. Automated 3D labelling of fibroblasts and endothelial cells in SEM-imaged placenta using deep learning
CN116071549A (en) Multi-mode attention thinning and dividing method for retina capillary vessel
CN114998582A (en) Coronary artery blood vessel segmentation method, device and storage medium
Tiwari et al. Deep learning-based framework for retinal vasculature segmentation
CN113313714A (en) Coronary artery OCT image lesion plaque segmentation method based on improved U-Net network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhu Rui

Inventor after: Cao Yihui

Inventor after: Xue Ting

Inventor before: Zhu Rui

Inventor before: Li Jianan

Inventor before: Cao Yihui

Inventor before: Xue Ting

CB03 Change of inventor or designer information
CB02 Change of applicant information

Address after: 710119 Room 101, building 11, new industrial park, No. 60, West Avenue, high tech Zone, Xi'an, Shaanxi Province

Applicant after: Zhongke low light medical research center (Xi'an) Co.,Ltd.

Address before: Room 303, floor 3, Zhongke Chuangxing, southwest corner of bianjia village, 322 Youyi West Road, Xi'an, Shaanxi 710068

Applicant before: XI'AN VIVOLIGHT IMAGING TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant