CN107993229B - Tissue classification method and device based on cardiovascular IVOCT image - Google Patents


Info

Publication number
CN107993229B
CN107993229B (application number CN201711354768.9A)
Authority
CN
China
Prior art keywords
tissue
ivoct
sample set
cardiovascular
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711354768.9A
Other languages
Chinese (zh)
Other versions
CN107993229A (en)
Inventor
朱锐
曹一挥
薛婷
Current Assignee
Zhongke Low Light Medical Research Center Xi'an Co ltd
Original Assignee
Zhongke Low Light Medical Research Center Xi'an Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Low Light Medical Research Center Xi'an Co ltd filed Critical Zhongke Low Light Medical Research Center Xi'an Co ltd
Priority to CN201711354768.9A
Publication of CN107993229A
Application granted
Publication of CN107993229B
Legal status: Active

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06T7/12 Edge-based segmentation
    • G06T2207/10061 Microscopic image from scanning electron microscope
    • G06T2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The invention relates to a tissue classification method and device based on cardiovascular IVOCT images, wherein the method comprises the following steps: step 1, acquiring a plurality of marked IVOCT images; step 2, establishing an IVOCT image sample set and dividing it into a training sample set and a test sample set; step 3, constructing a convolutional neural network structure; step 4, training the convolutional neural network with the training sample set to obtain a CNN model; and step 5, inputting the test sample set into the CNN model to obtain tissue type maps corresponding to the different tissues. In the embodiment of the invention, the CNN model is provided with two output ends that respectively display the contour and the internal structure of each tissue, thereby solving the prior-art problems that the resolution of the output image is slightly reduced and that tissue boundaries are displayed unclearly after part of the image information is lost.

Description

Tissue classification method and device based on cardiovascular IVOCT image
Technical Field
The invention belongs to the field of biological tissue imaging technology, and particularly relates to a tissue classification method and device based on cardiovascular IVOCT images.
Background
Biopsy is a common medical test in which a pathologist examines a tissue sample taken from a subject under a microscope to determine the nature or extent of a disease. Generally, the tissue is cut into extremely thin sections and stained before being viewed under the microscope. Optical Coherence Tomography (OCT) is a non-destructive optical imaging alternative that provides three-dimensional, high-definition images of biopsy tissue without staining. Optical Coherence Microscopy (OCM) combines the advantages of OCT and confocal microscopy to provide high-resolution cellular images.
A normal artery has a uniform layered structure of intima, media and adventitia, but when a lesion occurs, the blood vessel contains different types of tissue, so the different tissues need to be detected and classified. Until now, however, tissue detection and classification have been mainly manual and very time-consuming.
In the prior art, a convolutional network for biomedical image segmentation has been proposed: a contracting path captures context and an expanding path enables precise localization, the two paths forming a U shape, hence the name U-Net. However, the resolution of the picture finally output by U-Net is slightly smaller than that of the original picture, which causes a loss of image information, so the tissue structure cannot be truly restored.
Therefore, designing a tissue classification method that detects tissue automatically, saves labor and reduces the loss of image information is a research focus in the art.
Disclosure of Invention
In view of the above problems, the present invention provides a method and an apparatus for tissue classification based on cardiovascular IVOCT images, and the specific embodiments are as follows.
The embodiment of the invention provides a tissue classification method based on a cardiovascular IVOCT image, wherein the method comprises the following steps:
step 1, acquiring a plurality of marked IVOCT images;
step 2, establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a testing sample set;
step 3, constructing a convolutional neural network structure;
step 4, training the convolutional neural network by using the training sample set to obtain a CNN model;
and 5, inputting the test sample set into the CNN model to obtain tissue type diagrams corresponding to different tissues.
In one embodiment of the present invention, said step 2 comprises,
step 21, performing transformations of multiple forms on each marked IVOCT image to obtain multiple transformed images, and setting each of the multiple transformed images as a sample; wherein
the multi-form transformation comprises one of, or a combination of, cropping, translation, flipping, rotation, deformation and gray-value change;
and step 22, setting the plurality of samples as the IVOCT image sample set.
In an embodiment of the present invention, the method further includes, before step 4:
and setting training labels according to the tissue type contained in the lesion blood vessel, wherein the training labels comprise segmentation labels and boundary labels, and the segmentation labels and the boundary labels are used for training the convolutional neural network.
In one embodiment of the present invention, the type of tissue contained in the lesion blood vessel includes N types, and accordingly, setting the training label includes:
setting N segmentation labels and N boundary labels, wherein N is a positive integer larger than 1.
In one embodiment of the invention, the CNN model comprises an input, a first output and a second output;
the first output end is used for outputting N segmentation graphs;
the second output end is used for outputting N boundary graphs;
the segmentation map is used for displaying the structure of the tissue type contained in the cardiovascular IVOCT image input through the input end; the boundary map is used for displaying the outline of the tissue type contained in the cardiovascular IVOCT image input through the input end;
wherein the segmentation map and the boundary map are both binary maps.
In an embodiment of the present invention, step 5 is followed by:
step 6, overlapping the segmentation graph and the boundary graph corresponding to each tissue type to obtain a structure graph of the tissue type;
and 7, combining the structure maps of different tissue types to obtain the cardiovascular tissue classification map.
Another embodiment of the present invention provides a tissue classification device based on cardiovascular IVOCT images, including a digital signal processing unit and a storage unit, wherein the storage unit is used for storing processing instructions, and the processing instructions are executed by the digital signal processing unit, so as to implement the steps in any one of the methods described above.
The invention has the beneficial effects that:
1. The convolutional neural network structure is provided with one contraction path and two expansion paths, forming a sideways Y-shaped network structure, and the CNN model is then optimized through network training.
2. For tissue classification of cardiovascular IVOCT images, a newly optimized CNN model is established: the structures of the different tissue types in the cardiovascular IVOCT image are output through the first output end, and the boundary contours of those tissue types are output through the second output end, so that the structure and contour of each tissue are displayed separately. The boundary map and the segmentation map are then superimposed to form a complete tissue structure map, which compensates for the image information lost through the reduced resolution of the output image; the degree of restoration of the tissue structure is high, and the image display effect is better.
Drawings
FIG. 1 is a flow chart of a method for tissue classification according to an embodiment of the present invention;
FIG. 2 is a block diagram of a convolutional neural network provided by an embodiment of the present invention;
fig. 3(a) is a schematic diagram of a CNN model provided in an embodiment of the present invention;
fig. 3(b) is a virtual diagram of the CNN model provided in the embodiment of the present invention;
fig. 4(a) is a cardiovascular IVOCT image inputted from an input end of a CNN model provided by an embodiment of the present invention;
fig. 4(b) is a tissue segmentation diagram output from the first output end of the CNN model provided in the embodiment of the present invention;
fig. 4(c) is a tissue boundary diagram output from the second output end of the CNN model provided in the embodiment of the present invention;
fig. 5 is a combined cardiovascular tissue classification map provided by an embodiment of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Example one
As shown in figs. 1 to 5, the embodiment of the invention provides a tissue classification method based on a cardiovascular IVOCT image, wherein the method comprises the following steps:
step 1, acquiring a plurality of marked IVOCT images;
specifically, in the embodiment of the present invention, a plurality of labeled IVOCT images are acquired, specifically, in the clinical examination process, medical staff manually labels and classifies various tissue types in the acquired IVOCT images, and actually each IVOCT image may only include a part of the tissue types, but not all the tissue types, so that a plurality of IVOCT images need to be collected to achieve the purpose of acquiring a structural diagram of all the tissue types.
Step 2, establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a testing sample set;
further, the marked IVOCT image is set as an image sample set, specifically, because the image number manually marked by the medical staff is limited, and a large number of training samples and test samples are needed for performing subsequent network training, the marked IVOCT image needs to be expanded to increase the number of samples, which is beneficial to performing network training, and specifically, the method for expanding the samples comprises the following steps:
step 21, respectively carrying out transformation in multiple forms on each marked IVOCT image to obtain multiple transformed images, and setting each of the multiple transformed images as a sample;
specific deformation modes for each marked IVOCT image include: cutting, translating, turning, rotating, deforming, changing gray level values, etc., and also includes various combinations of the above manners, such as cutting and translating, cutting and turning, cutting and rotating, cutting and deforming, cutting and changing gray level values, cutting, translating and turning, cutting, translating and rotating, cutting, translating and deforming, cutting, translating and changing gray level values, cutting, translating, turning and rotating, cutting, translating, turning and deforming, cutting, translating, turning and changing gray level values, cutting, translating, turning, rotating and deforming, cutting, translating, turning, rotating and changing gray level values, cutting, translating, turning, rotating, deforming, changing gray level values, translating, turning, translating, rotating, etc., it should be noted that the cutting manners also include different manners of chamfering, cutting edge, diamond shape, etc., and the turning also includes turning by 30 degrees, The image processing method and the device have the advantages that the image processing method and the device can be used for expanding one IVOCT image into a plurality of images by turning over 90 degrees and the like under different conditions, so that the IVOCT image is expanded into the plurality of images, the effect of compensating the loss of the output image in advance is achieved, and the technical problem that the image information is lost due to the fact that the resolution ratio of the output image is small in the prior art is solved. Through various deformation inputs to one image, the output image can be completely reproduced.
And step 22, setting the plurality of samples as the IVOCT image sample set.
Setting each deformed IVOCT image as a sample, so that a larger number of samples can be obtained, and the samples form an expanded IVOCT image sample set; and then selecting one part from the image sample set as a training sample, and using the other part as a test sample, wherein the training sample is used for training the classification network, and the test sample is used for testing the trained network to judge the accuracy of the classification network.
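For illustration only (not part of the claimed method), the sample expansion of steps 21 and 22 and the subsequent split into training and test samples can be sketched in numpy; the specific transforms, the 64x64 image size and the 80/20 split ratio below are assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Return several transformed copies of one marked IVOCT image.
    Each copy becomes an independent sample in the expanded set."""
    h, w = image.shape
    samples = [image]
    samples.append(np.fliplr(image))                      # horizontal flip
    samples.append(np.rot90(image))                       # 90-degree rotation
    samples.append(np.roll(image, shift=h // 8, axis=0))  # translation (wrap-around)
    samples.append(np.clip(image * 1.2, 0, 255))          # gray-value change
    samples.append(image[h // 8: -h // 8, w // 8: -w // 8])  # central crop
    return samples

# Expand a toy set of 10 "marked images", then split ~80/20 into train/test.
images = [rng.integers(0, 256, size=(64, 64)).astype(np.float64) for _ in range(10)]
sample_set = [s for img in images for s in augment(img)]
split = int(0.8 * len(sample_set))
train_set, test_set = sample_set[:split], sample_set[split:]
print(len(sample_set), len(train_set), len(test_set))  # 60 48 12
```

In a real pipeline each transform would be applied to the image and its mark (label) jointly, so that the expanded samples stay correctly labeled.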
Step 3, constructing a convolutional neural network structure;
further, as shown in fig. 2, the embodiment of the present invention requires to establish a network for classifying IVOCT images, and in particular, to establish a new convolutional neural network structure, which has a contraction path and two expansion paths,
the systolic path is formed by a typical convolutional network, which is a repeating structure: two repeated convolutions (the image is expanded before convolution) and each convolution is followed by a modified linear unit (ReLU), the systolic path further comprising a maximum pooling operation and a down-sampling with step size of 2. The number of feature channels doubles for each downsampling. Each step of the dilation path performs a deconvolution operation (i.e., upsampling) on the feature map, and the number of feature channels obtained is reduced by half. In the dilation network, we combine the contraction output corresponding to the dilation path with the dilation path, and perform two convolution operations, and each convolution is followed by a modified linear element. It is necessary to apply the per-layer contraction output to the expansion path. Because the up-sampling is realized by deconvolution, if only the 5 th layer output is subjected to 4-layer deconvolution, although we can obtain a picture with the same size as the original image, the obtained result is not fine enough, that is, the classification of the detail part is not accurate enough. Therefore, in the present invention, we also perform the deconvolution operation on the output of each layer of downsampling. Since different kinds of tissues are often close together in IVOCT images, the second output is to prevent adjacent tissues from being joined together and causing erroneous judgment.
Step 4, training the convolutional neural network by using the training sample set to obtain a CNN model;
Specifically, the images in the training sample set are input in turn into the established convolutional neural network, and the parameter data of the convolutional layers and/or fully-connected layers of the network are trained cyclically to obtain image data in a preset format, thereby establishing a CNN model, as shown in fig. 3(a). However, the difference between the preset-format image data output by the initial CNN model and the target file is large, so a large number of images from the training sample set must be input cyclically to optimize the CNN model. It should be noted that the target file in the embodiment of the invention is the training label, which is obtained by analyzing the different tissue types contained in a diseased blood vessel.
Specifically, the training labels are essentially images of all the tissue types in the marked IVOCT images. In the embodiment of the invention, the training labels are divided into segmentation labels and boundary labels: the segmentation labels represent the structures of the various tissue types, and the boundary labels represent their outlines.
Since the diseased blood vessel contains N types of tissue, a segmentation label and a boundary label are set for each tissue type; thus N segmentation labels and N boundary labels need to be set in total, i.e. there are N tissue types to be classified.
Within the scope of current medical knowledge, 11 types of tissue can be identified in the diseased cardiovascular system: bifurcations, fibrous plaques, calcified plaques, lipid plaques, fibrous calcified plaques, fibrous atherosclerotic plaques, red and white thrombi, guide wires, catheters, thin fibrous cap plaques, and vessel walls; that is, N is taken as 11.
It should be noted that, in practical applications, the value of N is not limited to 11. As medical technology develops and new diseased tissues are discovered, the number of tissue types will increase; alternatively, only a subset of the currently known diseased tissues may be selected for classification, for example 4 or 5 of them, with an equal number of training labels set correspondingly to train the convolutional neural network. All of these fall within the protection scope of the embodiments of the invention.
In the embodiment of the invention, taking N as 11, a total of 11 tissue types are analyzed and confirmed from cardiovascular OCT images; structural maps of the 11 tissue types are generated as 11 segmentation labels, and boundary maps of the 11 tissue types are generated as 11 boundary labels. Pictures from the training sample set are then input to train the convolutional neural network: each training image is input through the contraction path, and 11 segmentation maps and 11 boundary maps are output from the two expansion paths, the 11 segmentation maps corresponding to the 11 segmentation labels and the 11 boundary maps to the 11 boundary labels. For example: segmentation label No. 1 represents the structure of a bifurcation, segmentation label No. 2 the structure of fibrous plaque, segmentation label No. 3 the structure of calcified plaque, and so on, with segmentation label No. 11 representing the structure of the blood vessel wall. Likewise, boundary label No. 1 represents the outline of a bifurcation, boundary label No. 2 the outline of fibrous plaque, boundary label No. 3 the outline of calcified plaque, and so on, with boundary label No. 11 representing the outline of the blood vessel wall. Accordingly, among the 11 output segmentation maps and 11 output boundary maps, map No. 1 represents the structure and contour of any bifurcation in the input image, map No. 2 those of fibrous plaque, map No. 3 those of calcified plaque, and so on, with map No. 11 representing those of the blood vessel wall. The aim of the network training is to turn any input image into an output image in the preset format, which is determined by the 11 segmentation labels and the 11 boundary labels.
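A boundary label can in principle be derived from the corresponding segmentation label. The following numpy sketch uses a 4-neighbourhood definition of the boundary (the mask minus its morphological erosion); that definition is an assumption of the example, not something specified by the invention:

```python
import numpy as np

def boundary_from_segmentation(seg):
    """Binary boundary label: mask pixels with at least one background
    4-neighbour, i.e. the mask minus its morphological erosion."""
    seg = seg.astype(bool)
    padded = np.pad(seg, 1)  # zero padding: outside the image is background
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return (seg & ~interior).astype(np.uint8)

# Toy segmentation label for one tissue type: a 4x4 square in an 8x8 image.
seg = np.zeros((8, 8), dtype=np.uint8)
seg[2:6, 2:6] = 1
bnd = boundary_from_segmentation(seg)
print(int(seg.sum()), int(bnd.sum()))  # 16 12
```

The 4x4 square has 16 pixels, of which the 2x2 interior survives erosion, leaving the 12 perimeter pixels as the boundary label.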
Through a large amount of cyclic training, the output image in the preset format and the set training label tend to be close to or even identical continuously, and an optimized CNN model is formed at the moment.
Further, in practical applications, as shown in fig. 3(b), the CNN model includes an input end corresponding to the contraction path of the convolutional neural network structure and, corresponding to the two expansion paths, a first output end and a second output end; the first output end outputs the N segmentation maps and the second output end outputs the N boundary maps. For example, when the cardiovascular IVOCT image fed to the input end contains only 2 kinds of diseased tissue, say lipid plaque (No. 4) and thin fibrous cap plaque (No. 10), the first output end of the CNN model still outputs 11 segmentation maps and the second output end 11 boundary maps, but only maps No. 4 and No. 10 show a tissue structure and a tissue contour respectively; the remaining segmentation maps and boundary maps show nothing, because the input cardiovascular IVOCT image contains none of the other diseased tissues.
In the embodiment of the invention, the segmentation graph and the boundary graph are binary graphs.
And 5, inputting the test sample set into the CNN model to obtain tissue type diagrams corresponding to different tissues.
The optimized CNN model must be tested for accuracy using the test samples. During testing, a cardiovascular IVOCT image is input at the input end of the CNN model, N segmentation maps are output from the first output end and N boundary maps from the second output end, and the output segmentation and boundary maps are compared with standard images to determine the reliability and robustness of the CNN model.
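The comparison between an output binary map and its standard image can be quantified, for example, with a Dice similarity score; the choice of metric is illustrative here, as the patent does not name one:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity between a predicted binary map and its standard
    image: 1.0 means perfect agreement, 0.0 means no overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

truth = np.zeros((8, 8), dtype=np.uint8); truth[2:6, 2:6] = 1  # 16 px square
pred = np.zeros((8, 8), dtype=np.uint8);  pred[2:6, 3:7] = 1   # shifted by 1 px
print(dice(pred, truth))  # 0.75
```

Averaging such a score over all N segmentation maps and N boundary maps of the test set gives one summary number for the model's reliability.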
When the CNN model is in a good state, it can be put into use. As shown in figs. 4(a) to 4(c), a target image is input and the output result obtained: the output segmentation map and boundary map are the structure map and contour map of the corresponding tissue type, realizing artificial-intelligence classification of tissue types. Moreover, by adopting two outputs, the embodiment of the invention both distinguishes the different tissue types and determines the contours of the different tissues, avoiding the misjudgment of tissue structure that occurs when two adjacent tissue types lie so close together that their boundary becomes blurred and indistinguishable; the tissue classification effect is therefore better.
It should be noted that, the tissue classification method provided in the embodiment of the present invention not only classifies the diseased tissue, but also distinguishes all other tissues present in the cardiovascular IVOCT image, and is shown by different tissue type diagrams.
It should be noted that, after the segmentation maps and boundary maps are obtained, in order to present the state of each tissue type in the cardiovascular system more intuitively to the user, the embodiment of the invention further includes the following:
step 6, overlapping the segmentation graph and the boundary graph corresponding to each tissue type to obtain a structure graph of the tissue type;
specifically, the segmentation map No. 1 and the boundary map No. 1 are superimposed, so that the boundary and the structure of the tissue of the blood vessel bifurcation are clear; and then, overlapping the division diagram No. 2 and the boundary diagram No. 2, and so on to obtain a structural diagram of each tissue type.
And step 7, combining the structure maps of the different tissue types to obtain the cardiovascular tissue classification map. After the structure map of each tissue type is acquired, since these structure maps were all classified from the same cardiovascular IVOCT image fed to the input end, they are recombined into one image to obtain the cardiovascular tissue classification map. From this map the user can observe which tissue types are present in the cardiovascular IVOCT image, how they are distributed, where the boundary between two adjacent tissues lies, and how two adjacent tissues are connected, providing a detailed and intuitive user experience.
It should be noted that, since the segmentation map and the boundary map are binary maps and are displayed in black and white, after the structure maps of different tissue types are superimposed, the classification result of the tissue types cannot be presented, and therefore, in the embodiment of the present invention, before the structure maps of different tissue types are combined, the structure maps of different tissues are colored, as shown in fig. 5, to display different tissue types.
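Steps 6 and 7, together with the coloring described above, can be sketched in numpy as follows; the palette, the tissue numbers and the toy map contents are hypothetical, chosen only to make the example self-contained:

```python
import numpy as np

# Hypothetical palette: one RGB colour per tissue type number (1..N).
PALETTE = {1: (255, 0, 0), 4: (0, 255, 0), 10: (0, 0, 255)}

def structure_map(seg, bnd):
    """Step 6: superimpose one tissue type's binary segmentation map and
    binary boundary map into a single structure map (logical OR)."""
    return np.logical_or(seg, bnd).astype(np.uint8)

def classification_map(structure_maps):
    """Step 7: colour each tissue type's structure map and combine them
    into one RGB cardiovascular tissue classification map."""
    h, w = next(iter(structure_maps.values())).shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)  # black background
    for tissue_no, mask in structure_maps.items():
        rgb[mask.astype(bool)] = PALETTE[tissue_no]
    return rgb

# Toy binary maps for one tissue type (No. 4, lipid plaque in the example).
seg = np.zeros((8, 8), dtype=np.uint8); seg[2:6, 2:6] = 1  # 16 structure px
bnd = np.zeros((8, 8), dtype=np.uint8); bnd[1, 1:7] = 1    # 6 contour px
combined = classification_map({4: structure_map(seg, bnd)})
print(combined.shape, int((combined == (0, 255, 0)).all(axis=-1).sum()))
```

With more than one tissue type, each entry of the dictionary is coloured with its own palette entry, so adjacent tissues remain distinguishable in the combined map.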
The embodiment of the invention also provides a tissue classification device based on the cardiovascular IVOCT image, which comprises a digital signal processing unit and a storage unit, wherein the storage unit is used for storing a processing instruction, and the processing instruction is executed by the digital signal processing unit, so that the steps involved in the embodiment are realized.
In summary, the principle and implementation of the method and apparatus for tissue classification based on cardiovascular IVOCT images according to the embodiments of the present invention are described herein by using specific examples, and the above description of the embodiments is only used to help understand the method and its core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention, and the scope of the present invention should be defined by the appended claims.

Claims (6)

1. A method of tissue classification based on cardiovascular IVOCT images, the method comprising:
step 1, acquiring a plurality of marked IVOCT images;
step 2, establishing an IVOCT image sample set, and dividing the IVOCT image sample set into a training sample set and a testing sample set;
step 3, constructing a convolutional neural network structure;
step 4, training the convolutional neural network by using the training sample set to obtain a CNN model;
step 5, inputting the test sample set into the CNN model to obtain tissue type graphs corresponding to different tissues;
the CNN model comprises an input end, a first output end and a second output end;
the first output end is used for outputting N segmentation graphs;
the second output end is used for outputting N boundary graphs;
the segmentation map is used for displaying the structure of the tissue type contained in the cardiovascular IVOCT image input through the input end; the boundary map is used for displaying the outline of the tissue type contained in the cardiovascular IVOCT image input through the input end;
wherein the segmentation map and the boundary map are both binary maps.
2. The method of claim 1, wherein the step 2 comprises,
step 21, performing transformations of multiple forms on each marked IVOCT image to obtain multiple transformed images, and setting each of the multiple transformed images as a sample; wherein
the multi-form transformation comprises one of, or a combination of, cropping, translation, flipping, rotation, deformation and gray-value change;
and step 22, setting the plurality of samples as the IVOCT image sample set.
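The augmentation of claim 2 can be sketched as follows. The particular transforms chosen (flips, a 90-degree rotation, a roll-based translation, a grey-value scaling) are a small illustrative subset of the crop/translate/flip/rotate/deform/grey-value family the claim lists, and the function name `augment` is hypothetical.

```python
import numpy as np


def augment(img):
    """Return a list of transformed copies of one marked IVOCT frame,
    each of which becomes one sample of the image sample set."""
    samples = [img]
    samples.append(np.fliplr(img))            # horizontal flip
    samples.append(np.flipud(img))            # vertical flip
    samples.append(np.rot90(img))             # 90-degree rotation
    samples.append(np.roll(img, 5, axis=1))   # translation (wrap-around)
    samples.append(np.clip(img * 1.2, 0, 1))  # grey-value change
    return samples


frame = np.random.default_rng(1).random((32, 32))
sample_set = augment(frame)  # step 22: these samples form the sample set
```

One labelled frame thus yields several samples, enlarging the training/test sample set without additional annotation effort.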
3. The method for classifying tissue based on the cardiovascular IVOCT image of claim 2, further comprising, before step 4:
setting training labels according to the tissue types contained in the diseased blood vessel, wherein the training labels comprise segmentation labels and boundary labels, and the segmentation labels and the boundary labels are used for training the convolutional neural network.
4. The method for classifying tissue based on the cardiovascular IVOCT image of claim 3, wherein the tissue types contained in the diseased blood vessel include N types, and, accordingly, setting training labels comprises:
setting N segmentation labels and N boundary labels, wherein N is a positive integer greater than 1.
5. The method of claim 4, further comprising, after step 5:
step 6, overlaying the segmentation map and the boundary map corresponding to each tissue type to obtain a structure map of that tissue type;
and step 7, combining the structure maps of the different tissue types to obtain a cardiovascular tissue classification map.
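Steps 6 and 7 of claim 5 amount to simple binary-map composition, sketched below. The pixel-wise OR for the overlay and the "later type wins" label precedence in the merge are assumptions; the claim does not specify how overlapping tissue types are resolved.

```python
import numpy as np


def structure_map(seg, bnd):
    """Step 6: overlay one binary segmentation map with its binary
    boundary map to obtain the structure map of that tissue type."""
    return np.logical_or(seg, bnd).astype(np.uint8)


def classification_map(structure_maps):
    """Step 7: merge the per-type structure maps into one labelled map;
    pixel value k marks tissue type k, 0 is background."""
    out = np.zeros_like(structure_maps[0])
    for k, m in enumerate(structure_maps, start=1):
        out[m > 0] = k
    return out


seg = np.array([[1, 0], [0, 0]], dtype=np.uint8)
bnd = np.array([[0, 1], [0, 0]], dtype=np.uint8)
sm = structure_map(seg, bnd)  # segmentation OR boundary
full = classification_map([sm, np.array([[0, 0], [1, 0]], dtype=np.uint8)])
```

On this toy input the structure map of type 1 covers the top row, type 2 covers one pixel of the bottom row, and the combined classification map labels each accordingly.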
6. A tissue classification device based on cardiovascular IVOCT images, comprising a digital signal processing unit and a storage unit for storing processing instructions, wherein the processing instructions, when executed by the digital signal processing unit, implement the steps of the method according to any one of claims 1-5.
CN201711354768.9A 2017-12-15 2017-12-15 Tissue classification method and device based on cardiovascular IVOCT image Active CN107993229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711354768.9A CN107993229B (en) 2017-12-15 2017-12-15 Tissue classification method and device based on cardiovascular IVOCT image

Publications (2)

Publication Number Publication Date
CN107993229A CN107993229A (en) 2018-05-04
CN107993229B true CN107993229B (en) 2021-11-19

Family

ID=62038744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711354768.9A Active CN107993229B (en) 2017-12-15 2017-12-15 Tissue classification method and device based on cardiovascular IVOCT image

Country Status (1)

Country Link
CN (1) CN107993229B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629773B (en) * 2018-05-10 2021-06-18 北京红云智胜科技有限公司 Method for establishing convolutional neural network data set for training and identifying type of heart blood vessel
CN108805874B (en) * 2018-06-11 2022-04-22 中国电子科技集团公司第三研究所 Multispectral image semantic cutting method based on convolutional neural network
CN109063557B (en) * 2018-06-27 2021-07-09 北京红云智胜科技有限公司 Method for quickly constructing heart coronary vessel identification data set
CN109087284A (en) * 2018-07-10 2018-12-25 重庆康华众联心血管病医院有限公司 A kind of cardiovascular cannula Image-aided detection device and detection method
CN109568047A (en) * 2018-11-26 2019-04-05 焦建洪 A kind of Cardiological intelligence bed special, control system and control method
CN109741335B (en) * 2018-11-28 2021-05-14 北京理工大学 Method and device for segmenting vascular wall and blood flow area in blood vessel OCT image
CN109919932A (en) * 2019-03-08 2019-06-21 广州视源电子科技股份有限公司 The recognition methods of target object and device
CN110148112A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 A method of it acquires and marks the progress data set foundation of tomoscan diagram data
CN111803104B (en) * 2020-07-20 2021-06-11 上海杏脉信息科技有限公司 Medical image display method, medium and electronic equipment
CN114882017B (en) * 2022-06-30 2022-10-28 中国科学院大学 Method and device for detecting thin fiber cap plaque based on intracranial artery image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780495A (en) * 2017-02-15 2017-05-31 深圳市中科微光医疗器械技术有限公司 Cardiovascular implantation support automatic detection and appraisal procedure and system based on OCT
CN107392909A (en) * 2017-06-22 2017-11-24 苏州大学 OCT image layer dividing method based on neutral net with constraint graph search algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10115194B2 (en) * 2015-04-06 2018-10-30 IDx, LLC Systems and methods for feature detection in retinal images

Also Published As

Publication number Publication date
CN107993229A (en) 2018-05-04

Similar Documents

Publication Publication Date Title
CN107993229B (en) Tissue classification method and device based on cardiovascular IVOCT image
CN107909585B (en) Intravascular intima segmentation method of intravascular ultrasonic image
CN110288597B (en) Attention mechanism-based wireless capsule endoscope video saliency detection method
AU2019431299B2 (en) AI systems for detecting and sizing lesions
CN109978037B (en) Image processing method, model training method, device and storage medium
CN110298844B (en) X-ray radiography image blood vessel segmentation and identification method and device
CN111429474B (en) Mammary gland DCE-MRI image focus segmentation model establishment and segmentation method based on mixed convolution
CN107993228B (en) Vulnerable plaque automatic detection method and device based on cardiovascular OCT (optical coherence tomography) image
EP2996058A1 (en) Method for automatically generating representations of imaging data and interactive visual imaging reports
CN111368849A (en) Image processing method, image processing device, electronic equipment and storage medium
US10019794B2 (en) Method and apparatus for breast lesion diagnosis
CN112991346B (en) Training method and training system for learning network for medical image analysis
CN112381164A (en) Ultrasound image classification method and device based on multi-branch attention mechanism
CN113424222A (en) System and method for providing stroke lesion segmentation using a conditional generation countermeasure network
CN110163872A (en) A kind of method and electronic equipment of HRMR image segmentation and three-dimensional reconstruction
Chen et al. AI-PLAX: AI-based placental assessment and examination using photos
CN107945176B (en) Color IVOCT imaging method
CN115409859A (en) Coronary artery blood vessel image segmentation method and device, storage medium and terminal
US11776115B2 (en) System and method for estimating a quantity of interest based on an image of a histological section
MacKay et al. Automated 3D labelling of fibroblasts and endothelial cells in SEM-imaged placenta using deep learning
CN115861298B (en) Image processing method and device based on endoscopic visualization
CN112070778A (en) Multi-parameter extraction method based on intravascular OCT and ultrasound image fusion
WO2021015232A1 (en) Learning device, method, and program, graph structure extraction device, method, and program, and learned extraction model
CN114419061A (en) Method and system for segmenting pulmonary artery and vein blood vessels
CN113744215A (en) Method and device for extracting center line of tree-shaped lumen structure in three-dimensional tomography image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhu Rui

Inventor after: Cao Yihui

Inventor after: Xue Ting

Inventor before: Zhu Rui

Inventor before: Li Jianan

Inventor before: Cao Yihui

Inventor before: Xue Ting

CB02 Change of applicant information

Address after: 710119 Room 101, building 11, new industrial park, No. 60, West Avenue, high tech Zone, Xi'an, Shaanxi Province

Applicant after: Zhongke low light medical research center (Xi'an) Co.,Ltd.

Address before: Room 303, floor 3, Zhongke Chuangxing, southwest corner of bianjia village, 322 Youyi West Road, Xi'an, Shaanxi 710068

Applicant before: XI'AN VIVOLIGHT IMAGING TECHNOLOGY Co.,Ltd.

GR01 Patent grant