CN110660068B - Semi-automatic brain region segmentation method for three-dimensional cell construction image - Google Patents

Semi-automatic brain region segmentation method for three-dimensional cell construction image

Info

Publication number
CN110660068B
Authority
CN
China
Prior art keywords
image
brain
dimensional
images
knowledge
Prior art date
Legal status
Active
Application number
CN201910853268.2A
Other languages
Chinese (zh)
Other versions
CN110660068A (en)
Inventor
丰钊
李安安
刘鑫
倪鸿
龚辉
骆清铭
Current Assignee
Hust-Suzhou Institute For Brainsmatics
Original Assignee
Hust-Suzhou Institute For Brainsmatics
Priority date
Filing date
Publication date
Application filed by Hust-Suzhou Institute For Brainsmatics
Priority to CN201910853268.2A
Publication of CN110660068A
Application granted
Publication of CN110660068B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Abstract

The invention provides a semi-automatic brain region segmentation method for three-dimensional cytoarchitecture images, comprising the following steps: step S1, knowledge introduction; step S2, knowledge digitization; step S3, knowledge packaging; step S4, automatic brain area identification; and step S5, brain region boundary optimization. Because the deep learning method extracts higher-dimensional, more abstract features, the method can identify the complete boundary of a brain region composed of discrete cell bodies. In addition, through interactive segmentation, the invention introduces the prior knowledge of neuroanatomy into a prediction network built with deep learning and digitizes the prior knowledge held by experts, so that tacit experience that previously existed only in the experts' minds becomes a reusable tool. This greatly lowers the threshold for automatic brain region identification, frees ordinary neuroscience researchers from heavy dependence on the narrow specialty of neuroanatomy, and greatly improves brain region identification efficiency.

Description

Semi-automatic brain region segmentation method for three-dimensional cell construction image
Technical Field
The invention relates to the field of image processing, and in particular to a semi-automatic brain region segmentation method for three-dimensional cytoarchitecture (cell construction) images with resolutions ranging from hundreds of micrometers down to micrometers.
Background
Brain imaging is one of the essential technical means for neuroscience research. With progress in micro-optical imaging, large volumes of three-dimensional brain image data can now be acquired at micrometer resolution, laying the foundation for research at finer spatial scales. Among the many types of brain images, cytoarchitecture images are regarded as the gold standard for distinguishing different brain regions. In such images, the morphology of the cell bodies and the spatial aggregation patterns of the cells differ from one brain region to another, so the boundaries of the individual brain regions are usually found and drawn by hand by experienced neuroanatomists who carefully inspect the textural features of the cytoarchitecture image.
Manual identification of brain region boundaries plays an extremely important role in brain science research and underpins work in many directions, such as clinical surgery, precise drug delivery, cognitive-behavior research, and studies of brain functional circuits. Take the study of brain functional circuits as an example. A functional circuit generally refers to an information pathway formed by nerve-fiber connections between different brain regions that cooperate to perform a particular function. A researcher first obtains a three-dimensional image dataset containing the functional-circuit information through biological labeling techniques, and then determines the structural composition of the circuit based on brain region boundaries that were annotated manually in advance.
There are obvious limitations to manual identification of brain regions. First, identifying each brain region one by one by hand is not only extremely time consuming but also requires long-term neuroanatomical training and rich accumulated experience from the operator, which is a huge challenge for most researchers. Second, the manual method is only practical for image datasets obtained by traditional microscopic optical imaging, whose axial resolution reaches only the level of hundreds of micrometers. The axial resolution of the massive image datasets produced by current high-resolution micro-optical imaging reaches the micrometer level, a single dataset contains tens of thousands of images, and the data volume grows by two orders of magnitude compared with the traditional approach.
Traditional manual identification of brain regions requires years of neuroanatomical training, and when facing a brand-new, unannotated cytoarchitecture image the annotator still has to observe, carefully and repeatedly, the shape, orientation, arrangement pattern and other textural information of the neuronal cell bodies, so the process is very inefficient. In the past, a complete whole-brain microscopic optical image dataset contained only dozens to about one hundred images, so although manual identification was time consuming and labor intensive, identifying the brain regions of a whole brain could still be finished in a few months. With the development of imaging technology, a whole-brain dataset nowadays often contains tens of thousands of images; if the traditional manual method were still used, drawing all the brain region boundaries would in theory take years, which is clearly infeasible for neuroscience research.
Although the conventional method is time consuming and labor intensive, automating the identification of brain region boundaries on cytoarchitecture images is difficult. The information contained in a cytoarchitecture image is highly complex: each brain region appears as an irregular area in which a large number of discrete cells aggregate with a particular density and arrangement. Traditional image segmentation algorithms, which rely on continuous and regular gray-scale information, cannot extract such region boundaries.
Disclosure of Invention
In view of the shortcomings of the prior art, the present invention aims to provide an efficient method for semi-automatic segmentation of brain regions in three-dimensional cytoarchitecture images.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a method of semi-automatic brain region segmentation of a three-dimensional cytostatic image, the method comprising the steps of:
step S1, knowledge introduction: constructing an image data set based on the three-dimensional cells, selecting a two-dimensional image sequence, and marking each brain area boundary on each image of the two-dimensional image sequence to form a gold standard image set;
step S2, knowledge digitization: traversing all pixel points of each gold standard image in the gold standard image set, counting the number of the pixel points, selecting partial pixel points, constructing a local image and forming a training set;
Step S3, knowledge packaging: selecting a multi-target classification deep learning network structure to train by taking the training set as input, and constructing a multi-target classification deep learning prediction network;
step S4, automatic brain area identification: predicting all images of the three-dimensional cell construction image data set by using the multi-target classification deep learning prediction network to obtain a prediction result image set containing a target region;
step S5, brain region boundary optimization: for each prediction result image in the prediction result image set, searching a gold standard image which is closest to the prediction result image in space in the gold standard image set, and registering the gold standard image to the prediction result image through a nonlinear registration algorithm to obtain a brain region segmentation result image.
Further, the two-dimensional image sequence is selected from the three-dimensional cytoarchitecture image dataset by extracting one image every first number of images along the axial direction of the dataset.
Furthermore, marking the brain region boundaries on the images of the two-dimensional image sequence means labeling the area covered by the brain region of interest with pixels of the same gray value while labeling the other brain regions and the background with different gray values, so that the boundaries between brain regions are clearly visible.
Further, the marking of the brain region boundaries on the images of the two-dimensional image sequence is manual interactive annotation.
Further, the local images include positive example local images and negative example local images, which together form the training set.
Further, a positive example local image is a local image of a first size constructed on the image of the two-dimensional image sequence corresponding to the current gold standard image, centered on one of a second number of pixels randomly selected for each image of the gold standard image set in proportion to the number of pixels contained in the different brain regions.
Further, a negative example local image is a local image of the first size constructed in the same way, centered on one of a third number of pixels selected from the blank background area of the current image that does not belong to any brain region.
Further, the multi-target classification deep learning network structure is an Inception Net network, the initialization weights of the Inception Net network are weights obtained by pre-training on the ImageNet training set, and an Adam optimizer is adopted.
Further, the prediction performed on all images of the three-dimensional cytoarchitecture image dataset with the multi-target classification deep learning prediction network yields predicted images containing several target regions; each such predicted image is split into several predicted images containing only a single target region, and these single-target-region predicted images form the prediction result image set containing target regions.
Further, the nonlinear registration algorithm is a diffeomorphic nonlinear registration algorithm.
The invention brings recent advances in deep learning into the field of image segmentation and applies them to the segmentation of brain region boundaries. Because deep learning can extract higher-dimensional and more abstract features, it can identify the complete boundary of a brain region composed of discrete cell bodies. In addition, through interactive segmentation, the invention introduces the prior knowledge of neuroanatomy into a prediction network built with deep learning and digitizes the prior knowledge held by experts, turning tacit experience that previously existed only in the experts' minds into a reusable tool. This greatly lowers the threshold for automatic brain region identification, frees ordinary neuroscience researchers from heavy dependence on the narrow specialty of neuroanatomy, and greatly improves brain region identification efficiency.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a semi-automatic brain region segmentation method for a three-dimensional cytoarchitecture image according to an embodiment of the present invention.
Fig. 2 is a flowchart of the semi-automatic brain region segmentation method as applied, in an embodiment of the present invention, to the mouse main olfactory bulb dataset described below.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, a semi-automatic brain region segmentation method for a three-dimensional cytoarchitecture image according to an embodiment of the present invention includes the following steps:
step S1, knowledge introduction: based on the three-dimensional cytoarchitecture image dataset, selecting a two-dimensional image sequence and marking each brain region boundary on every image of the two-dimensional image sequence to form a gold standard image set;
The two-dimensional image sequence is selected from the three-dimensional cytoarchitecture image dataset by extracting one image every first number of images along the axial direction, so that the selected two-dimensional image sequence is representative of the whole dataset.
Marking the brain region boundaries on the images of the two-dimensional image sequence means labeling the area covered by the brain region of interest (i.e., the brain region to be identified automatically in the subsequent steps) with pixels of the same gray value, while labeling the other brain regions and the background with different gray values, so that the boundaries between brain regions are clearly visible.
The marking of the brain region boundaries on the images of the two-dimensional image sequence is manual interactive annotation, that is, annotation that is mainly manual and assisted by automatic tools, which improves efficiency.
Referring to fig. 2, which shows the flow of the semi-automatic brain region segmentation method for a three-dimensional cytoarchitecture image according to an embodiment of the present invention, a mouse micrometer-resolution three-dimensional cytoarchitecture image dataset D1 containing the main olfactory bulb is selected. The main olfactory bulb is one of the most important functional structures of the mammalian central olfactory system and is responsible for receiving and integrating the input signals of the olfactory organs. Its internal structure is highly complex and comprises brain regions such as the olfactory nerve layer, the glomerular layer, the external plexiform layer, the mitral cell layer, the internal plexiform layer and the granule cell layer, and traditional target recognition methods have difficulty completing the image segmentation task for such complex brain regions. In this embodiment, a mouse of the C57 strain is selected as the experimental animal, a brain tissue sample is prepared using Nissl staining, and a whole-brain image dataset with a spatial resolution of 1 micrometer in both the lateral and axial directions is acquired by microscopic optical sectioning tomography; that is, the spatial resolution of the three-dimensional cytoarchitecture image dataset D1 in the lateral direction (the image plane) is 1 μm/pixel, and the axial spacing between adjacent images of the dataset is likewise 1 μm. From the three-dimensional cytoarchitecture image dataset D1, one image is extracted every 100 images along the axial direction to construct the two-dimensional image sequence D2.
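By way of illustration only, the construction of the two-dimensional image sequence D2 by axial subsampling can be sketched as follows in Python; the file layout and names (a directory of axially ordered TIFF slices) and the path are assumptions made for the sketch and are not specified in the patent.

```python
# Minimal sketch: build D2 by keeping every 100th axial slice of D1.
# Assumes D1 is stored as a directory of axially ordered TIFF slices.
from pathlib import Path

import tifffile  # third-party TIFF reader commonly used for microscopy data

def build_d2(d1_dir: str, step: int = 100):
    slices = sorted(Path(d1_dir).glob("*.tif"))   # axial order assumed to follow file names
    selected = slices[::step]                     # one image every `step` slices
    return [tifffile.imread(p) for p in selected], selected

d2_images, d2_paths = build_d2("/data/D1", step=100)  # hypothetical path
```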
The boundaries of the brain regions of the main olfactory bulb are drawn manually on each image of the two-dimensional image sequence D2, and the pixels inside each brain region boundary are labeled with the same gray value, yielding the corresponding annotated images. These annotated images constitute the gold standard image set D3, whose number of images is N. The annotation can be done with software such as Amira or ITK-SNAP, which provide powerful interactive image annotation functions and greatly improve annotation efficiency. In this embodiment, taking Amira as an example, an image is first imported and the coverage of the brain region of interest is marked manually with the annotation tools provided by Amira, such as the lasso or a gray-level threshold. The boundary is then refined further with Amira's tools for hole filling, two-dimensional boundary smoothing, three-dimensional volume smoothing and the like. To further improve efficiency, interval labeling can also be used: for example, of 5 images to be annotated, the 1st and 5th are labeled manually, and the coverage of the brain region of interest on the 3 images in between is computed automatically with Amira's annotation-region interpolation tool. Of course, the automatically computed range is usually not accurate enough, so it is corrected further to match the true extent of the brain region. This annotation process, which combines manual work with computer assistance, is called interactive annotation.
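The annotation interpolation used for interval labeling is provided by Amira as a built-in tool; purely to illustrate the principle (this is not Amira's implementation), masks for the in-between slices can be approximated by interpolating signed distance maps of the two manually labeled masks, as in the following sketch.

```python
# Illustration of interval labeling: estimate masks for slices between two
# manually annotated slices by linear interpolation of signed distance maps.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    # Positive inside the labeled brain region, negative outside.
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_masks(mask_a: np.ndarray, mask_b: np.ndarray, n_between: int):
    sd_a = signed_distance(mask_a.astype(bool))
    sd_b = signed_distance(mask_b.astype(bool))
    masks = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)
        masks.append(((1 - t) * sd_a + t * sd_b) > 0)   # threshold back to a mask
    return masks
```

As in the interactive workflow described above, such automatically generated masks would still be corrected manually afterwards.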
Step S2, knowledge digitization: traversing all pixel points of each gold standard image in the gold standard image set, counting the number of the pixel points, selecting partial pixel points, constructing a local image and forming a training set;
the local images comprise positive example local images and negative example local images, and the positive example local images and the negative example local images form the training set.
A positive example local image is a local image of a first size constructed on the image of the two-dimensional image sequence corresponding to the current gold standard image, centered on one of a second number of pixels randomly selected for each image of the gold standard image set in proportion to the number of pixels contained in the different brain regions. A negative example local image is a local image of the first size constructed in the same way, centered on one of a third number of pixels selected from the blank background area of the current image that does not belong to any brain region.
referring to fig. 2, for each image in the gold standard image set D3, traversing each pixel point thereon, and counting the number of pixel points in each brain region of the dominant olfactory bulb, such as olfactory nerve layer, olfactory bulbar layer, outer plexiform cell layer, mitral cell layer, inner plexiform cell layer, and granular cell layer; randomly selecting 8 ten thousand pixel points according to the proportion of the number of the pixel points in different brain areas, and constructing 8 ten thousand positive example local images with the size of 200 x 200 on the image corresponding to the current image in the two-dimensional image sequence D2 by taking the pixel point as a center; then 2 ten thousand pixel points are randomly selected from a blank background area which does not belong to any brain area on the image, and 2 ten thousand counter example local images with the size of 200 multiplied by 200 are constructed according to the same method. By traversing each image of the golden standard image set D3 according to the method, N × 10 ten thousand partial images can be obtained, thereby forming a training set D4.
Step S3, knowledge packaging: selecting a multi-target classification deep learning network structure to train by taking the training set as input, and constructing a multi-target classification deep learning prediction network;
referring to fig. 2, the training set D4 is used as an input image, and an inclusion Net deep learning network structure is used to perform training: the last classifier of the Incepton Net network is changed into 2 layers of full-connection layers, the feature number of each layer is set to be 1024, and a Dropout layer is added. The initialization weight of the network adopts the weight obtained by pre-training on an ImageNet training set. The Adam optimizer is adopted during training, the initial learning rate is 0.00001, the training steps are 10, when the loss of the verification set is not reduced, the learning rate is reduced by 10 times, when the loss of the verification set is not reduced within 3 steps, the network stops training, and the model parameters with the minimum loss of the corresponding verification set are stored. And obtaining the multi-target classification deep learning prediction network Net1 after the training is finished.
Step S4, automatic brain area identification: predicting all images of the three-dimensional cell construction image data set by using the multi-target classification deep learning prediction network to obtain a prediction result image set containing a target region;
Specifically, the prediction performed on all images of the three-dimensional cytoarchitecture image dataset with the multi-target classification deep learning prediction network yields predicted images containing several target regions; each such predicted image is split into several predicted images containing only a single target region, and these single-target-region predicted images form the prediction result image set containing target regions.
Referring to fig. 2, the three-dimensional cytoarchitecture image dataset D1 is traversed, and each image is fed into the multi-target classification deep learning prediction network Net1 to obtain a corresponding predicted image, which contains several detected brain regions, the pixels of each brain region being labeled with the same gray value. All of the predicted images together form the predicted image set D5, whose images correspond one to one, by index, with the images of the three-dimensional cytoarchitecture image dataset D1.
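How the 200 × 200 patch classifier is swept over a full slice to produce the label image is not spelled out in the patent; a straightforward reading is strided sliding-window classification, sketched below (the stride, padding and channel replication are assumptions).

```python
# Sketch: label a whole slice with the patch classifier Net1 via a sliding window.
import numpy as np
import torch

def predict_slice(model, image: np.ndarray, stride: int = 50,
                  patch: int = 200, device: str = "cpu") -> np.ndarray:
    half = patch // 2
    padded = np.pad(image.astype(np.float32), half, mode="reflect")
    label = np.zeros(image.shape, dtype=np.uint8)
    model.eval().to(device)
    with torch.no_grad():
        for y in range(0, image.shape[0], stride):
            for x in range(0, image.shape[1], stride):
                win = padded[y:y + patch, x:x + patch]
                t = torch.from_numpy(win)[None, None].repeat(1, 3, 1, 1).to(device)
                cls = int(model(t).argmax(dim=1))        # predicted region id for this window
                label[y:y + stride, x:x + stride] = cls  # gray value = region id (assumption)
    return label
```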
For each image in the predicted image set D5, all gray values appearing in the image are recorded to form a gray value set G with k elements. The gray value set G is traversed; for any element g, the pixels whose gray value equals g are retrieved on the image, a blank image of the same size as the images in the predicted image set D5 is created, and all pixels whose gray value equals g are copied into the blank image. The operation is repeated until k images are created, each containing only a single gray value. The predicted image set D5 is traversed and the operation repeated, so that every multi-target-region predicted image in D5 is split into several predicted images containing only a single target region. These single-target-region predicted images constitute the prediction result image set D6 containing the target regions.
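The gray-value splitting just described amounts to the following short routine (a sketch; function and array names are assumptions):

```python
# Sketch: split a multi-region predicted image into k single-region images.
import numpy as np

def split_by_gray_value(pred: np.ndarray) -> dict:
    singles = {}
    for g in np.unique(pred):
        if g == 0:                       # skip the blank background
            continue
        single = np.zeros_like(pred)     # blank image of the same size
        single[pred == g] = g            # copy only the pixels of this gray value
        singles[int(g)] = single
    return singles                       # one image per gray value in G
```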
Step S5, brain region boundary optimization: for each prediction result image in the prediction result image set, searching a gold standard image which is closest to the prediction result image in space in the gold standard image set, and registering the gold standard image to the prediction result image through a nonlinear registration algorithm to obtain a brain region segmentation result image.
This step converts the raw predicted images obtained in step S4, which still contain scattered points and holes, into brain regions with smooth boundaries that meet the requirements of neuroscience research. The nonlinear registration algorithm is a diffeomorphic nonlinear registration algorithm.
Referring to fig. 2, the prediction result image set D6 containing the target regions is traversed; for each prediction result image I, the gold standard image J in the gold standard image set D3 that is closest in spatial distance is found, and this nearest gold standard image J is registered to the prediction result image I with a diffeomorphic nonlinear registration algorithm. After this registration-based optimization, the brain region segmentation result image D7 is obtained.
Because the gold standard image J is manually annotated, the brain region boundaries on it are complete and continuous, and because the diffeomorphic algorithm preserves topology, it maintains the topological properties of an object's shape while applying a nonlinear spatial transformation. The registered result is therefore close in shape to the prediction result image I on the one hand, while preserving the correct, manually drawn brain region shape of the gold standard image J on the other.
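The patent requires a diffeomorphic nonlinear registration but does not name an implementation; the SyN transform in ANTs (available in Python through the antspyx package) is one widely used diffeomorphic algorithm, and the following sketch uses it purely as an illustration of this step.

```python
# Sketch of step S5: warp the nearest gold standard image J onto prediction image I.
import ants  # antspyx

def refine_boundaries(pred_path: str, gold_path: str):
    fixed = ants.image_read(pred_path)     # prediction result image I
    moving = ants.image_read(gold_path)    # nearest manually annotated image J
    reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")
    # Nearest-neighbor interpolation keeps the discrete label values intact.
    warped = ants.apply_transforms(fixed=fixed, moving=moving,
                                   transformlist=reg["fwdtransforms"],
                                   interpolator="nearestNeighbor")
    return warped                          # brain region segmentation result (D7)
```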
The invention brings recent advances in deep learning into the field of image segmentation and applies them to the segmentation of brain region boundaries. Because deep learning can extract higher-dimensional and more abstract features, it can identify the complete boundary of a brain region composed of discrete cell bodies. In addition, through interactive segmentation, the invention introduces the prior knowledge of neuroanatomy into a prediction network built with deep learning and digitizes the prior knowledge held by experts, turning tacit experience that previously existed only in the experts' minds into a reusable tool. This greatly lowers the threshold for automatic brain region identification, frees ordinary neuroscience researchers from heavy dependence on the narrow specialty of neuroanatomy, and greatly improves brain region identification efficiency.
The above description covers only embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any change or substitution that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be that of the appended claims.

Claims (6)

1. A semi-automatic brain region segmentation method for a three-dimensional cytoarchitecture image, the method comprising the following steps:
step S1, knowledge introduction: based on the three-dimensional cytoarchitecture image dataset, selecting a two-dimensional image sequence and marking each brain region boundary on every image of the two-dimensional image sequence to form a gold standard image set;
wherein the two-dimensional image sequence is selected from the three-dimensional cytoarchitecture image dataset by extracting one image every first number of images along the axial direction of the dataset;
wherein the brain region boundaries are marked on the images of the two-dimensional image sequence by manual interactive annotation, labeling the area covered by the brain region of interest with pixels of the same gray value while labeling the other brain regions and the background with different gray values, so that the boundaries between brain regions are clearly visible;
step S2, knowledge digitization: traversing all pixels of each gold standard image in the gold standard image set, counting the pixels, selecting a subset of them, constructing local images, and forming a training set;
step S3, knowledge packaging: taking the training set as input, selecting and training a multi-target classification deep learning network structure, thereby constructing a multi-target classification deep learning prediction network;
step S4, automatic brain area identification: predicting all images of the three-dimensional cytoarchitecture image dataset with the multi-target classification deep learning prediction network to obtain a prediction result image set containing target regions; specifically, the prediction performed on all images of the three-dimensional cytoarchitecture image dataset with the multi-target classification deep learning prediction network yields predicted images containing several target regions, each such predicted image is split into several predicted images containing only a single target region, and these single-target-region predicted images form the prediction result image set containing target regions;
step S5, brain region boundary optimization: for each prediction result image in the prediction result image set, finding the gold standard image in the gold standard image set that is spatially closest to it and registering that gold standard image to the prediction result image with a nonlinear registration algorithm to obtain the brain region segmentation result image.
2. The method of claim 1, wherein the local images comprise positive example local images and negative example local images, and the positive example local images and the negative example local images form the training set.
3. The method of claim 2, wherein a positive example local image is a local image of a first size constructed on the image of the two-dimensional image sequence corresponding to the current image, centered on one of a second number of pixels randomly selected for each image of the gold standard image set in proportion to the number of pixels contained in the different brain regions.
4. The method of claim 3, wherein a negative example local image is a local image of the first size constructed on the image of the two-dimensional image sequence corresponding to the current image, centered on one of a third number of pixels selected from a blank background area of the current image that does not belong to any brain region.
5. The method of claim 1, wherein the multi-target classification deep learning network structure is an InceptionNet network, the initialization weights of the InceptionNet network are weights pre-trained on the ImageNet training set, and an Adam optimizer is adopted.
6. The method of claim 1, wherein the nonlinear registration algorithm is a diffeomorphic nonlinear registration algorithm.
CN201910853268.2A 2019-09-10 2019-09-10 Semi-automatic brain region segmentation method for three-dimensional cell construction image Active CN110660068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910853268.2A CN110660068B (en) 2019-09-10 2019-09-10 Semi-automatic brain region segmentation method for three-dimensional cell construction image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910853268.2A CN110660068B (en) 2019-09-10 2019-09-10 Semi-automatic brain region segmentation method for three-dimensional cell construction image

Publications (2)

Publication Number Publication Date
CN110660068A CN110660068A (en) 2020-01-07
CN110660068B true CN110660068B (en) 2022-06-03

Family

ID=69036923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910853268.2A Active CN110660068B (en) 2019-09-10 2019-09-10 Semi-automatic brain region segmentation method for three-dimensional cell construction image

Country Status (1)

Country Link
CN (1) CN110660068B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111916187B (en) * 2020-07-17 2024-04-19 华中科技大学 Medical image cell position auxiliary user positioning method, system and device
CN111887813A (en) * 2020-08-11 2020-11-06 南通大学 Method and device for recognizing brain region position of fresh in-vitro tissue

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6430430B1 (en) * 1999-04-29 2002-08-06 University Of South Florida Method and system for knowledge guided hyperintensity detection and volumetric measurement
CN109308680A (en) * 2018-08-30 2019-02-05 迈格生命科技(深圳)有限公司 A kind of brain anatomy tutoring system based on nuclear magnetic resonance image
CN110136157A (en) * 2019-04-09 2019-08-16 华中科技大学 A kind of three-dimensional carotid ultrasound image vascular wall dividing method based on deep learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9208556B2 (en) * 2010-11-26 2015-12-08 Quantitative Insights, Inc. Method, system, software and medium for advanced intelligent image analysis and display of medical images and information
WO2015002846A2 (en) * 2013-07-02 2015-01-08 Surgical Information Sciences, Inc. Method and system for a brain image pipeline and brain image region location and shape prediction

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6430430B1 (en) * 1999-04-29 2002-08-06 University Of South Florida Method and system for knowledge guided hyperintensity detection and volumetric measurement
CN109308680A (en) * 2018-08-30 2019-02-05 迈格生命科技(深圳)有限公司 A kind of brain anatomy tutoring system based on nuclear magnetic resonance image
CN110136157A (en) * 2019-04-09 2019-08-16 华中科技大学 A kind of three-dimensional carotid ultrasound image vascular wall dividing method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"A multi-phase semi-automatic approach for multisequence brain tumor image segmentation";Khai Yin Lim等;《Expert Systems with Applications》;20181201;第112卷;全文 *
"An active texture-based digital atlas enables automated mapping of structures and markers across brains";Chen, Y.等;《Nat Methods》;20190311;第16卷;全文 *
"基于随机游走算法的CT图像肺区域和肺肿瘤的分割研究";顾潇蒙;《中国优秀博硕士学位论文全文数据库(硕士)·医药卫生科技辑》;20170315;第2017年卷(第3期);全文 *

Also Published As

Publication number Publication date
CN110660068A (en) 2020-01-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant