CN111091530A - Automatic detection method and system for single neuron dendritic spines in fluorescent image - Google Patents


Publication number
CN111091530A
CN111091530A (application CN201811243701.2A)
Authority
CN
China
Prior art keywords
dendritic
image
single neuron
connected domain
network
Prior art date
Legal status: Granted
Application number
CN201811243701.2A
Other languages
Chinese (zh)
Other versions
CN111091530B (en)
Inventor
曾绍群
程胜华
余雅清
刘小茂
刘钰蓉
王小俊
尹芳芳
李宁
Current Assignee: Huazhong University of Science and Technology
Original Assignee: Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201811243701.2A
Publication of CN111091530A
Application granted
Publication of CN111091530B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS › G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0012 — Biomedical image inspection (under G06T 7/00 Image analysis › G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06F 18/23 — Clustering techniques (under G06F 18/00 Pattern recognition › G06F 18/20 Analysing)
    • G06N 3/045 — Combinations of networks (under G06N 3/00 Computing arrangements based on biological models › G06N 3/02 Neural networks › G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/08 — Learning methods (under G06N 3/02 Neural networks)
    • G06T 3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images (under G06T 3/00 Geometric image transformation › G06T 3/40 Scaling)
    • G06T 7/10 — Segmentation; Edge detection (under G06T 7/00 Image analysis)
    • G06T 2207/10056 — Microscopic image (indexing scheme: image acquisition modality)
    • G06T 2207/30004 — Biomedical image processing (indexing scheme: subject of image)

Abstract

The invention discloses an automatic detection method and system for single-neuron dendritic spines in a fluorescence image. The method is implemented as follows: the three-dimensional fluorescence image to be detected is split into several image blocks, and each block is projected and stitched to obtain the target images to be identified corresponding to all dendrites of the single neuron; seed points are scattered uniformly along the single-neuron dendritic skeleton, and neighborhood image blocks centered on each seed point are taken from the target image to be identified; dendritic-spine segmentation is performed on each neighborhood image block with a semantic segmentation network based on a deep residual network structure and a multi-scale cavity (dilated) convolution structure, and the resulting segmentation maps are fused; connected-domain analysis is then performed on the fused dendritic-spine segmentation maps to obtain single connected domains, the spines on each connected domain are split into one or more dendritic spines, and the spines of all connected domains are merged to obtain the single-neuron dendritic-spine segmentation result. The method greatly improves the accuracy of dendritic-spine identification.

Description

Automatic detection method and system for single neuron dendritic spines in fluorescent image
Technical Field
The invention belongs to the field of image analysis of neuroscience, and particularly relates to an automatic detection method and system for single neuron dendritic spines in a fluorescence image based on a deep semantic segmentation network.
Background
Neurons are mainly composed of a soma, dendrites and axons, and different neurons connect with one another and transmit information through synapses. Many spinous protrusions, called dendritic spines, are visible on the surface of dendrites; they are a site where synapses form between a neuron's dendrites and the axon terminals of other neurons. Medical research shows that dendritic spines have electrical properties and plasticity; these properties are related to the growth and development of nerve cells and to the establishment and disappearance of synapses, and dendritic spines play a key role in human cognition, learning, memory and neurological disease. Detecting neuronal dendritic spines is therefore the basis for studying dendritic-spine structure, including their number, shape, density, distribution and changes, and is of great significance to life science.
Dendritic spines are extremely fine, microscopic structures: their length is generally 0.5–2 μm and their volume typically lies between 0.01 μm³ and 0.8 μm³. Existing fluorescence labeling techniques and high-resolution three-dimensional imaging techniques make it possible to acquire sub-micron-resolution three-dimensional fluorescence image datasets of single neurons, making the systematic identification and quantification of single-neuron dendritic spines feasible.
Early identification and detection of neuronal dendritic spines relied mainly on manual extraction, which is time-consuming and labor-intensive, highly subjective, lacks a uniform standard, and has a high error rate; for the vast number of fine spines of a neuron, manual identification barely scratches the surface. In recent years many scholars have worked on dendritic-spine detection, and many semi-automatic or automatic spine segmentation methods have been proposed. The core of these methods is to extract the central axis of the dendrite and then further separate the spines according to the morphological characteristics of the spines and the dendrite. Such methods demand high image resolution and are intended for high-resolution imaging of local neuronal structures, where the acquired image usually contains only one or a few dendrite segments. They face challenges in the dendritic-spine detection task for intact neurons: a) traditional methods are generally based on hand-crafted features or filtering rules, which work for a small number of local spines but are not robust for the vast number of spines of a complete neuron, so spines of all shapes, sizes and degrees of adhesion are hard to accommodate and the methods have clear limitations; b) the spines on some dendrites are very dense and adhere to one another, and such dense spines are hard to distinguish with traditional methods.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the invention provides an automatic detection method and system for single-neuron dendritic spines in a fluorescence image, thereby solving the technical problems that existing single-neuron dendritic-spine detection is time-consuming and labor-intensive under big data and struggles to handle spines of all shapes, sizes and degrees of adhesion.
To achieve the above object, according to one aspect of the present invention, there is provided an automatic detection method of a single neuron dendritic spine in a fluorescence image, comprising:
splitting a three-dimensional fluorescent image to be detected into a plurality of image blocks according to the artificially tracked single neuron dendritic structure, and then performing projection splicing on each image block to obtain target images to be identified corresponding to all the single neuron dendrites in the three-dimensional fluorescent image to be detected;
uniformly scattering seed points along the single neuron dendritic framework in the three-dimensional fluorescent image to be detected, and taking neighborhood image blocks with preset sizes taking various seed points as centers in the target image to be identified;
performing dendritic spine segmentation on each neighborhood image block by using a semantic segmentation network based on a depth residual error network structure and a multi-scale cavity convolution structure to obtain a dendritic spine segmentation map, and fusing the dendritic spine segmentation maps of all the neighborhood image blocks;
and performing connected domain analysis on the fused dendritic spine segmentation maps to obtain a single connected domain, splitting the dendritic spine on each connected domain into one or more dendritic spines, and merging the dendritic spines in each connected domain to obtain a single neuron dendritic spine segmentation result of the three-dimensional fluorescence image to be detected.
Preferably, the splitting the three-dimensional fluorescent image to be detected into a plurality of image blocks according to the artificially tracked single neuron dendritic structure, and then performing projection splicing on each image block to obtain a target image to be identified corresponding to all the single neuron dendrites of the three-dimensional fluorescent image to be detected includes:
splitting all dendrites of the single neuron in the three-dimensional fluorescence image to be detected into a plurality of dendrite segments with equal length according to the artificially tracked single neuron dendrite structure, wherein the adjacent dendrite segments are overlapped;
for each dendritic section, taking a minimum image block containing the dendritic section from the three-dimensional fluorescent image to be detected, setting a pixel, the axial distance of which from the image block to the dendritic section is greater than a preset distance threshold value, in the image block to be detected to be 0, and then projecting the image block along the Z axis to obtain an image to be identified;
and splicing the images to be identified of each dendritic section together according to the corresponding XY coordinates to obtain target images to be identified corresponding to all the dendrites of the single neuron of the three-dimensional fluorescent image to be detected.
Preferably, the semantic segmentation network based on the deep residual network structure and the multi-scale cavity (dilated) convolution structure is built as follows:
the first three residual modules of a deep residual network are retained and the fourth residual module is replaced by a dilated convolution module with a first dilation rate, yielding a first structure; a multi-scale dilated convolution structure, formed by connecting in parallel a dilated convolution module with a second dilation rate, a dilated convolution module with a third dilation rate, a dilated convolution module with a fourth dilation rate and a convolution module with kernel size p × p, is connected in series after the first structure, yielding a second structure; a residual convolution module, a convolution layer and an upsampling layer are connected in series after the second structure to obtain the semantic segmentation network.
Preferably, before the obtaining of the dendritic spine segmentation map by performing dendritic spine segmentation on each of the neighborhood image blocks using the semantic segmentation network based on the depth residual network structure and the multi-scale cavity convolution structure, the method further includes:
splitting the three-dimensional fluorescent image into a plurality of image blocks according to the artificially tracked single neuron dendritic structure, and then performing projection splicing on each image block to obtain target images to be identified corresponding to all the single neuron dendrites in the three-dimensional fluorescent image;
randomly scattering seed points along a single neuron dendritic framework in the three-dimensional fluorescent image, taking neighborhood image blocks with preset sizes taking various seed points as centers from the target image to be recognized and the mask image corresponding to the three-dimensional fluorescent image as a sample set, and randomly splitting the sample set into a training sample set and a verification sample set;
and training the semantic segmentation network by adopting a transfer learning mode through the training sample set and the verification sample set to obtain the trained semantic segmentation network.
Preferably, the training the semantic segmentation network by the training sample set and the verification sample set in a transfer learning manner to obtain the trained semantic segmentation network includes:
calling weights based on an ImageNet data set to initialize a residual error network part in a semantic segmentation network, randomly initializing a cavity convolution module in the semantic segmentation network, setting convolution layer parameters of the first two residual error modules in the semantic segmentation network as non-learnable before the first round of training is started, setting other layers as learnable, and taking cross entropy as a loss function;
testing image blocks in the training sample set by using the first-round network model, excavating wrongly-classified pixels, randomly selecting a plurality of seed points in the wrongly-classified pixels, taking the selected seed points as a center to extract the image blocks with preset sizes as difficult samples for second-round training, combining the difficult samples and the training sample set according to a preset proportion to form a second-round training sample set again, and training the first-round network model by using the second-round training sample set to obtain a trained semantic segmentation network.
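The hard-example mining described above — testing the training set with the first-round model, collecting misclassified pixels, and sampling new seed points among them — can be illustrated with a small numpy sketch. The function name `mine_hard_seeds` and its signature are illustrative assumptions, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def mine_hard_seeds(pred, mask, n_seeds):
    # pixels where the first-round prediction disagrees with the ground truth
    wrong = np.argwhere(pred != mask)
    if len(wrong) == 0:
        return wrong
    # randomly pick seed points among the misclassified pixels; image blocks
    # of the preset size centred on them become the hard samples
    idx = rng.choice(len(wrong), size=min(n_seeds, len(wrong)), replace=False)
    return wrong[idx]
```

In the second round, blocks centred on these seeds would be mixed with the original training set at a preset proportion before retraining.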
Preferably, the performing connected domain analysis on the fused dendritic spine segmentation maps to obtain a single connected domain, and then splitting the dendritic spine on each connected domain into one or more dendritic spines includes:
for each pixel point in a connected domain, acquiring the local density of the pixel point and the shortest distance from the pixel point to all pixel points higher than the local density of the pixel point;
selecting a clustering center according to a two-dimensional feature space formed by the local density of the pixel points and the shortest distance corresponding to the pixel points;
and distributing each pixel point in the connected domain to the selected clustering center so as to split the dendritic spines in the connected domain into one or more dendritic spines.
Preferably, the selecting a cluster center according to a two-dimensional feature space formed by the local density of the pixel points and the shortest distance corresponding to the pixel points includes:
for any one connected domain I*To 1, pair*Obtaining the local density rho and the shortest distance delta of each pixel point p in the image to form a rho-delta two-dimensional space;
calculating the local density of each pixel point in the ρ-δ two-dimensional space, taking the points whose local density is smaller than a preset density value as candidate clustering points, and selecting from the candidate clustering points the points whose shortest distance is larger than a preset distance threshold as target clustering centers.
Preferably, the allocating each pixel point in the connected domain to the selected clustering center includes:
arranging the pixel points in the connected domain in a descending order according to local density to form a sequencing point set, wherein the sequencing point set does not comprise a target clustering center;
for each target point in the sorted point set, finding a point closest to the target point in a point set with a density greater than the local density of the target point, if the closest point has been allocated, taking the class of the closest point as the class of the target point, if the closest point has not been allocated, not performing any operation, forming all unallocated points into a new sorted point set, and allocating based on the new sorted point set until all points are completely allocated.
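The splitting described in the clauses above — per-pixel local density ρ, shortest distance δ to any higher-density pixel, centre selection, and density-ordered assignment — can be sketched in numpy. This is a minimal illustration, not the patent's exact procedure: the cutoff `dc` and threshold `delta_thresh` are assumed parameters, and centres are picked by the simple large-δ criterion rather than the ρ-δ meta-space selection of the preceding clause:

```python
import numpy as np

def density_peak_split(points, dc, delta_thresh):
    # points: (N, 2) pixel coordinates of one connected domain
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rho = (d < dc).sum(axis=1) - 1            # local density (excluding the point itself)
    n = len(pts)
    delta = np.full(n, d.max())               # density maxima get the largest distance
    nearest_higher = np.full(n, -1)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]    # all points denser than i
        if len(higher):
            j = higher[np.argmin(d[i, higher])]
            delta[i], nearest_higher[i] = d[i, j], j
    centers = np.where(delta > delta_thresh)[0]
    labels = np.full(n, -1)
    labels[centers] = np.arange(len(centers))
    # assign remaining points in decreasing-density order to the cluster of
    # their nearest higher-density neighbour
    for i in np.argsort(-rho):
        if labels[i] < 0:
            labels[i] = labels[nearest_higher[i]]
    return labels
```

Two adhered spines then come out as two labels over the pixels of one connected domain.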
According to another aspect of the present invention, there is provided an automatic detection system for single neuron dendritic spines in a fluorescence image, comprising:
the first splitting module is used for splitting the three-dimensional fluorescent image to be detected into a plurality of image blocks according to the artificially tracked single neuron dendritic structure, and then performing projection splicing on each image block to obtain a target image to be identified corresponding to all the single neuron dendrites in the three-dimensional fluorescent image to be detected;
the second splitting module is used for uniformly scattering seed points along the single neuron dendritic framework of the three-dimensional fluorescent image to be detected and taking neighborhood image blocks with preset sizes taking various seed points as centers in the target image to be identified;
the first segmentation module is used for performing dendritic spine segmentation on each neighborhood image block by using a semantic segmentation network based on a depth residual error network structure and a multi-scale cavity convolution structure to obtain a dendritic spine segmentation map and fusing the dendritic spine segmentation maps of all the neighborhood image blocks;
and the second segmentation module is used for analyzing the connected domains of the fused dendritic spine segmentation maps to obtain a single connected domain, splitting the dendritic spines on each connected domain into one or more dendritic spines, and combining the dendritic spines in each connected domain to obtain a single neuron dendritic spine segmentation result of the three-dimensional fluorescence image to be detected.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
1. the method comprises the step of segmenting dendritic spines of single neurons by establishing a semantic segmentation network model based on a depth residual error network and a multi-scale cavity convolution structure. Compared with the traditional dendritic spine segmentation method based on morphological rules, the dendritic spine segmentation method based on deep learning is more robust, is more suitable for segmenting dendritic spines with different shapes and sizes, and greatly improves the accuracy of identifying the dendritic spines.
2. For the complete single-neuron tree structure, the invention designs a redundant segmentation method along the tree, which is an effective scheme for identifying single-neuron dendritic spines. Since the spatial size of an intact single neuron is typically 10³ × 10³ × 10³ voxels, it is difficult to process directly, and a block-and-fuse scheme is generally adopted. However, the common block-fusion approach divides the volume simultaneously along the horizontal and vertical directions, which is very inefficient for a tree structure. Targeting the characteristics of the tree structure, the invention instead designs a redundant segmentation method along the neuron's skeleton lines, which better suits the tree structure of the neuron.
3. Since the dendritic spines on the partial dendrites are very dense and conglutination exists, the invention divides the conglutinated dendritic spines by connected domain analysis and can effectively prevent the single dendritic spine from being over-divided into a plurality of dendritic spines.
Drawings
FIG. 1 is a schematic flow chart of a method for automatically detecting single neuron dendritic spines in a fluorescence image according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a result of extracting a single neuron dendritic image signal according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a single neuron tree structure along redundant segments of a fiber scaffold according to an embodiment of the present invention;
fig. 4 is a sample of a deep semantic segmentation network provided by an embodiment of the present invention, where a left image a is an input image block, and a right image b is a corresponding output dendritic spine segmentation mask image;
fig. 5 is a schematic diagram of sample upsampling of a depth semantic segmentation network according to an embodiment of the present invention, where the left image is a 120 × 120 image block, and the right image is a 960 × 960 image block after upsampling;
FIG. 6 is a structural diagram of a dendritic spine segmentation network based on a depth residual error network and a multi-scale cavity convolution structure according to an embodiment of the present invention;
fig. 7 is a segmentation result of all dendritic spines of a complete neuron according to an embodiment of the present invention, where the upper two graphs correspond to the dendritic spine segmentation result output by the deep semantic segmentation network, and the lower two graphs correspond to a single dendritic spine segmentation graph obtained after further performing a density peak clustering algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order.
To adapt to the neuron dendritic-spine segmentation task under big data, the method completely abandons the traditional idea of first extracting the central axis and then detecting the spines. Instead, a deep-learning approach is adopted: three-dimensional fluorescence images of neurons serve as the source of the neural-network sample set, dendritic spines in the fluorescence images are annotated, and a deep semantic segmentation network combining a residual neural network and cavity (dilated) convolution is built and trained to segment the spines. Because this initial segmentation is not always thorough and some dendritic spines may remain stuck together, connected-domain analysis is then used to detect and separate single spines and complete the segmentation task.
Fig. 1 is a schematic flow chart of a method according to an embodiment of the present invention, where the method shown in fig. 1 includes the following steps:
s1, splitting the three-dimensional fluorescent image to be detected into a plurality of image blocks according to the artificially tracked single neuron dendritic structure, and then performing projection splicing on each image block to obtain target images to be identified corresponding to all the single neuron dendrites of the three-dimensional fluorescent image to be detected;
in the embodiment of the present invention, the specific implementation manner of step S1 is:
splitting all dendrites of a single neuron in a three-dimensional fluorescence image to be detected into a plurality of dendrite segments with equal length according to the artificially tracked single neuron dendrite structure, wherein the adjacent dendrite segments are overlapped;
for each dendritic section, taking a minimum image block containing the dendritic section from the three-dimensional fluorescence image to be detected, setting a pixel with an axial distance from the image block to the dendritic section greater than a preset distance threshold value to be 0, and then projecting the image block along a Z axis to obtain an image to be identified;
and splicing the images to be identified of each dendritic section together according to the corresponding XY coordinates to obtain target images to be identified corresponding to all the dendrites of the single neuron of the three-dimensional fluorescent image to be detected.
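The per-segment masking and Z-projection of step S1 can be sketched in a few lines of numpy. This is an illustrative sketch under the assumption that the traced centre line of a segment is represented as a per-(Y, X) map of Z coordinates; the function name and arguments are not from the patent:

```python
import numpy as np

def project_dendrite_block(block, axis_z, z_thresh):
    # block: (Z, Y, X) minimal sub-volume containing one dendrite segment
    # axis_z: per-(Y, X) Z coordinate of the traced dendrite centre line
    zz = np.arange(block.shape[0])[:, None, None]
    # zero pixels whose axial (Z) distance to the centre line exceeds the threshold
    keep = np.abs(zz - axis_z[None, :, :]) <= z_thresh
    # maximum projection along the Z axis yields the image to be identified
    return (block * keep).max(axis=0)
```

The projections of all segments would then be stitched by their XY coordinates, with overlaps between adjacent segments absorbing boundary effects.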
S2, uniformly scattering seed points along the single neuron dendritic framework in the three-dimensional fluorescent image to be detected, and taking neighborhood image blocks with preset sizes taking various seed points as centers in the target image to be recognized;
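Uniform scattering of seed points along the dendritic skeleton, as in S2, amounts to resampling the traced polyline at equal arc-length steps. A minimal numpy sketch, with `uniform_seeds` and the 2-D polyline representation as illustrative assumptions:

```python
import numpy as np

def uniform_seeds(skeleton_xy, step):
    # resample a traced skeleton polyline at (approximately) uniform arc-length spacing
    pts = np.asarray(skeleton_xy, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    targets = np.arange(0.0, s[-1] + 1e-9, step)  # uniformly spaced positions
    xs = np.interp(targets, s, pts[:, 0])
    ys = np.interp(targets, s, pts[:, 1])
    return np.stack([xs, ys], axis=1)
```

Each returned point would then serve as the centre of a neighborhood image block of the preset size.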
s3, performing dendritic spine segmentation on each neighborhood image block by using a semantic segmentation network based on a depth residual error network structure and a multi-scale cavity convolution structure to obtain a dendritic spine segmentation map, and fusing the dendritic spine segmentation maps of all the neighborhood image blocks;
the common deep residual error network comprises ResNet-34, ResNet-50, ResNet-101 and the like, the network computing cost, the network capacity and the pathological section image characteristics are comprehensively considered, and ResNet-50 is preferably used in the embodiment of the invention.
In the embodiment of the present invention, as shown in fig. 6, the structure of the semantic segmentation network is:
the method comprises the steps of reserving the first three residual modules in a depth residual network ResNet-50, replacing the fourth residual module in the depth residual network structure with a hole convolution module with the first distance of k to obtain a first structure, connecting a multi-scale hole convolution structure formed by connecting a hole convolution module with the second distance of l, a hole convolution module with the third distance of m, a hole convolution module with the fourth distance of n and a hole convolution module with the convolution kernel size of p multiplied by p in parallel behind the first structure in series to obtain a second structure, and connecting a residual convolution module, a convolution layer and an upper sampling layer behind the second structure in series to obtain the semantic segmentation network.
The multi-scale void convolution module has the following functions: the method comprises the steps of (1) retaining semantic position information while encoding large-scale semantic information in an image; semantic information of different scales is coded, so that the network can be suitable for dendritic spines with different sizes in the image.
The values of k, l, m, n and p may be determined according to actual needs, and in the embodiment of the present invention, k is preferably 2, l is preferably 4, m is preferably 8, n is preferably 12, and p is preferably 1.
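As an illustrative aside (not part of the patent text): a cavity (dilated) convolution with rate r samples the input on a grid with "holes" r pixels apart, enlarging a k × k kernel's receptive field to (k−1)·r + 1 without adding parameters — which is why parallel rates such as 2, 4, 8 and 12 capture spines of different sizes. A minimal numpy sketch of a single-channel dilated convolution:

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    # 'same'-size dilated (cavity) convolution: the k x k kernel taps are
    # spread 'rate' pixels apart (cross-correlation convention, as in deep
    # learning), so the receptive field grows to (k-1)*rate + 1
    k = kernel.shape[0]
    pad = (k - 1) * rate // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * rate, j * rate
            out += kernel[i, j] * padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out
```

With rate 1 this reduces to an ordinary convolution; the parallel branches of the multi-scale structure differ only in `rate`.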
In the embodiment of the present invention, before performing dendritic spine segmentation on each neighborhood image block by using a semantic segmentation network to obtain a dendritic spine segmentation map, training the semantic segmentation network is further included, where the specific training process includes:
splitting the three-dimensional fluorescent image into a plurality of image blocks according to the artificially tracked single neuron dendritic structure, and then performing projection splicing on each image block to obtain target images to be identified corresponding to all the single neuron dendrites in the three-dimensional fluorescent image;
randomly scattering seed points along a single neuron dendritic framework of a three-dimensional fluorescent image, taking neighborhood image blocks with preset sizes taking various seed points as centers from a target image to be identified and a mask image corresponding to the three-dimensional fluorescent image as a sample set, and randomly splitting the sample set into a training sample set and a verification sample set;
during the preparation process of the sample set, according to the artificially tracked single neuron dendritic structure, extracting an image to be identified in the three-dimensional fluorescence image data according to the following mode: as shown in fig. 3, all dendrites of a single neuron are divided into several segments with equal length, and there is a certain overlap between adjacent segments; for each small segment of dendrite, taking a minimum image block containing the small segment of dendrite from the three-dimensional fluorescence image, then setting a pixel, the axial distance of which from the image block to the segment of dendrite is greater than a specified threshold value, to be 0, and finally performing maximum projection on the image block along the Z axis to obtain an image to be identified; the images to be recognized of all the small segments of dendrites are spliced together according to the corresponding XY coordinates to obtain the images to be recognized of the targets corresponding to all the dendrites of the single neuron, as shown in FIG. 2. The redundant splitting mode of the tree structure can effectively process complete neurons. Only adjacent image signals are reserved along the dendritic center skeleton as recognition objects, and the processing method greatly reduces the influence of other neuron signals on the current neuron dendritic recognition.
A number of seed points are randomly selected along the dendritic skeleton; neighborhood image blocks of a preset size centered on these seed points are taken from the obtained target image to be identified, and mask blocks at the corresponding positions are taken from the corresponding manually segmented dendritic-spine mask image. The pair consisting of an image block from the target image and the mask block at the same position is called a sample; fig. 4 shows an image block from the target image to be identified and the corresponding mask block. The image block is the input of the semantic segmentation network and the corresponding mask block is its ideal output; the learning algorithm optimizes the weight parameters of the network according to the difference between the actual output and the ideal output. Because dendritic spines are very small, with a diameter of about 2–5 pixels, if image blocks of the preset size taken from the target image were fed directly to the deep semantic segmentation network, the positional information of the spines would be drowned out by the trunk fibers inside the network. Therefore, each image block of the preset size taken from the target image is upsampled several-fold; fig. 5 shows the comparison before and after upsampling.
The preset size may be determined according to actual needs, and in the embodiment of the present invention, it is preferably 120 × 120.
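A minimal sketch of the sample extraction and upsampling just described (the 120 × 120 size comes from the text; the upsampling factor and nearest-neighbour scheme are illustrative assumptions):

```python
import numpy as np

def extract_patch_pair(image, mask, seed_yx, size=120, upsample=2):
    """Crop a size x size patch around a seed point from both the projected
    image and its spine mask, then upsample the image patch by
    nearest-neighbour repetition so the ~2-5 px spines are not swamped."""
    half = size // 2
    y, x = seed_yx
    img_patch = image[y - half:y + half, x - half:x + half]
    mask_patch = mask[y - half:y + half, x - half:x + half]
    # Nearest-neighbour upsampling of the network input only
    img_up = np.repeat(np.repeat(img_patch, upsample, axis=0), upsample, axis=1)
    return img_up, mask_patch
```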
And training the semantic segmentation network by adopting a transfer learning mode through the training sample set and the verification sample set to obtain the trained semantic segmentation network.
In the embodiment of the invention, the deep semantic segmentation network is trained by transfer learning. Because the semantic segmentation network constructed above is very deep, it is prone to overfitting if trained directly on the limited dendritic spine image data. The training comprises the following steps:
First round of training: the residual network part of the semantic segmentation network is initialized with weights pretrained on the ImageNet data set, and the hole convolution modules are randomly initialized. Before training begins, the parameters of the first two residual modules are frozen (set to non-learnable) and all other layers are set to learnable. Cross entropy is used as the loss function and the network weights are updated with the Adam algorithm. Since dendritic spine pixels account for only a very small proportion of the mask image, the spine and non-spine pixel classes are weighted to reduce the effect of class imbalance on the learning of the whole network. By observing the loss curves of the training and verification sets, the first round of training can be stopped once both loss functions meet the preset requirement and the difference between them lies within the preset range.
The preset requirement and the preset range can be determined according to actual needs; for example, the requirement may be that both loss functions are sufficiently small and that the difference between them is small.
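The class weighting mentioned above can be sketched as a weighted cross-entropy over spine/background pixels. The 10:1 weight ratio below is purely illustrative; the patent does not specify the weights:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, w_spine=10.0, w_bg=1.0):
    """Per-pixel weighted binary cross-entropy.

    probs  -- predicted spine probability per pixel, in (0, 1)
    labels -- 1 for spine pixels, 0 for background
    Spine pixels get a larger weight to counter the class imbalance."""
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)
    loss = -(w_spine * labels * np.log(probs)
             + w_bg * (1.0 - labels) * np.log(1.0 - probs))
    return loss.mean()
```

In the actual training this role would be played by the framework's built-in weighted cross-entropy loss, optimized with Adam as the text states.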
Second round of training: a hard-sample mining strategy is adopted to further improve the network's recognition of dendritic spines. The model trained in the first round is used to test the image blocks in the training set, and the misclassified pixels (false positives and false negatives) are mined out. A number of seed points are then randomly selected among these misclassified pixels, and image blocks of a preset size centered on them are extracted as hard samples for the second round. The hard samples and the first-round samples are combined in proportion to form a new second-round training sample set, on which the second round of learning is performed.
The preset size can be determined according to actual needs.
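The hard-sample mining step can be sketched as follows (names are illustrative; the patent only specifies mining misclassified pixels and sampling seed points among them):

```python
import numpy as np

def mine_hard_seeds(pred_mask, true_mask, n_seeds, rng=None):
    """Collect pixels the first-round model got wrong (false positives and
    false negatives) and randomly pick seed points among them for
    second-round patch extraction."""
    rng = np.random.default_rng(rng)
    wrong_y, wrong_x = np.nonzero(pred_mask != true_mask)
    if len(wrong_y) == 0:
        return []
    picks = rng.choice(len(wrong_y), size=min(n_seeds, len(wrong_y)),
                       replace=False)
    return [(int(wrong_y[i]), int(wrong_x[i])) for i in picks]
```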
S4, performing connected domain analysis on the fused dendritic spine segmentation maps to obtain single connected domains, splitting the dendritic spines on each connected domain into one or more dendritic spines, and merging the dendritic spines of all connected domains to obtain the single neuron dendritic spine segmentation result of the three-dimensional fluorescence image to be detected.
In the embodiment of the invention, for the dendritic spine segmentation result produced by the semantic segmentation network, adhered dendritic spines are further split into single dendritic spines. Specifically: connected domain analysis is performed on the dendritic spine segmentation map obtained by the semantic segmentation network to obtain single connected domains, and the dendritic spines on each connected domain are then split into one or more dendritic spines.
For each pixel point p_i in a connected domain, two variables are first defined: the local density ρ_i of the point, and the shortest distance δ_i from the point to all pixel points with higher local density than it. They are defined as follows:
ρ_i = (1/Z) · Σ_{p_j: ‖p_j − p_i‖ ≤ R} I(p_j) · exp(−‖p_j − p_i‖² / σ²)    (1)
where I(p_j) denotes the pixel value of p_j, σ is a constant, Z is a normalization constant, and R is the window radius (R = 2σ).
After the local density of each pixel point is obtained, the shortest distance of each pixel point is calculated according to the following formula:
δ_i = min_{j: ρ_j > ρ_i} ‖p_i − p_j‖; for the point with the highest local density, δ_i = max_j ‖p_i − p_j‖    (2)
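A minimal numeric sketch of the local density and shortest distance defined above (the normalization constant Z is dropped since it does not change the ordering of densities; all names are illustrative):

```python
import numpy as np

def density_and_delta(points, intensities, sigma=2.0):
    """Intensity-weighted Gaussian local density within a window of radius
    R = 2*sigma, and the distance from each point to its nearest
    higher-density point (max distance for the global density peak)."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distances between all points
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    R = 2.0 * sigma
    w = np.where(d <= R, np.exp(-(d ** 2) / sigma ** 2), 0.0)
    rho = w @ np.asarray(intensities, dtype=float)
    delta = np.empty(len(pts))
    for i in range(len(pts)):
        higher = np.nonzero(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()
    return rho, delta
```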
The clustering centers are selected according to the two-dimensional feature space formed by the local density and the shortest distance. In density peak clustering, a cluster center is characterized by both a large local density ρ and a large distance δ, and therefore appears as a relatively isolated point in the ρ-δ two-dimensional feature space. In the embodiment of the present invention, the cluster centers are selected according to this characteristic, with the following specific operations:
a) let a connected domain be I*; for each pixel point p_i in I*, calculate its local density ρ_i and shortest distance δ_i by equations (1) and (2) above, forming the ρ-δ two-dimensional space;
b) calculate the density Λ of each point in the ρ-δ space, take the points with low density Λ as candidate cluster points, and additionally require that their shortest distance be greater than a preset threshold Δ*; the final cluster centers are determined by these two conditions together. The density Λ measures how sparsely or densely the points are distributed in the ρ-δ space, so the relatively isolated points in that space can be screened out by computing Λ. The threshold Δ* is generally set to the average dendritic spine diameter, so that a single dendritic spine is not split into two and over-segmentation is prevented.
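A sketch of this center-selection rule in the ρ-δ plane; the patent does not fix an estimator for Λ, so a simple fixed-radius neighbour count is assumed here:

```python
import numpy as np

def select_centers(rho, delta, lam_threshold, delta_threshold):
    """Pick cluster centers as points that are isolated in the rho-delta
    feature space (low density Lambda) and whose delta also exceeds the
    preset threshold (roughly the average spine diameter)."""
    feats = np.stack([rho, delta], axis=1)
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    lam = (d <= 1.0).sum(axis=1) - 1  # neighbours within unit radius, minus self
    return np.nonzero((lam <= lam_threshold) & (delta > delta_threshold))[0]
```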
After the cluster center is selected, each point in the connected domain needs to be allocated to the selected cluster center to obtain a final dendritic spine segmentation result. The method comprises the following specific steps:
1) numbering each clustering center;
2) the pixel points in the connected domain are arranged in descending order of local density ρ to form a sorted point set, which does not include the cluster centers;
3) for each point, the closest point to it is found within the set of points denser than it. If that closest point has already been assigned, its class is taken as the class of this point and the class number is recorded for it; if the closest point has not been assigned, no action is taken. All points in the sorted point set are traversed in this way, and the remaining unassigned points form a new sorted point set;
4) step 3) is repeated until all points are assigned. Finally, points with the same number are grouped into the same class.
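The assignment steps above can be sketched compactly. Visiting points in decreasing density guarantees that the nearest denser point is already labelled, provided the cluster centers include the global density peak (a standard property of density-peak clustering, assumed here; all names are illustrative):

```python
import numpy as np

def assign_to_centers(points, rho, center_idx):
    """Label each non-center point with the class of its nearest
    higher-density labelled point, visiting points in decreasing density."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    labels = -np.ones(len(pts), dtype=int)
    for k, c in enumerate(center_idx):       # 1) number each cluster center
        labels[c] = k
    order = np.argsort(-np.asarray(rho))     # 2) descending local density
    for i in order:
        if labels[i] >= 0:
            continue
        # 3) nearest point among denser, already-labelled points
        higher = [j for j in range(len(pts))
                  if rho[j] > rho[i] and labels[j] >= 0]
        labels[i] = labels[min(higher, key=lambda j: d[i, j])]
    return labels                            # 4) same number -> same class
```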
In the embodiment of the present invention, since the semantic segmentation network segments the image block by block, for a complete single neuron all dendrites must be split into many small segments; a corresponding image block is extracted for each segment to perform spine segmentation, and the segmentation results of all image blocks are then merged to obtain the final single neuron dendritic spine segmentation result. In step S2, seed points are scattered uniformly along the dendritic skeleton and image blocks are extracted centered on the seed points. Adjacent image blocks carry a certain redundancy, which reduces the impact on spine segmentation when a spine lies on a block boundary. The spacing of the seed points is computed from the redundant width between blocks, and the seeds are then spread uniformly at that spacing. A schematic of this redundant splitting strategy for the dendritic structure is shown in fig. 3.
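The uniform seed scattering can be sketched in one dimension along the skeleton arc length (numbers and names are illustrative):

```python
def seed_positions(skeleton_length, patch_size, overlap):
    """Seed spacing = patch size minus the redundant overlap between
    adjacent blocks; seeds are spread uniformly at that spacing."""
    step = patch_size - overlap
    return list(range(patch_size // 2, int(skeleton_length), step))
```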
The embodiment of the invention also provides an automatic detection system of the single neuron dendritic spines in the fluorescence image, which comprises:
the first splitting module is used for splitting the three-dimensional fluorescent image to be detected into a plurality of image blocks according to the manually traced single neuron dendritic structure, and then performing projection stitching on the image blocks to obtain the target images to be identified corresponding to all dendrites of the single neuron in the three-dimensional fluorescent image to be detected;
the second splitting module is used for uniformly scattering seed points along the single neuron dendritic skeleton of the three-dimensional fluorescent image to be detected, and taking, from the target image to be identified, neighborhood image blocks of a preset size centered on each seed point;
the first segmentation module is used for performing dendritic spine segmentation on each neighborhood image block by using a semantic segmentation network based on a depth residual network structure and a multi-scale hole convolution structure to obtain dendritic spine segmentation maps, and fusing the dendritic spine segmentation maps of all neighborhood image blocks;
and the second segmentation module is used for performing connected domain analysis on the fused dendritic spine segmentation maps to obtain single connected domains, splitting the dendritic spines on each connected domain into one or more dendritic spines, and merging the dendritic spines of all connected domains to obtain the single neuron dendritic spine segmentation result of the three-dimensional fluorescence image to be detected.
The specific implementation of each module may refer to the description of the method embodiment, and the embodiment of the present invention will not be repeated.
Fig. 7 shows the segmentation results of all dendritic spines of a complete neuron obtained by the method of the present invention, including the segmentation results output by the semantic segmentation network and the single-spine results obtained by the further spine splitting. It can be seen that most of the dendritic spines on the dendrites of the neuron are segmented, and that adhered dendritic spines are also separated. Compared against the manually labeled standard, the method segmented 2343 dendritic spines, 2953 dendritic spines were manually labeled, and 2220 dendritic spines were successfully identified.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. An automatic detection method of single neuron dendritic spines in a fluorescence image is characterized by comprising the following steps:
splitting a three-dimensional fluorescent image to be detected into a plurality of image blocks according to the traced single neuron dendritic structure, and then performing projection splicing on each image block to obtain target images to be identified corresponding to all the single neuron dendrites in the three-dimensional fluorescent image to be detected;
uniformly scattering seed points along the single neuron dendritic framework in the three-dimensional fluorescent image to be detected, and taking, from the target image to be identified, neighborhood image blocks of a preset size centered on each seed point;
performing dendritic spine segmentation on each neighborhood image block by using a semantic segmentation network based on a depth residual error network structure and a multi-scale cavity convolution structure to obtain a dendritic spine segmentation map, and fusing the dendritic spine segmentation maps of all the neighborhood image blocks;
and performing connected domain analysis on the fused dendritic spine segmentation maps to obtain a single connected domain, splitting the dendritic spine on each connected domain into one or more dendritic spines, and merging the dendritic spines in each connected domain to obtain a single neuron dendritic spine segmentation result of the three-dimensional fluorescence image to be detected.
2. The method according to claim 1, wherein the step of splitting the three-dimensional fluorescence image to be detected into a plurality of image blocks according to the traced single neuron dendritic structure, and then performing projection stitching on each image block to obtain the target image to be recognized corresponding to all the single neuron dendrites of the three-dimensional fluorescence image to be detected comprises:
splitting all dendrites of the single neuron into a plurality of dendrite segments with equal length according to the traced dendrite structure of the single neuron, wherein superposition exists between every two adjacent dendrite segments;
for each dendritic section, taking a minimum image block containing the dendritic section from the three-dimensional fluorescent image to be detected, setting to 0 any pixel in the image block whose axial distance to the dendritic section is greater than a preset distance threshold, and then projecting the image block along the Z axis to obtain an image to be identified;
and splicing the images to be identified of each dendrite segment together according to the corresponding XY coordinates to obtain target images to be identified corresponding to all dendrites of the single neuron in the three-dimensional fluorescent image to be detected.
3. The method according to claim 1 or 2, wherein the semantic segmentation network based on the depth residual network structure and the multi-scale hole convolution structure comprises:
the method comprises the steps of reserving the first three residual modules in a depth residual network structure, replacing the fourth residual module in the depth residual network structure with a cavity convolution module with a first interval to obtain a first structure, connecting a multi-scale cavity convolution structure formed by connecting a cavity convolution module with a second interval, a cavity convolution module with a third interval, a cavity convolution module with a fourth interval and a cavity convolution module with convolution kernel size of p multiplied by p in parallel behind the first structure in series to obtain a second structure, and connecting a residual convolution module, a convolution layer and an upper sampling layer behind the second structure in series to obtain the semantic segmentation network.
4. The method according to claim 3, wherein before said obtaining a dendritic spine segmentation map by performing dendritic spine segmentation on each of said neighborhood image blocks using a semantic segmentation network based on a depth residual network structure and a multi-scale hole convolution structure, the method further comprises:
splitting the three-dimensional fluorescent image into a plurality of image blocks according to the tracked single neuron dendritic structures, and then performing projection splicing on each image block to obtain target images to be identified corresponding to all the single neuron dendrites in the three-dimensional fluorescent image;
randomly scattering seed points along a single neuron dendritic framework in the three-dimensional fluorescent image, taking, from the target image to be recognized and the mask image corresponding to the three-dimensional fluorescent image, neighborhood image blocks of a preset size centered on each seed point as a sample set, and randomly splitting the sample set into a training sample set and a verification sample set;
and training the semantic segmentation network by adopting a transfer learning mode through the training sample set and the verification sample set to obtain the trained semantic segmentation network.
5. The method according to claim 4, wherein the training the semantic segmentation network with the training sample set and the verification sample set by using the transfer learning method to obtain the trained semantic segmentation network comprises:
calling weights based on an ImageNet data set to initialize a residual error network part in a semantic segmentation network, randomly initializing a cavity convolution module in the semantic segmentation network, setting convolution layer parameters of the first two residual error modules in the semantic segmentation network as non-learnable before the first round of training is started, setting other layers as learnable, and taking cross entropy as a loss function;
testing image blocks in the training sample set by using the first-round network model, excavating wrongly-classified pixels, randomly selecting a plurality of seed points in the wrongly-classified pixels, taking the selected seed points as a center to extract the image blocks with preset sizes as difficult samples for second-round training, combining the difficult samples and the training sample set according to a preset proportion to form a second-round training sample set again, and training the first-round network model by using the second-round training sample set to obtain a trained semantic segmentation network.
6. The method according to claim 4 or 5, wherein the performing connected domain analysis on the fused dendritic spine segmentation maps to obtain single connected domains, and then splitting the dendritic spine on each connected domain into one or more dendritic spines comprises:
for each pixel point in a connected domain, acquiring the local density of the pixel point and the shortest distance from the pixel point to all pixel points higher than the local density of the pixel point;
selecting a clustering center according to a two-dimensional feature space formed by the local density of the pixel points and the shortest distance corresponding to the pixel points;
and distributing each pixel point in the connected domain to the selected clustering center so as to split the dendritic spines in the connected domain into one or more dendritic spines.
7. The method according to claim 6, wherein selecting a cluster center according to a two-dimensional feature space formed by the local density of the pixel points and the shortest distance corresponding to the pixel points comprises:
for any connected domain I*, obtaining the local density ρ and the shortest distance δ of each pixel point p in I* to form a ρ-δ two-dimensional space;
calculating the local density of each pixel point in the rho-delta two-dimensional space, taking the points with the local density smaller than a preset density value as candidate clustering points, and selecting the points with the shortest distance larger than a preset distance threshold value from the candidate clustering points as target clustering centers.
8. The method of claim 7, wherein assigning each pixel point in the connected domain to a selected cluster center comprises:
arranging the pixel points in the connected domain in a descending order according to local density to form a sequencing point set, wherein the sequencing point set does not comprise a target clustering center;
for each target point in the sorted point set, finding a point closest to the target point in a point set with a density greater than the local density of the target point, if the closest point has been allocated, taking the class of the closest point as the class of the target point, if the closest point has not been allocated, not performing any operation, forming all unallocated points into a new sorted point set, and allocating based on the new sorted point set until all points are completely allocated.
9. An automatic detection system for single neuron dendritic spines in a fluorescence image, comprising:
the first splitting module is used for splitting the three-dimensional fluorescent image to be detected into a plurality of image blocks according to the traced single neuron dendritic structure, and then performing projection splicing on each image block to obtain target images to be identified corresponding to all the single neuron dendrites of the three-dimensional fluorescent image to be detected;
the second splitting module is used for uniformly scattering seed points along the single neuron dendritic framework in the three-dimensional fluorescent image to be detected and taking neighborhood image blocks with preset sizes taking various seed points as centers in the target image to be identified;
the first segmentation module is used for performing dendritic spine segmentation on each neighborhood image block by using a semantic segmentation network based on a depth residual error network structure and a multi-scale cavity convolution structure to obtain a dendritic spine segmentation map and fusing the dendritic spine segmentation maps of all the neighborhood image blocks;
and the second segmentation module is used for analyzing the connected domains of the fused dendritic spine segmentation maps to obtain a single connected domain, splitting the dendritic spines on each connected domain into one or more dendritic spines, and combining the dendritic spines in each connected domain to obtain a single neuron dendritic spine segmentation result of the three-dimensional fluorescence image to be detected.
CN201811243701.2A 2018-10-24 2018-10-24 Automatic detection method and system for single neuron dendritic spines in fluorescent image Active CN111091530B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811243701.2A CN111091530B (en) 2018-10-24 2018-10-24 Automatic detection method and system for single neuron dendritic spines in fluorescent image


Publications (2)

Publication Number Publication Date
CN111091530A true CN111091530A (en) 2020-05-01
CN111091530B CN111091530B (en) 2022-06-17

Family

ID=70392195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811243701.2A Active CN111091530B (en) 2018-10-24 2018-10-24 Automatic detection method and system for single neuron dendritic spines in fluorescent image

Country Status (1)

Country Link
CN (1) CN111091530B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113421218A (en) * 2021-04-16 2021-09-21 深圳大学 Method for extracting branch point of vascular network
CN113792745A (en) * 2021-09-17 2021-12-14 重庆大学 Method and system for extracting single-sided tree point cloud skeleton line
CN116563616A (en) * 2023-04-23 2023-08-08 北京大学 Image recognition method, computer equipment and medium based on neural network

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103150573A (en) * 2012-12-24 2013-06-12 西交利物浦大学 Nerve dendritic spine image classification method based on multiresolution fractal features
US20140169647A1 (en) * 2011-08-08 2014-06-19 Instytut Biologii Doswiadczalnej Im. M. Nenckiego Pan Method and a system for processing an image comprising dendritic spines
CN106373116A (en) * 2016-08-24 2017-02-01 中国科学院自动化研究所 Two-photon image-based synapse detection method


Non-Patent Citations (3)

Title
XUERONG XIAO ET AL.: "Automated dendritic spine detection using convolutional neural networks on maximum intensity projected microscopic volumes", 《JOURNAL OF NEUROSCIENCE METHODS》 *
ZHIYANG LIU ET AL.: "Towards Clinical Diagnosis: Automated Stroke Lesion Segmentation on Multimodal MR Image Using Convolutional Neural Network", 《COMPUTER VISION AND PATTERN RECOGNITION》 *
王水花 (WANG, Shuihua): "Research on several techniques for multi-scale brain images based on machine vision", 《中国博士学位论文全文数据库 (信息科技辑)》 (China Doctoral Dissertations Full-text Database, Information Science and Technology) *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN113421218A (en) * 2021-04-16 2021-09-21 深圳大学 Method for extracting branch point of vascular network
CN113421218B (en) * 2021-04-16 2024-02-23 深圳大学 Extraction method of vascular network branch point
CN113792745A (en) * 2021-09-17 2021-12-14 重庆大学 Method and system for extracting single-sided tree point cloud skeleton line
CN113792745B (en) * 2021-09-17 2023-10-20 重庆大学 Single-sided tree point cloud skeleton line extraction method and system
CN116563616A (en) * 2023-04-23 2023-08-08 北京大学 Image recognition method, computer equipment and medium based on neural network
CN116563616B (en) * 2023-04-23 2024-01-30 北京大学 Image recognition method, computer equipment and medium based on neural network

Also Published As

Publication number Publication date
CN111091530B (en) 2022-06-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant