CN116416253B - Neuron extraction method and device based on bright-dark channel prior depth-of-field estimation - Google Patents
Neuron extraction method and device based on bright-dark channel prior depth-of-field estimation
- Publication number
- CN116416253B; application CN202310689836.6A
- Authority
- CN
- China
- Prior art keywords
- neuron
- time sequence
- spatial position
- action potential
- sequence change
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0012—Biomedical image inspection
- G06T7/60—Analysis of geometric attributes
- G06T7/70—Determining position or orientation of objects or cameras
- G06V10/766—Image or video recognition using pattern recognition or machine learning, using regression, e.g. by projecting features on hyperplanes
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
- G06T2207/10016—Video; Image sequence
- G06T2207/30004—Biomedical image processing
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The invention discloses a neuron extraction method and device based on bright-dark channel prior depth-of-field estimation. The method comprises: inputting original image video data; calculating the bright channel prior and dark channel prior of the original image video data, and estimating the transmission rate corresponding to its depth of field from the two priors; performing a pixel-wise point division of the original image video data by the depth-of-field transmission rate to construct a constrained non-negative matrix factorization neuron extraction framework; and iteratively solving the relevant parameters of the neurons (spatial position and shape, action potential and time sequence change, and background) until the iteration ends, finally outputting the extracted neuron information: spatial position and shape, action potential, and time sequence change. The invention can rapidly estimate and exploit the depth of field of the original data, effectively remove scattering from it, and accurately extract feature information such as the spatial position and shape, action potential, and time sequence change of the neurons.
Description
Technical Field
The invention relates to the technical field of neuron extraction, and in particular to a neuron extraction method and device based on bright-dark channel prior depth-of-field estimation.
Background
Neuron extraction is a critical processing step in brain science, biomedicine, and related research: it extracts feature information such as the spatial position and shape, action potential, and time sequence change of neuron cells from fluorescent calcium signal image video data acquired by optical microscopic imaging equipment, for subsequent observation, analysis, and understanding of the life mechanisms and change rules of large-scale or mesoscale neuron populations. In recent years, neuron extraction has been widely applied in fields such as brain science, life science, neuroscience, and cell biology.
For the neuron extraction problem on calcium signal image video data, existing methods include constrained non-negative matrix factorization and its extensions. However, when processing the original data, these methods directly convert the three-dimensional original video data into a two-dimensional image matrix, neither considering nor effectively exploiting the depth information of the original data; nor do they address the scattering in calcium imaging data produced by mechanisms such as optical microscopic nonlinear imaging and light scattering. How to provide a neuron extraction method that exploits depth of field and handles scattering effectively is therefore a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the invention provides a neuron extraction method and device based on bright-dark channel prior depth-of-field estimation. The technical scheme is as follows:
In one aspect, a neuron extraction method based on bright-dark channel prior depth-of-field estimation is provided; the method is implemented by an electronic device and comprises:
S1, inputting original image video data;
S2, calculating the bright channel prior and dark channel prior of the original image video data, and estimating the transmission rate corresponding to its depth of field from the two priors;
S3, performing a pixel-wise point division of the original image video data by the depth-of-field transmission rate, and constructing a constrained non-negative matrix factorization neuron extraction framework;
S4, iteratively solving the relevant parameters of the neurons, the relevant parameters comprising: spatial position and shape, action potential and time sequence change, and background; after the iteration ends, the extracted neuron information is finally output, comprising: spatial position and shape, action potential, and time sequence change.
Optionally, the S2 specifically includes:
for each frame image in the original image video data, calculating the corresponding bright channel prior $I_{bcp}$:
$I_{bcp}(x) = \max_{y \in \Omega(x)} \max_{c \in \{1, \dots, r\}} I_i^c(y)$ (1)
for each frame image in the original image video data, calculating the corresponding dark channel prior $I_{dcp}$:
$I_{dcp}(x) = \min_{y \in \Omega(x)} \min_{c \in \{1, \dots, r\}} I_i^c(y)$ (2)
where $I_i$ is the input image and contains $r$ channels, $x$ is the current pixel coordinate of $I_i$, $\Omega(x)$ is the neighboring window area centered on the pixel at coordinate $x$, and $y$ is the coordinate of each pixel point in that area;
on the basis of the bright and dark channel priors of each frame image, estimating the transmission rate $t$ corresponding to the depth of field of that frame:
$t = 1 - w \left( I_{dcp} \oslash I_{bcp} \right)$ (3)
where $\oslash$ denotes pixel-wise (point) division, and the weight coefficient $w$ is added to avoid overestimating the transmission rate.
Optionally, the step S3 specifically includes:
performing the point division of the original image video data by the transmission rate according to the following formula:
$V_i = I_i \oslash (t_i + \varepsilon), \quad i = 1, \dots, N$ (4)
where $\varepsilon$ is a small constant ($10^{-6}$) whose purpose is to prevent $t$ from approaching 0, so that the fraction does not approach infinity, and $\oslash$ denotes pixel-wise (point) division;
expressing each result of the point division as a column vector and assembling all results into a matrix $V = [V_1, \dots, V_N]$, which serves as the initial input data of the constrained non-negative matrix factorization neuron extraction framework, where $V_i$ is the $i$-th column vector.
Optionally, the S4 specifically includes:
S41, fixing the background and the action potentials and time sequence changes of the neurons among the relevant parameters, and solving for the spatial positions and shapes of the neurons using the fast hierarchical alternating least squares algorithm;
S42, fixing the background among the relevant parameters and, based on the spatial positions and shapes of the neurons obtained in step S41, inferring the action potentials and time sequence changes of the neurons using the online active set algorithm;
S43, fixing the spatial positions and shapes of the neurons obtained in step S41 and the action potentials and time sequence changes obtained in step S42, and solving for the background information using singular value decomposition;
S44, repeating steps S41-S43 until the first-layer iteration ends, obtaining the current neuron feature information;
S45, taking the current neuron feature information as the input data of the second-layer iteration and repeating steps S2-S45 until the second-layer iteration ends, finally outputting the extracted neuron information, comprising: spatial position and shape, action potential, and time sequence change.
Optionally, the step S41 specifically includes:
fixing the current background and the current action potentials and time sequence changes of all neurons, the objective function for the spatial positions and shapes of all neurons is expressed as:
$\min_{P \ge 0} \left\lVert V - P\hat{C} - \hat{D}\hat{F} \right\rVert_F^2$ (5)
where $V$ is the input data, $P$ is the spatial position and shape of all extracted neurons, $C$ is the action potential and time sequence change of all extracted neurons, $P$ has sparsity and spatial locality, $D$ and $F$ are the spatial position and the time sequence change of the background respectively, and the hat symbol denotes the current estimated value of a parameter;
solving the objective function for $P$ by the fast hierarchical alternating least squares algorithm, performing the following operation for each neuron in $C$:
$P_{\cdot k} \leftarrow \max\!\left( 0, \; P_{\cdot k} + \dfrac{\big( (V - \hat{D}\hat{F})\,\hat{C}^{T} \big)_{\cdot k} - \big( P\,\hat{C}\hat{C}^{T} \big)_{\cdot k}}{\big( \hat{C}\hat{C}^{T} \big)_{kk}} \right)$ (6)
where the superscript $T$ denotes the transpose operation, $K$ is the number of all neurons in $C$, and the operation is alternated over $k = 1, \dots, K$ until the iteration ends.
Optionally, the step S42 specifically includes:
fixing the current background and the current spatial positions and shapes of all neurons obtained in step S41, the objective function constructed for inferring the action potentials $C$ and time sequence changes $S$ of all neurons is:
$\min_{C, S \ge 0} \left\lVert V - \hat{P}C - \hat{D}\hat{F} \right\rVert_F^2 + \sum_{k=1}^{K} \lambda_k \lVert s_k \rVert_1 \quad \text{s.t.} \quad s_k = G^{(k)} c_k$ (7)
where $s_k$ is the train of impulses generated in the temporal dynamic activity of neuron $k$, $s_k$ has sparsity, the temporal dynamic activity of each neuron is modeled by a second-order autoregressive process, and $G^{(k)}$ is the second-order autoregressive coefficient matrix;
solving the objective function for the action potentials $C$ and time sequence changes $S$ with the online active set algorithm, iterating the following operation for each neuron:
$v_k \leftarrow \dfrac{q_k v_k + \gamma^{l_k} q_{k+1} v_{k+1}}{q_k + \gamma^{2 l_k} q_{k+1}}, \qquad q_k \leftarrow q_k + \gamma^{2 l_k} q_{k+1}, \qquad l_k \leftarrow l_k + l_{k+1}$ (8)
where $v_k$ and $l_k$ are the pooling variable and the pooling length respectively, $\gamma$ is the autocorrelation function coefficient of $G$, and $q_k$ is the weighting coefficient with initial value set to 1; the neuron action potentials $C$ and time sequence changes $S$ are inferred by iterating this operation.
Optionally, the step S43 specifically includes:
fixing the spatial positions and shapes of the neurons obtained in step S41 and the current values of the action potentials and time sequence changes obtained in step S42, the objective function for the spatial position $D$ and the time sequence change $F$ of the background is expressed as:
$\min_{D, F} \left\lVert V - \hat{P}\hat{C} - DF \right\rVert_F^2$ (9)
solving the spatial position $D$ and the time sequence change $F$ of the background by singular value decomposition, with the formula:
$(D, F) = \mathrm{SVD}\!\left( V - \hat{P}\hat{C} \right)$ (10)
where $\mathrm{SVD}$ denotes the truncated singular value decomposition operation, the leading left singular component giving $D$ and the leading right singular component giving $F$.
In another aspect, a neuron extraction device based on bright-dark channel prior depth-of-field estimation is provided, the device comprising:
an input module, used for inputting original image video data;
a calculation module, used for calculating the bright channel prior and dark channel prior of the original image video data, and for estimating the transmission rate corresponding to its depth of field from the two priors;
a construction module, used for performing a pixel-wise point division of the original image video data by the depth-of-field transmission rate, and for constructing a constrained non-negative matrix factorization neuron extraction framework;
an iteration module, used for iteratively solving the relevant parameters of the neurons, the relevant parameters comprising: spatial position and shape, action potential and time sequence change, and background; after the iteration ends, the extracted neuron information is finally output, comprising: spatial position and shape, action potential, and time sequence change.
In another aspect, an electronic device is provided, comprising a processor and a memory in which at least one instruction is stored, the instruction being loaded and executed by the processor to implement the above-described neuron extraction method based on bright-dark channel prior depth-of-field estimation.
In another aspect, a computer readable storage medium is provided, in which at least one instruction is stored, the instruction being loaded and executed by a processor to implement the above-described neuron extraction method based on bright-dark channel prior depth-of-field estimation.
The technical scheme provided by the embodiment of the invention has at least the following beneficial effects:
Compared with existing methods such as constrained non-negative matrix factorization and its extensions, the invention can rapidly estimate and exploit the depth of field of the original video image data, effectively remove its scattering, and accurately extract feature information such as the spatial position and shape, action potential, and time sequence change of the neurons.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of the neuron extraction method based on bright-dark channel prior depth-of-field estimation according to an embodiment of the present invention;
FIG. 2 is a flowchart of the bright-dark channel prior depth-of-field estimation according to an embodiment of the present invention;
FIG. 3 is a detailed flowchart of the neuron extraction method based on bright-dark channel prior depth-of-field estimation according to an embodiment of the present invention;
Fig. 4 is a block diagram of the neuron extraction device based on bright-dark channel prior depth-of-field estimation according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages to be solved more apparent, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides a neuron extraction method based on bright-dark channel prior depth-of-field estimation, which can be implemented by an electronic device; the electronic device may be a terminal or a server. As shown in the flowchart of fig. 1, the processing flow of the method may include the following steps:
s1, inputting original image video data;
inputting original image video data as the initial data for depth-of-field estimation;
converting the input video data into single-frame image data $I_1, \dots, I_N$, where $N$ is the total number of frames of the input data.
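As a minimal sketch of this step, a loaded recording can be split into single-frame images with NumPy; the array sizes and random values below are hypothetical stand-ins for a real calcium-imaging recording:

```python
import numpy as np

# Hypothetical stand-in for a loaded calcium-imaging recording:
# N frames of H x W fluorescence images.
N, H, W = 5, 8, 8
rng = np.random.default_rng(0)
video = rng.random((N, H, W)).astype(np.float32)

# Convert the video into single-frame images I_1 ... I_N, the initial
# data for the depth-of-field estimation step.
frames = [video[i] for i in range(N)]
```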
S2, calculating the bright channel prior and dark channel prior of the original image video data, and estimating the transmission rate corresponding to its depth of field from the two priors;
optionally, as shown in fig. 2, the S2 specifically includes:
for each frame image in the original image video data, calculating the corresponding bright channel prior $I_{bcp}$:
$I_{bcp}(x) = \max_{y \in \Omega(x)} \max_{c \in \{1, \dots, r\}} I_i^c(y)$ (1)
for each frame image in the original image video data, calculating the corresponding dark channel prior $I_{dcp}$:
$I_{dcp}(x) = \min_{y \in \Omega(x)} \min_{c \in \{1, \dots, r\}} I_i^c(y)$ (2)
where $I_i$ is the input image and contains $r$ channels, $x$ is the current pixel coordinate of $I_i$, $\Omega(x)$ is the neighboring window area centered on the pixel at coordinate $x$, and $y$ is the coordinate of each pixel point in that area;
on the basis of the bright and dark channel priors of each frame image, estimating the transmission rate $t$ corresponding to the depth of field of that frame:
$t = 1 - w \left( I_{dcp} \oslash I_{bcp} \right)$ (3)
where $\oslash$ denotes pixel-wise (point) division, and the weight coefficient $w$ is added to avoid overestimating the transmission rate.
Optionally, $w$ is set to 0.95, a value chosen empirically.
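The bright/dark channel priors of equations (1)-(2) and the transmission estimate of equation (3) can be sketched as follows; the window radius, the brute-force window scan, and the small denominator guard are choices of this illustration rather than details taken from the patent:

```python
import numpy as np

def channel_priors(img, win=1):
    """Bright and dark channel priors of a multi-channel frame img (H, W, r).

    For each pixel x, take the max (resp. min) over all r channels and over
    the neighboring window Omega(x) of radius `win` centered on x.
    """
    H, W, _ = img.shape
    ch_max, ch_min = img.max(axis=2), img.min(axis=2)
    bright = np.empty((H, W))
    dark = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - win), min(H, i + win + 1)
            j0, j1 = max(0, j - win), min(W, j + win + 1)
            bright[i, j] = ch_max[i0:i1, j0:j1].max()
            dark[i, j] = ch_min[i0:i1, j0:j1].min()
    return bright, dark

def transmission(bright, dark, w=0.95, eps=1e-6):
    """Pixel-wise transmission t = 1 - w * (dark / bright); w < 1 keeps the
    transmission from being overestimated (form of equation (3), assumed)."""
    return 1.0 - w * dark / (bright + eps)
```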
S3, performing a pixel-wise point division of the original image video data by the depth-of-field transmission rate, and constructing a constrained non-negative matrix factorization neuron extraction framework;
optionally, as shown in fig. 3, the step S3 specifically includes:
performing the point division of the original image video data by the transmission rate according to the following formula:
$V_i = I_i \oslash (t_i + \varepsilon), \quad i = 1, \dots, N$ (4)
where $\varepsilon$ is a small constant ($10^{-6}$) whose purpose is to prevent $t$ from approaching 0, so that the fraction does not approach infinity, and $\oslash$ denotes pixel-wise (point) division;
expressing each result of the point division as a column vector and assembling all results into a matrix $V = [V_1, \dots, V_N]$, which serves as the initial input data of the constrained non-negative matrix factorization neuron extraction framework, where $V_i$ is the $i$-th column vector.
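A sketch of the point division and matrix construction of equation (4); the frame and transmission values are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
N, H, W = 4, 6, 6
frames = rng.random((N, H, W)) + 0.1      # raw frames I_1 ... I_N (synthetic)
t = rng.random((N, H, W)) * 0.9 + 0.05    # per-frame transmission maps
eps = 1e-6                                # keeps the denominator away from 0

# Pixel-wise point division of each frame by its transmission map, then
# flatten each corrected frame into a column vector V_i and stack the
# columns into the matrix V (H*W x N) fed to the constrained NMF framework.
V = np.stack([(frames[i] / (t[i] + eps)).ravel() for i in range(N)], axis=1)
```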
S4, iteratively solving the relevant parameters of the neurons, the relevant parameters comprising: spatial position and shape, action potential and time sequence change, and background; after the iteration ends, the extracted neuron information is finally output, comprising: spatial position and shape, action potential, and time sequence change.
Optionally, the S4 specifically includes:
S41, fixing the background and the action potentials and time sequence changes of the neurons among the relevant parameters, and solving for the spatial positions and shapes of the neurons using the fast hierarchical alternating least squares algorithm;
optionally, the step S41 specifically includes:
fixing the current background and the current action potentials and time sequence changes of all neurons, the objective function for the spatial positions and shapes of all neurons is expressed as:
$\min_{P \ge 0} \left\lVert V - P\hat{C} - \hat{D}\hat{F} \right\rVert_F^2$ (5)
where $V$ is the input data, $P$ is the spatial position and shape of all extracted neurons, $C$ is the action potential and time sequence change of all extracted neurons, $P$ has sparsity and spatial locality, $D$ and $F$ are the spatial position and the time sequence change of the background respectively, and the hat symbol denotes the current estimated value of a parameter;
solving the objective function for $P$ by the fast hierarchical alternating least squares algorithm, performing the following operation for each neuron in $C$:
$P_{\cdot k} \leftarrow \max\!\left( 0, \; P_{\cdot k} + \dfrac{\big( (V - \hat{D}\hat{F})\,\hat{C}^{T} \big)_{\cdot k} - \big( P\,\hat{C}\hat{C}^{T} \big)_{\cdot k}}{\big( \hat{C}\hat{C}^{T} \big)_{kk}} \right)$ (6)
where the superscript $T$ denotes the transpose operation, $K$ is the number of all neurons in $C$, and the operation is alternated over $k = 1, \dots, K$ until the iteration ends.
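The hierarchical alternating least squares update of equation (6) can be sketched as below; treating the background as a precomputed matrix B = D F and the inner iteration count are assumptions of this illustration:

```python
import numpy as np

def hals_update_P(V, P, C, B, n_iter=10):
    """Fast hierarchical alternating least squares on the spatial footprints P,
    holding the traces C and the background B = D @ F fixed.

    Approximately solves min_{P >= 0} ||V - P C - B||_F^2 column by column.
    """
    R = V - B                 # residual with the background removed
    VC = R @ C.T              # precompute R C^T  (pixels x K)
    CC = C @ C.T              # precompute C C^T  (K x K)
    K = C.shape[0]
    for _ in range(n_iter):
        for k in range(K):    # update each neuron's footprint in turn
            # remove neuron k's own contribution from P @ CC[:, k]
            num = VC[:, k] - P @ CC[:, k] + P[:, k] * CC[k, k]
            P[:, k] = np.maximum(0.0, num / (CC[k, k] + 1e-12))
    return P
```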
S42, fixing the background among the relevant parameters and, based on the spatial positions and shapes of the neurons obtained in step S41, inferring the action potentials and time sequence changes of the neurons using the online active set algorithm;
optionally, the step S42 specifically includes:
fixing the current background and the current spatial positions and shapes of all neurons obtained in step S41, the objective function constructed for inferring the action potentials $C$ and time sequence changes $S$ of all neurons is:
$\min_{C, S \ge 0} \left\lVert V - \hat{P}C - \hat{D}\hat{F} \right\rVert_F^2 + \sum_{k=1}^{K} \lambda_k \lVert s_k \rVert_1 \quad \text{s.t.} \quad s_k = G^{(k)} c_k$ (7)
where $s_k$ is the train of impulses generated in the temporal dynamic activity of neuron $k$, $s_k$ has sparsity, the temporal dynamic activity of each neuron is modeled by a second-order autoregressive process, and $G^{(k)}$ is the second-order autoregressive coefficient matrix;
solving the objective function for the action potentials $C$ and time sequence changes $S$ with the online active set algorithm, iterating the following operation for each neuron:
$v_k \leftarrow \dfrac{q_k v_k + \gamma^{l_k} q_{k+1} v_{k+1}}{q_k + \gamma^{2 l_k} q_{k+1}}, \qquad q_k \leftarrow q_k + \gamma^{2 l_k} q_{k+1}, \qquad l_k \leftarrow l_k + l_{k+1}$ (8)
where $v_k$ and $l_k$ are the pooling variable and the pooling length respectively, $\gamma$ is the autocorrelation function coefficient of $G$, and $q_k$ is the weighting coefficient with initial value set to 1; the neuron action potentials $C$ and time sequence changes $S$ are inferred by iterating this operation.
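The second-order autoregressive model underlying equation (7) can be illustrated by its forward recursion. The coefficient values g1 = 1.7 and g2 = -0.71 are example values chosen to give a stable rise-and-decay kernel, not values from the patent, and the online active-set solver that inverts this model is not reproduced here:

```python
import numpy as np

def ar2_calcium(s, g1, g2):
    """Second-order autoregressive calcium model used by the deconvolution:
    c[t] = g1 * c[t-1] + g2 * c[t-2] + s[t], with s the impulse (spike) train.

    The deconvolution step inverts this model under c >= 0 and sparse s;
    only the forward recursion is shown here.
    """
    c = np.zeros_like(s, dtype=float)
    for t in range(len(s)):
        c[t] = s[t]
        if t >= 1:
            c[t] += g1 * c[t - 1]
        if t >= 2:
            c[t] += g2 * c[t - 2]
    return c

# A lone impulse produces the AR(2) impulse response (rise then decay).
s = np.zeros(50)
s[5] = 1.0
trace = ar2_calcium(s, g1=1.7, g2=-0.71)
```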
S43, fixing the spatial positions and shapes of the neurons obtained in step S41 and the action potentials and time sequence changes obtained in step S42, and solving for the background information using singular value decomposition;
optionally, the step S43 specifically includes:
fixing the spatial positions and shapes of the neurons obtained in step S41 and the current values of the action potentials and time sequence changes obtained in step S42, the objective function for the spatial position $D$ and the time sequence change $F$ of the background is expressed as:
$\min_{D, F} \left\lVert V - \hat{P}\hat{C} - DF \right\rVert_F^2$ (9)
solving the spatial position $D$ and the time sequence change $F$ of the background by singular value decomposition, with the formula:
$(D, F) = \mathrm{SVD}\!\left( V - \hat{P}\hat{C} \right)$ (10)
where $\mathrm{SVD}$ denotes the truncated singular value decomposition operation, the leading left singular component giving $D$ and the leading right singular component giving $F$.
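A sketch of the background update of equations (9)-(10) via truncated SVD; the rank-1 background model is an assumption of this illustration:

```python
import numpy as np

def background_svd(V, P, C, rank=1):
    """Estimate the background spatial map D and temporal course F from the
    residual V - P C via a truncated SVD (rank 1 here, treating the
    background as low-rank; the rank is an assumption of this sketch)."""
    R = V - P @ C
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    D = U[:, :rank] * s[:rank]      # spatial component(s), scaled
    F = Vt[:rank, :]                # temporal component(s)
    return D, F
```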
S44, repeating steps S41-S43 until the first-layer iteration ends, obtaining the current neuron feature information;
S45, taking the current neuron feature information as the input data of the second-layer iteration and repeating steps S2-S45 until the second-layer iteration ends, finally outputting the extracted neuron information, comprising: spatial position and shape $P$, action potential $C$, and time sequence change $S$.
To extract accurate neuron feature information, the embodiment of the invention sets two layers of iteration, with 2 iterations in each layer.
As shown in fig. 4, an embodiment of the present invention further provides a neuron extraction device based on bright-dark channel prior depth-of-field estimation, the device comprising:
an input module 410 for inputting original image video data;
a calculation module 420 configured to calculate the bright channel prior and dark channel prior of the original image video data, and to estimate the transmission rate corresponding to its depth of field from the two priors;
a construction module 430 configured to perform a pixel-wise point division of the original image video data by the depth-of-field transmission rate, and to construct a constrained non-negative matrix factorization neuron extraction framework;
an iteration module 440 configured to iteratively solve the relevant parameters of the neurons, the relevant parameters comprising: spatial position and shape, action potential and time sequence change, and background; after the iteration ends, the extracted neuron information is finally output, comprising: spatial position and shape, action potential, and time sequence change.
The functional structure of the neuron extraction device based on the prior depth of field estimation of the bright and dark channels provided by the embodiment of the invention corresponds to the neuron extraction method based on the prior depth of field estimation of the bright and dark channels provided by the embodiment of the invention, and is not repeated here.
Fig. 5 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present invention. The electronic device 500 may differ considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 501 and one or more memories 502, where at least one instruction is stored in the memories 502 and is loaded and executed by the processors 501 to implement the above-mentioned neuron extraction method based on bright-dark channel prior depth-of-field estimation.
In an exemplary embodiment, a computer readable storage medium is also provided, such as a memory comprising instructions executable by a processor in a terminal to perform the above-described neuron extraction method based on bright-dark channel prior depth-of-field estimation. For example, the computer readable storage medium may be a ROM, Random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.
Claims (5)
1. A neuron extraction method based on bright-dark channel prior depth-of-field estimation, the method comprising:
s1, inputting original image video data;
s2, calculating the bright channel prior and dark channel prior of the original image video data, and estimating the transmission rate corresponding to its depth of field from the two priors;
s3, performing a pixel-wise point division of the original image video data by the depth-of-field transmission rate, and constructing a constrained non-negative matrix factorization neuron extraction framework;
s4, iteratively solving the relevant parameters of the neurons, the relevant parameters comprising: spatial position and shape, action potential and time sequence change, and background; after the iteration ends, the extracted neuron information is finally output, comprising: spatial position and shape, action potential, and time sequence change;
the step S3 specifically comprises the following steps:
dividing the original image video data and the transmission rate by the following formula:
,i = 1, ..., N(4)
wherein ,is a smaller constant 10 -6 The purpose is to prevent t from approaching 0 so as to avoid the whole fraction of the equation approaching infinity, and the division operation is pixel point division operation;
the result after the dot division operation is expressed as a column vector form, and all the results form a matrixInitial input data of a neuron extraction framework as the constrained non-negative matrix factorization, where V i Is the ith column vector;
the step S4 specifically comprises the following steps:
s41, fixing the background in the related parameters, the action potential and time sequence change of the neuron, and solving to obtain the spatial position and the shape of the neuron by using a rapid layered alternating least square algorithm;
s42, fixing the background in the related parameters, and deducing the action potential and time sequence change of the neuron by using an online active set algorithm according to the spatial position and the shape of the neuron obtained in the step S41;
s43, fixing the spatial position and the shape of the neuron obtained in the step S41 and the action potential and time sequence change of the neuron obtained in the step S42, and solving the background information by using a singular value decomposition method;
s44, repeating the steps S41-S43 until the first layer iteration is finished, and obtaining the current neuron characteristic information;
s45, taking the characteristic information of the current neuron as input data of a second layer iteration, and repeating the steps S2-S45 until the second layer iteration is finished, and finally outputting the extraction information of the neuron, wherein the extraction information comprises: spatial position and shape size, action potential and time sequence variation;
the step S41 specifically includes:
fixing the current background and the current action potential and time sequence change of all neurons, the objective function for solving the spatial position and shape of all neurons is expressed as:
min_{P ≥ 0} ||V − P·Ĉ − D̂·F̂||_F^2 (5)
wherein V is the input data, P is the spatial position and shape of all extracted neurons, C is the action potential and time sequence change of all extracted neurons, P has sparsity and spatial locality, D and F are respectively the spatial position and time sequence change of the background, and the hat symbol (^) denotes the current estimate of a relevant parameter;
the objective function of P is solved by the fast hierarchical alternating least squares algorithm, performing the following operation for each neuron in C:
P_k ← max(0, P_k + (U_k − P·S_k) / S_kk) (6)
wherein U = V·Ĉ^T and S = Ĉ·Ĉ^T, the superscript T denotes the transpose operation, K is the number of all neurons in C, and k = 1, ..., K; the above operation is iterated until the iteration ends;
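The hierarchical update can be sketched as follows. This is a generic fast HALS step for the non-negative spatial factor P with the traces C and the background held fixed; the symbol names follow the claim, but the exact normalisation and iteration count are assumptions:

```python
import numpy as np

def hals_update_P(V, P, C, D, F, n_iter=5):
    """Fast hierarchical alternating least squares (HALS) sketch.

    Updates the spatial components P (pixels x K, non-negative) while
    the temporal traces C (K x T) and the background D @ F are fixed,
    minimising ||V - D@F - P@C||_F^2 column by column.
    """
    R = V - D @ F                 # remove the fixed background
    U = R @ C.T                   # pixels x K
    S = C @ C.T                   # K x K Gram matrix of the traces
    K = P.shape[1]
    for _ in range(n_iter):
        for k in range(K):
            if S[k, k] <= 0:
                continue
            # additive HALS step followed by projection onto >= 0
            P[:, k] = np.maximum(0.0, P[:, k] + (U[:, k] - P @ S[:, k]) / S[k, k])
    return P
```

Each pass touches one neuron at a time, which is what makes the scheme cheap compared with solving the full non-negative least squares problem jointly.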
the step S42 specifically includes:
fixing the current background and the current spatial position and shape of all neurons obtained in step S41, the objective function for deducing the action potential C and the time sequence change S of all neurons is constructed as:
min_{C,S} ||V − P̂·C − D̂·F̂||_F^2 + λ Σ_k ||s_k||_1, subject to s_k = G^(k)·c_k ≥ 0 (7)
wherein C = [c_1; ...; c_K] and S = [s_1; ...; s_K], s_k represents the impulses generated in the temporal dynamic activity of the neuron, and s_k has sparsity; the temporal dynamic activity of each neuron is modeled using a second-order autoregressive process, and G^(k) holds the second-order autoregressive coefficients;
the objective function of the action potential C and the time sequence change S of the neurons is solved using the online active set algorithm, performing the following iterative operation for each neuron:
v_i ← (w_i·v_i + γ^{l_i}·w_{i+1}·v_{i+1}) / (w_i + γ^{2l_i}·w_{i+1}), w_i ← w_i + γ^{2l_i}·w_{i+1}, l_i ← l_i + l_{i+1} (8)
wherein v_i and l_i are respectively the pooling variable and the pooling length, w_i is the pooling weight, γ is the autocorrelation function coefficient of G, and q_k is initialized from the weighting coefficient; the action potential C and the time sequence change S of the neurons are deduced by iterating the above operation;
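The online active set (OASIS-style) pooling idea can be sketched as below. This is a minimal AR(1) version written for brevity; the claim models each trace with a second-order (AR(2)) process, so this is an illustration of the pool-merging logic under stated assumptions, not the claimed algorithm:

```python
def oasis_ar1(y, g, lam=0.0):
    """Minimal online active set deconvolution sketch (AR(1) model).

    Approximately solves min ||c - y||^2 + lam * ||s||_1 subject to
    s_t = c_t - g * c_{t-1} >= 0.  Returns the denoised trace c and
    the inferred activity s.
    """
    # each pool stores [value v, weight w, start time t, length l]
    pools = []
    for t, yt in enumerate(y):
        pools.append([yt - lam * (1 - g), 1.0, t, 1])
        # merge backwards while the non-negativity constraint is violated
        while len(pools) > 1 and pools[-1][0] < g ** pools[-2][3] * pools[-2][0]:
            v1, w1, t1, l1 = pools[-2]
            v2, w2, t2, l2 = pools.pop()
            f = g ** l1
            w = w1 + f * f * w2
            v = (w1 * v1 + f * w2 * v2) / w
            pools[-1] = [v, w, t1, l1 + l2]
    # reconstruct the trace: each pool decays geometrically at rate g
    c = [0.0] * len(y)
    for v, w, t, l in pools:
        for k in range(l):
            c[t + k] = max(v, 0.0) * g ** k
    s = [c[0]] + [c[t] - g * c[t - 1] for t in range(1, len(y))]
    return c, s
```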
the step S43 specifically includes:
fixing the spatial position and shape of the neurons obtained in step S41 and the current values of the action potential and time sequence change obtained in step S42, the objective function for solving the spatial position D and the time sequence change F of the background is expressed as:
min_{D,F} ||R − D·F||_F^2 (9)
wherein R = V − P̂·Ĉ;
the spatial position D and the time sequence change F of the background are solved using the singular value decomposition method, with the formula:
(D, F) ← SVD(V − P̂·Ĉ) (10)
wherein SVD represents singular value decomposition operations.
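A minimal sketch of this background step: with P and C fixed, the neuron-free residual is approximated by a truncated SVD. The rank-1 truncation and the function name are assumptions, not taken from the claim:

```python
import numpy as np

def background_svd(V, P, C, rank=1):
    """Estimate the background D @ F from the neuron-free residual.

    With spatial components P and temporal traces C fixed, the residual
    V - P @ C is approximated by a low-rank product D @ F via a
    truncated singular value decomposition.
    """
    R = V - P @ C
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    D = U[:, :rank] * s[:rank]        # spatial background (pixels x rank)
    F = Vt[:rank, :]                  # temporal background (rank x T)
    return D, F
```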
2. The method according to claim 1, wherein S2 specifically comprises:
for each frame of image in the original image video data, calculating a corresponding bright channel prior I bcp :
(1)
For each frame of image in the original image video data, calculating a corresponding dark channel prior I dcp :
(2)
wherein ,Ii Is an input image and contains r channels,xFor the current pixel coordinate of I, < >>Is the coordinatesxA neighboring window area centered on the pixel point of (c),yis the coordinates of each pixel point in the region;
on the basis of the bright and dark channel priors of each frame of image, estimating the transmission rate t corresponding to the depth of field of the frame:
t = 1 − w · (I_dcp ⊘ I_bcp) (3)
wherein ⊘ denotes the pixel-wise division operation, and the weight coefficient w is added to avoid overestimation of the transmission rate.
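A rough sketch of the two priors and the transmission estimate. The window size, the weight value, and the exact form of formula (3) (assumed here as t = 1 − w·I_dcp ⊘ I_bcp) are assumptions; a plain sliding-window loop is used for clarity rather than speed:

```python
import numpy as np

def channel_priors(img, patch=7):
    """Bright and dark channel priors of one frame.

    img is (H, W, r); for each pixel the bright (dark) prior takes the
    max (min) over all r channels within a patch x patch neighbourhood.
    """
    h, w = img.shape[:2]
    half = patch // 2
    bright = np.empty((h, w)); dark = np.empty((h, w))
    cmax = img.max(axis=2); cmin = img.min(axis=2)   # per-pixel channel extrema
    for i in range(h):
        for j in range(w):
            bright[i, j] = cmax[max(0, i-half):i+half+1, max(0, j-half):j+half+1].max()
            dark[i, j] = cmin[max(0, i-half):i+half+1, max(0, j-half):j+half+1].min()
    return bright, dark

def transmission(bright, dark, w_coef=0.95, eps=1e-6):
    """Transmission estimate from the two priors (assumed form).

    w_coef keeps the transmission from being overestimated; eps guards
    the pixel-wise division against a zero bright channel.
    """
    return 1.0 - w_coef * dark / (bright + eps)
```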
3. A neuron extraction device based on a priori depth of field estimation of a bright and dark channel, the device comprising:
the input module is used for inputting original image video data;
the computing module is used for computing the bright-channel prior and the dark-channel prior of the original image video data, and estimating the transmission rate corresponding to the depth of field of the original image video data using the two priors;
the construction module is used for dividing the original image video data by the transmission rate corresponding to its depth of field, pixel by pixel, to construct a neuron extraction framework based on constrained non-negative matrix factorization;
the iteration module is used for iteratively solving the relevant parameters of the neurons, wherein the relevant parameters comprise the background, the spatial position and shape of the neurons, and the action potential and time sequence change of the neurons; after the iteration ends, finally outputting the extraction information of the neurons, wherein the extraction information comprises: spatial position and shape size, action potential and time sequence change;
the construction module is specifically configured to:
dividing the original image video data by the transmission rate according to the following formula:
V_i = I_i ⊘ (t + ε), i = 1, ..., N (4)
wherein ε is a small constant (10^-6) whose purpose is to prevent t from approaching 0 and thereby keep the fraction from approaching infinity, and ⊘ denotes the pixel-wise division operation;
the result of each pixel-wise division is expressed as a column vector, and all the results form a matrix V = [V_1, ..., V_N] serving as the initial input data of the neuron extraction framework of the constrained non-negative matrix factorization, where V_i is the i-th column vector;
the iteration module is specifically configured to:
S41, fixing the background and the action potential and time sequence change of the neurons among the relevant parameters, and solving for the spatial position and shape of the neurons using a fast hierarchical alternating least squares algorithm;
S42, fixing the background among the relevant parameters, and deducing the action potential and time sequence change of the neurons using an online active set algorithm, based on the spatial position and shape of the neurons obtained in step S41;
S43, fixing the spatial position and shape of the neurons obtained in step S41 and the action potential and time sequence change of the neurons obtained in step S42, and solving for the background information using a singular value decomposition method;
S44, repeating steps S41-S43 until the first-layer iteration ends, obtaining the current neuron characteristic information;
S45, taking the current neuron characteristic information as the input data of a second-layer iteration, and repeating steps S2-S45 until the second-layer iteration ends, finally outputting the extraction information of the neurons, wherein the extraction information comprises: spatial position and shape size, action potential and time sequence change;
the step S41 specifically includes:
fixing the current background and the current action potential and time sequence change of all neurons, the objective function for solving the spatial position and shape of all neurons is expressed as:
min_{P ≥ 0} ||V − P·Ĉ − D̂·F̂||_F^2 (5)
wherein V is the input data, P is the spatial position and shape of all extracted neurons, C is the action potential and time sequence change of all extracted neurons, P has sparsity and spatial locality, D and F are respectively the spatial position and time sequence change of the background, and the hat symbol (^) denotes the current estimate of a relevant parameter;
the objective function of P is solved by the fast hierarchical alternating least squares algorithm, performing the following operation for each neuron in C:
P_k ← max(0, P_k + (U_k − P·S_k) / S_kk) (6)
wherein U = V·Ĉ^T and S = Ĉ·Ĉ^T, the superscript T denotes the transpose operation, K is the number of all neurons in C, and k = 1, ..., K; the above operation is iterated until the iteration ends;
the step S42 specifically includes:
fixing the current background and the current spatial position and shape of all neurons obtained in step S41, the objective function for deducing the action potential C and the time sequence change S of all neurons is constructed as:
min_{C,S} ||V − P̂·C − D̂·F̂||_F^2 + λ Σ_k ||s_k||_1, subject to s_k = G^(k)·c_k ≥ 0 (7)
wherein C = [c_1; ...; c_K] and S = [s_1; ...; s_K], s_k represents the impulses generated in the temporal dynamic activity of the neuron, and s_k has sparsity; the temporal dynamic activity of each neuron is modeled using a second-order autoregressive process, and G^(k) holds the second-order autoregressive coefficients;
the objective function of the action potential C and the time sequence change S of the neurons is solved using the online active set algorithm, performing the following iterative operation for each neuron:
v_i ← (w_i·v_i + γ^{l_i}·w_{i+1}·v_{i+1}) / (w_i + γ^{2l_i}·w_{i+1}), w_i ← w_i + γ^{2l_i}·w_{i+1}, l_i ← l_i + l_{i+1} (8)
wherein v_i and l_i are respectively the pooling variable and the pooling length, w_i is the pooling weight, γ is the autocorrelation function coefficient of G, and q_k is initialized from the weighting coefficient; the action potential C and the time sequence change S of the neurons are deduced by iterating the above operation;
the step S43 specifically includes:
fixing the spatial position and shape of the neurons obtained in step S41 and the current values of the action potential and time sequence change obtained in step S42, the objective function for solving the spatial position D and the time sequence change F of the background is expressed as:
min_{D,F} ||R − D·F||_F^2 (9)
wherein R = V − P̂·Ĉ;
the spatial position D and the time sequence change F of the background are solved using the singular value decomposition method, with the formula:
(D, F) ← SVD(V − P̂·Ĉ) (10)
wherein SVD represents singular value decomposition operations.
4. An electronic device comprising a processor and a memory having at least one instruction stored therein, wherein the at least one instruction is loaded and executed by the processor to implement the neuron extraction method based on bright-dark channel prior depth of field estimation of claim 1 or 2.
5. A computer readable storage medium having stored therein at least one instruction, wherein the at least one instruction is loaded and executed by a processor to implement the neuron extraction method based on bright-dark channel prior depth of field estimation of claim 1 or 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310689836.6A CN116416253B (en) | 2023-06-12 | 2023-06-12 | Neuron extraction method and device based on bright-dark channel priori depth of field estimation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116416253A CN116416253A (en) | 2023-07-11 |
CN116416253B (en) | 2023-08-29
Family
ID=87059697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310689836.6A Active CN116416253B (en) | 2023-06-12 | 2023-06-12 | Neuron extraction method and device based on bright-dark channel priori depth of field estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116416253B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117474818B (en) * | 2023-12-27 | 2024-03-15 | 北京科技大学 | Underwater image enhancement method and device based on non-parameter Bayesian depth of field estimation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002358504A (en) * | 2001-05-31 | 2002-12-13 | Canon Inc | Signal processing circuit and pattern recognizing device |
CN110197114A (en) * | 2019-04-04 | 2019-09-03 | 华中科技大学 | A kind of automatic identifying method and device of full brain range single neuron aixs cylinder synaptic knob |
CN113920124A (en) * | 2021-06-22 | 2022-01-11 | 西安理工大学 | Brain neuron iterative segmentation method based on segmentation and error guidance |
WO2022134391A1 (en) * | 2020-12-25 | 2022-06-30 | 中国科学院西安光学精密机械研究所 | Fusion neuron model, neural network structure and training and inference methods therefor, storage medium, and device |
CN115035173A (en) * | 2022-06-08 | 2022-09-09 | 山东大学 | Monocular depth estimation method and system based on interframe correlation |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008061548A1 (en) * | 2006-11-22 | 2008-05-29 | MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. | Reconstruction and visualization of neuronal cell structures with bright-field mosaic microscopy |
EP2631872A4 (en) * | 2010-10-18 | 2015-10-28 | Univ Osaka | Feature extraction device, feature extraction method and program for same |
US11854281B2 (en) * | 2019-08-16 | 2023-12-26 | The Research Foundation For The State University Of New York | System, method, and computer-accessible medium for processing brain images and extracting neuronal structures |
Non-Patent Citations (1)
Title |
---|
Application of deep learning in power system forecasting; Miao Lei et al.; Chinese Journal of Engineering; Vol. 45, No. 4; pp. 663-672 *
Also Published As
Publication number | Publication date |
---|---|
CN116416253A (en) | 2023-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110969250B (en) | Neural network training method and device | |
Li et al. | Markov random field model-based edge-directed image interpolation | |
Bucak et al. | Incremental subspace learning via non-negative matrix factorization | |
Liu et al. | A chaotic quantum-behaved particle swarm optimization based on lateral inhibition for image matching | |
CN116416253B (en) | Neuron extraction method and device based on bright-dark channel priori depth of field estimation | |
WO2021054402A1 (en) | Estimation device, training device, estimation method, and training method | |
CN106408550A (en) | Improved self-adaptive multi-dictionary learning image super-resolution reconstruction method | |
Sun et al. | Learning local quality-aware structures of salient regions for stereoscopic images via deep neural networks | |
CN114283495A (en) | Human body posture estimation method based on binarization neural network | |
CN108108769B (en) | Data classification method and device and storage medium | |
CN113421276A (en) | Image processing method, device and storage medium | |
CN113689517A (en) | Image texture synthesis method and system of multi-scale channel attention network | |
Hartikainen et al. | Sparse spatio-temporal Gaussian processes with general likelihoods | |
EP3121788B1 (en) | Image feature estimation method and device | |
CN109447147B (en) | Image clustering method based on depth matrix decomposition of double-image sparsity | |
Duan et al. | Combining transformers with CNN for multi-focus image fusion | |
Deng et al. | Modeling shape dynamics during cell motility in microscopy videos | |
CN115861044B (en) | Complex cloud layer background simulation method, device and equipment based on generation countermeasure network | |
Graßhoff et al. | Scalable Gaussian process separation for kernels with a non-stationary phase | |
CN116705151A (en) | Dimension reduction method and system for space transcriptome data | |
CN116433662B (en) | Neuron extraction method and device based on sparse decomposition and depth of field estimation | |
CN115909016A (en) | System, method, electronic device, and medium for analyzing fMRI image based on GCN | |
CN115018856A (en) | Contrast learning and spatial coding based weak supervision medical image segmentation registration cooperation method | |
Han et al. | Blind image quality assessment with channel attention based deep residual network and extended LargeVis dimensionality reduction | |
Singh et al. | Haar Adaptive Taylor-ASSCA-DCNN: A Novel Fusion Model for Image Quality Enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||