CN113781442A - Rice seedling plant three-dimensional structure model reconstruction method - Google Patents
- Publication number: CN113781442A (application CN202111066567.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection
- G06F18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/2415 Classification techniques based on parametric or probabilistic models
- G06N3/045 Neural network architectures; combinations of networks
- G06N3/08 Neural network learning methods
- G06T5/70 Denoising; smoothing
- G06T7/10 Segmentation; edge detection
- G06T2207/10056 Microscopic image; G06T2207/10061 Microscopic image from scanning electron microscope
- G06T2207/20021 Dividing image into blocks, subimages or windows
- G06T2207/20081 Training; learning
- G06T2207/20084 Artificial neural networks [ANN]
- G06T2207/30188 Vegetation; agriculture
Abstract
The invention discloses a rice seedling plant three-dimensional structure model reconstruction method, which comprises a scanning device for collecting two-dimensional microscopic images and a computer apparatus for obtaining a three-dimensional microstructure model. The computer apparatus comprises an image cropping unit, an image denoising unit, an image reconstruction unit and an image data processing unit. The image cropping unit crops the obtained two-dimensional micro-CT image to obtain a target center image Z; the image denoising unit removes noise from the target center image to obtain a corresponding clean image Y; the image reconstruction unit reconstructs a three-dimensional microstructure model from the clean image by adopting a deep convolutional generative adversarial network; and the image data processing unit processes the reconstruction data to generate a three-dimensional microstructure model of the target rice seedling plant. The three-dimensional microstructure model of the rice seedling plant obtained by the invention is closer to the real entity, and its internal structure is convenient to observe.
Description
Technical Field
The invention relates to the field of plant three-dimensional model construction, in particular to a rice seedling plant three-dimensional structure model reconstruction method.
Background
Rice is one of the major crops in the world. The rice sowing area in China accounts for about one quarter of the national grain-crop area, and rice yield likewise accounts for a large proportion of grain output. As one of the important grain crops in China, rice is affected by factors such as climate and environment, so its planting area and yield are unevenly distributed across regions. Rice is planted mainly by seedling transplanting or direct seeding; most rice in China is transplanted, but mechanical damage, pest damage and the like occur during transplanting. Therefore, three-dimensional microscopic reconstruction of the rice seedling plant can provide an accurate physical structure model for research on the mechanical-damage mechanism, self-healing mechanism and substance metabolism of rice seedlings, and is of great significance for improving the survival rate, stress resistance, yield and quality of rice seedlings.
Classical denoising approaches include deep-learning-based image denoising, signal-filtering algorithms and denoising based on singular value decomposition. With the development of compressive sensing theory, denoising based on sparse representation and constrained regularization has become the latest direction in image denoising; among these, the BM3D algorithm and sparse-representation denoising algorithms perform well, but most such algorithms have high complexity and denoise high-intensity noise images poorly. The invention therefore proposes to combine the BM3D algorithm with a sparse-residual algorithm to achieve efficient image noise removal.
Disclosure of Invention
The invention aims to provide a rice seedling plant three-dimensional structure model reconstruction method, which solves the problems that the prior art lacks an accurate physical model of the rice seedling plant microstructure and cannot acquire data on the seedling mechanical-damage mechanism, self-healing mechanism, material metabolism and the like in real time.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the method for reconstructing the three-dimensional structure model of the rice seedling plant comprises the following steps:
step 1, acquiring a CT-scanned two-dimensional microscopic image of a target rice seedling plant;
step 2, cropping the CT-scanned two-dimensional microscopic image obtained in step 1 to obtain a target center image Z;
step 3, removing noise from the target center image Z obtained in step 2 to obtain a denoised clean image Y;
step 4, reconstructing a three-dimensional model from the denoised clean image Y obtained in step 3 by adopting a deep convolutional generative adversarial network to obtain reconstruction data;
and step 5, processing the reconstruction data obtained in step 4 to generate a three-dimensional structure model of the target rice seedling plant.
Further, in step 2, a cropping operation in OpenCV is used to crop the CT-scanned two-dimensional microscopic image of the rice seedling plant to obtain the target center image Z. First, the pixel coordinates (x_min, y_min) of the upper-left corner of the target center image Z are determined; then the width and height (w, h) occupied by the target center image Z are determined, and the region img[y_min:(y_min + h), x_min:(x_min + w)] is cut from the CT-scanned two-dimensional microscopic image to obtain the target center image Z.
Further, in step 3, an adaptive three-dimensional block-matching sparse-residual algorithm removes noise from the target center image Z, as follows:
first, the target center image Z is initialized with BM3D to obtain an initial denoised image y_find, a good approximation of the denoised clean image Y, and the true group sparse code B is obtained from it;
then a denoised clean image Y is obtained by applying to the target center image Z an adaptive group sparse-residual algorithm improved on the basis of group sparse coding.
Further, the process of obtaining the denoised clean image Y by applying the adaptive group sparse-residual algorithm to the target center image Z is as follows:
(1) For the target center image Z, k image blocks of size N×N are selected and represented as vectors z_K. For each image block z_K, a group of highly similar image blocks is matched using an adaptive block-search algorithm, where SSIM denotes structural similarity, y_find denotes the initial denoised image after BM3D processing, and ŷ^(t) denotes the denoised image at the t-th iteration.
Define ρ as a small constant. If SSIM(y_find, ŷ^(t)) exceeds the threshold determined by ρ, the indices of each group of m similar blocks are obtained from ŷ^(t) as the target image; otherwise they are obtained from y_find.
The m similar blocks of z_K are written in matrix form and denoted Z_K ∈ R^(N×C); Z_K is called the group of similar blocks, where each image block z_K has dimension N×N and there are C similar blocks. In the same way, the initial denoised image y_find processed by BM3D is block-matched into groups to obtain the similar blocks Y_K.
(2) For each group of similar blocks Z_K, a sparse coefficient is calculated using the adaptive group sparse-residual algorithm, and the denoised clean image Y is then computed, specifically as follows:
first, the group sparse coding model is

B_K = argmin_B { ||Z_K − D_K B_K||_F^2 + λ_K ||B_K||_1 }

where Z_K denotes a group of similar blocks of the noisy image, D_K denotes the dictionary of the noisy image group, B_K denotes the sparse coefficients of the noisy image group, ||·||_F denotes the Frobenius norm, ||·||_1 denotes the sparsity of B_K, and λ_K denotes the regularization parameter of the k-th group.
Secondly, to improve the denoising quality, an adaptive group sparse-residual model is established to denoise the image:

B_K = argmin_B { ||Z_K − D_K B_K||_F^2 + λ_K ||B_K − A_K||_1 }

where A_K denotes the sparse coefficients of the clean image group.
Then the unknowns D_K, A_K and B_K in the model are determined in turn. Determination of the noisy-image group sparse coefficients B_K: the noisy-image sparse coefficients are updated. Determination of the clean-image group sparse coefficients A_K: the sparse coefficients of the real image are computed from y_find. Determination of the dictionary D_K: the covariance matrix Σ_K of Z_K is computed and decomposed with the SVD as Σ_K = U_K Δ_K V_K^T, where U_K and V_K are unitary matrices and Δ_K is a matrix that is all 0 except for the elements on its main diagonal, each of which is called a singular value; the matrix U_K so obtained is the sought dictionary. Determination of the regularization parameter: λ_{K,i} = c·σ_n^2 / (δ_i + ε), where c and ε are small constants, δ_i is the variance of the estimated residual B_K − A_K, and σ_n^2 is the variance of the noise.
Finally, the group estimate X_K = D_K B_K is computed; after this is repeated for each group, the groups are aggregated to obtain the denoised clean image Y.
Further, the procedure of step 4 is as follows.
First, the obtained denoised clean image Y is converted into a stacked three-dimensional microscopic image, which is divided into 128×128 training-set data, i.e. the real images.
Then an initial random noise vector Z of length 100 is set and input into the generator; the microscopic image produced by the generator is an unreal image G(Z). The real image and the unreal image are input into the discriminator; with the generator fixed, the discriminator is trained to discriminate the images accurately, so that the real image is discriminated as D(Y) = 1 and the false image as D[G(Z)] = 0. The probability that the discriminator assigns to the unreal image is fed back to the generator; with the discriminator fixed, the generator is trained so that the probability the discriminator assigns to the generated image approaches 1.
The cost function of the adversarial game, i.e. of the deep convolutional generative adversarial network, is as follows:

min_G max_D V(D, G) = E_{Y∼P_data(Y)}[log D(Y)] + E_{Z∼P_Z(Z)}[log(1 − D(G(Z)))]

where E(·) denotes the expected value under the corresponding distribution function, P_data(Y) denotes the distribution of real samples, P_Z(Z) denotes the distribution of false samples, Z is a random noise vector, G(Z) denotes the image generated by the generator, and D(Y) denotes the discriminator's judgment of the real image, which should be as close to 1 as possible; D[G(Z)] denotes the discriminator's judgment of the generated image, which the discriminator should drive as close to 0 as possible.
First, the generator is fixed and the discriminator is trained to maximize the objective:

max_D E_{Y∼P_data(Y)}[log D(Y)] + E_{Z∼P_Z(Z)}[log(1 − D(G(Z)))]

Then the discriminator is fixed and the generator is trained to minimize it:

min_G E_{Z∼P_Z(Z)}[log(1 − D(G(Z)))]

In this process, the learning rate is set to 0.0002, the number of iterations to 1000, the batch size to 128 and the momentum to 0.5, and the three-dimensional microstructure data of the target rice seedling plant are obtained as the reconstruction data.
Further, in step 5, the generated reconstruction data are processed as follows: after training, the generator writes an intermediate HDF5 file to the specified output directory; a post-processing program first converts the generated images to TIFF format, the TIFF files are then visualized in software, and finally the three-dimensional structure model of the target rice seedling is generated.
Compared with the prior art, the invention has the following advantages:
(1) To make the internal microstructure in the CT-scanned two-dimensional microscopic image more accurate, an adaptive three-dimensional block-matching sparse-residual algorithm is proposed to denoise the image, so that the microstructure is more accurate and the reconstruction is more faithful.
(2) Three-dimensional reconstruction with the deep convolutional generative adversarial network yields a model closer to the real entity, makes the internal structure convenient to observe, runs faster, and keeps the system simpler.
Drawings
FIG. 1 is a block diagram of the system flow of the present invention.
Fig. 2 is a schematic block diagram of the BM3D algorithm of the present invention.
Fig. 3 is a schematic diagram of the GSR algorithm of the present invention.
FIG. 4 is a schematic diagram of a three-dimensional reconstruction algorithm of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in FIG. 1, the method for reconstructing the three-dimensional structure model of the rice seedling plant comprises the following steps:
step 1, acquiring a CT scanning two-dimensional microscopic image of a target rice seedling plant;
and (5) cultivating rice seedling plants. Selecting rice seedlings of required variety and days, fixing the rice seedling plants in a closed container made of a material with low X-ray absorption rate, and adding water for moisturizing. The closed container is placed on a rotatable object stage, an X-ray source emits X-rays to penetrate through an object, a series of projection images with different light and shade are formed on a detector, then the internal three-dimensional structure of the object is reconstructed through a back projection fault, then data are transmitted into a computer, the object stage is moved up and down, then the steps are repeated, new fault images are obtained, the obtained rice seedling plant images of different faults are spliced, and therefore the images which are high enough in the longitudinal axis direction are obtained.
Step 2, cropping the CT-scanned two-dimensional microscopic image of the rice seedling plant obtained in step 1 to obtain the target center image Z, specifically as follows:
The target center image Z is obtained by cropping the CT-scanned two-dimensional microscopic image of the rice seedling plant. The purpose is to retain the main information of the target center image, so that the required two-dimensional microscopic image of the rice seedling plant occupies most of the central area, the blank area is reduced, subsequent denoising is easier to perform, and interference is reduced. The specific way of cropping each target is not limited; here a cropping operation in OpenCV is used. First, the pixel coordinates (x_min, y_min) of the upper-left corner of the target center image Z are determined; then the width and height (w, h) occupied by the target center image Z are determined, and the region img[y_min:(y_min + h), x_min:(x_min + w)] is cut from the CT-scanned two-dimensional microscopic image to obtain the target center image Z. The cropped target center image Z is saved to a folder for subsequent image denoising.
Step 3, removing noise from the target center image Z with the image denoising unit to obtain the denoised clean image Y. The image denoising unit operates as follows:
The target center image Z is initialized with BM3D to obtain an initial denoised image y_find, a good approximation of the denoised clean image Y, from which the true group sparse code B is obtained, as shown in Fig. 2. The specific operations are as follows:
selecting image blocks of fixed size NxN from Z according to the obtained target central image ZxWhere x is the coordinate of the upper left corner of the block. Each processed block is marked as R, and the current processed image block is fixed as RHard thresholding the obtained coefficients to obtain inter-block similarity using a normalized 2D linear transformWherein the image block size is Nht×Nht,ZxRepresenting an image block located at x in the image Z,representing hard-threshold filtering of a two-dimensional image block, gamma2DRepresenting a hard threshold filter factor, | | - | luminance2Represents L2And (4) norm. The block matching result isWhereinIs the maximum d-distance that two image blocks are similar,is a processing blockSet of all similar blocks, ZxRepresenting an image block located at x in the image Z,representing a processing block.
Will be provided withAll the similar blocks in the three-dimensional matrix are sorted according to the similarity from high to low to form a three-dimensional matrixWherein the size is To representThe number of similar blocks contained therein.Efficient noise attenuation by hard thresholding followed by inverse transformation to produce a 3D array of block estimatesWhereinRepresenting cooperative hard threshold filtering, gamma3DRepresenting a co-operative hard threshold filter factor,to representInverse transformation of (1), arrayIncludedEstimated value of each stackSubscript xmIndicating the position of this estimated block, superscript xRA reference block is indicated.
Collaborative hard-threshold filtering produces multiple estimates of the same reference block, so each pixel may receive several estimates; the process of computing their weighted average is called aggregation. Aggregation yields the basic estimate image:

y^{basic}(x) = Σ_{x_R} Σ_{x_m ∈ S_{x_R}^{ht}} w_{x_R}^{ht} Ŷ_{x_m}^{x_R}(x) χ_{x_m}(x) / Σ_{x_R} Σ_{x_m ∈ S_{x_R}^{ht}} w_{x_R}^{ht} χ_{x_m}(x)

where χ_{x_m} is the characteristic (indicator) function of the similar block, Ŷ_{x_m}^{x_R} is one estimate of the reference block, and w_{x_R}^{ht} is the weight of the similar block, given by w_{x_R}^{ht} = 1/(σ^2 N_{har}) if N_{har} ≥ 1 and 1 otherwise, where σ denotes the noise standard deviation and N_{har} is the number of non-zero elements of the three-dimensional array after collaborative hard-threshold filtering.
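The aggregation step, a weighted average of overlapping block estimates, can be sketched as follows. This is a simplified single-channel version; the weight function follows the standard BM3D convention and is an assumption, not code from the patent:

```python
import numpy as np

def aggregate(block_estimates, positions, weights, image_shape, block):
    """Weighted average of overlapping block estimates (BM3D aggregation).

    numerator[x]   = sum over blocks covering x of w_R * Y_hat_R(x)
    denominator[x] = sum over blocks covering x of w_R
    """
    num = np.zeros(image_shape)
    den = np.zeros(image_shape)
    for est, (y, x), w in zip(block_estimates, positions, weights):
        num[y:y + block, x:x + block] += w * est
        den[y:y + block, x:x + block] += w
    den[den == 0] = 1.0  # leave pixels not covered by any block at zero
    return num / den

def hard_threshold_weight(sigma, n_nonzero):
    """w = 1/(sigma^2 * N_har) if N_har >= 1, else 1 (standard BM3D weight)."""
    return 1.0 / (sigma ** 2 * n_nonzero) if n_nonzero >= 1 else 1.0

# two overlapping 2x2 block estimates on a 2x3 image
est = [np.full((2, 2), 4.0), np.full((2, 2), 8.0)]
img = aggregate(est, positions=[(0, 0), (0, 1)], weights=[1.0, 1.0],
                image_shape=(2, 3), block=2)
# overlapping middle column averages to (4 + 8) / 2 = 6
```

With unequal weights the overlap column would shift toward the more reliable (less noisy) block estimate, which is exactly why BM3D weights by the inverse of the retained-coefficient count.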
The preliminary estimate of the real image obtained in the steps above is grouped again, and collaborative Wiener filtering is performed within the preliminary estimate to improve the noise-reduction effect.
Since the noise in the basic estimate is already significantly attenuated, the inter-block similarity for the final estimate can be computed with the ideal L2 norm, and the coordinate set of block matching is S_{x_R}^{wie} = {x : ||Ŷ_x^{basic} − Ŷ_{x_R}^{basic}||_2^2 / (N_{wie})^2 < τ_match^{wie}}.
According to the obtained coordinate set S_{x_R}^{wie}, two groups are formed: one from the preliminary estimate and the other from the noisy observation. Ŷ_{S^{wie}}^{basic} is stacked from the preliminary-estimate blocks, and Z_{S^{wie}} is stacked from the corresponding noisy blocks.
The empirical Wiener coefficients are defined from the energy of the 3D transform coefficients of the basic estimate:

W_{S^{wie}} = |T_3D(Ŷ_{S^{wie}}^{basic})|^2 / (|T_3D(Ŷ_{S^{wie}}^{basic})|^2 + σ^2)

Collaborative Wiener filtering is realized as the element-by-element multiplication of the 3D transform coefficients T_3D(Z_{S^{wie}}) of the noisy data with the Wiener shrinkage factor W_{S^{wie}}, followed by the inverse transform, producing a 3D array of block estimates: Ŷ_{S^{wie}} = T_3D^{-1}(W_{S^{wie}} · T_3D(Z_{S^{wie}})). The set contains the block estimates Ŷ_{x_m}^{x_R} located at the matching positions.
The different estimates of the same reference block are weighted-averaged to obtain the final estimate:

y^{final}(x) = Σ_{x_R} Σ_{x_m ∈ S_{x_R}^{wie}} w_{x_R}^{wie} Ŷ_{x_m}^{x_R}(x) χ_{x_m}(x) / Σ_{x_R} Σ_{x_m ∈ S_{x_R}^{wie}} w_{x_R}^{wie} χ_{x_m}(x)

where w_{x_R}^{wie} = σ^{-2} ||W_{S^{wie}}||_2^{-2} is the weight of each similar block.
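The core of the Wiener step is the shrinkage of transform coefficients by W = |T(basic)|^2 / (|T(basic)|^2 + σ^2). A minimal sketch, applied directly to coefficient arrays (the forward/inverse 3D transform around it is omitted for brevity):

```python
import numpy as np

def wiener_shrinkage(coef_basic: np.ndarray, coef_noisy: np.ndarray, sigma: float) -> np.ndarray:
    """Empirical Wiener filtering in the transform domain.

    W        = |T(basic)|^2 / (|T(basic)|^2 + sigma^2)   (shrinkage factor)
    filtered = W * T(noisy); the inverse 3D transform would follow.
    """
    w = coef_basic ** 2 / (coef_basic ** 2 + sigma ** 2)
    return w * coef_noisy

basic = np.array([10.0, 1.0, 0.0])   # transform coefficients of the basic estimate
noisy = np.array([11.0, 2.0, 3.0])   # transform coefficients of the noisy group
out = wiener_shrinkage(basic, noisy, sigma=1.0)
# strong coefficients pass almost unchanged; coefficients that are zero
# in the basic estimate are suppressed entirely
```

This shows why the basic estimate matters: it acts as an oracle for which coefficients carry signal and which are pure noise.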
Since BM3D has excellent denoising performance, y_find can be regarded as a good approximation of the original image Z, and the true group sparse code B can therefore be obtained from y_find.
A denoised clean image Y is obtained by applying the adaptive group sparse-residual algorithm to the target center image Z, as shown in Fig. 3. The specific operations are as follows:
For the target center image Z, k image blocks of size N×N are selected and represented as vectors z_K. For each image block z_K, a group of highly similar image blocks is matched using an adaptive block-search algorithm, where SSIM denotes structural similarity, y_find denotes the initial denoised image after BM3D processing, and ŷ^(t) denotes the denoised image at the t-th iteration.
Define ρ as a small constant. If SSIM(y_find, ŷ^(t)) exceeds the threshold determined by ρ, the indices of each group of m similar blocks are obtained from ŷ^(t) as the target image; otherwise they are obtained from y_find.
The m similar blocks of z_K are written in matrix form and denoted Z_K ∈ R^(N×C); Z_K is called the group of similar blocks, where each image block z_K has dimension N×N and there are C similar blocks. In the same way, the initial denoised image y_find processed by BM3D is block-matched into groups to obtain the similar blocks Y_K.
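Block matching itself can be sketched in a few lines: for a reference block, rank candidate blocks by similarity and keep the m best. Squared Euclidean distance is used here as a simple stand-in for the patent's SSIM-based adaptive criterion:

```python
import numpy as np

def match_similar_blocks(ref: np.ndarray, candidates: list, m: int) -> list:
    """Return indices of the m candidate blocks most similar to ref.

    The patent ranks blocks with an SSIM-based adaptive criterion;
    squared Euclidean distance is used here as a simple stand-in.
    """
    dists = [float(np.sum((ref - c) ** 2)) for c in candidates]
    return sorted(range(len(candidates)), key=lambda i: dists[i])[:m]

ref = np.zeros((4, 4))
cands = [ref + 0.1, ref + 5.0, ref.copy(), ref + 2.0]
idx = match_similar_blocks(ref, cands, m=2)
# the identical block (index 2) and the nearly identical one (index 0) win
```

Stacking `[cands[i] for i in idx]` column-wise as vectors then gives the group matrix Z_K described above.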
For each group of similar blocks Z_K, a sparse coefficient is calculated using the adaptive group sparse-residual algorithm, and the denoised clean image Y is then computed, specifically as follows:
first, the group sparse coding model is

B_K = argmin_B { ||Z_K − D_K B_K||_F^2 + λ_K ||B_K||_1 }

where Z_K denotes a group of similar blocks of the noisy image, D_K denotes the dictionary of the noisy image group, B_K denotes the sparse coefficients of the noisy image group, ||·||_F denotes the Frobenius norm, ||·||_1 denotes the sparsity of B_K, and λ_K denotes the regularization parameter of the k-th group.
Secondly, to improve the denoising quality, an adaptive group sparse-residual model is established to denoise the image:

B_K = argmin_B { ||Z_K − D_K B_K||_F^2 + λ_K ||B_K − A_K||_1 }

where A_K denotes the sparse coefficients of the clean image group.
Then the unknowns D_K, A_K and B_K in the model are determined in turn. Determination of the noisy-image group sparse coefficients B_K: the noisy-image sparse coefficients are updated. Determination of the clean-image group sparse coefficients A_K: the sparse coefficients of the real image are computed from y_find. Determination of the dictionary D_K: the covariance matrix Σ_K of Z_K is computed and decomposed with the SVD as Σ_K = U_K Δ_K V_K^T, where U_K and V_K are unitary matrices and Δ_K is a matrix that is all 0 except for the elements on its main diagonal, each of which is called a singular value; the matrix U_K so obtained is the sought dictionary. Determination of the regularization parameter: λ_{K,i} = c·σ_n^2 / (δ_i + ε), where c and ε are small constants, δ_i is the variance of the estimated residual B_K − A_K, and σ_n^2 is the variance of the noise.
Finally, the group estimate X_K = D_K B_K is computed; after this is repeated for each group, the groups are aggregated to obtain the denoised clean image Y.
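The dictionary step above, SVD of the group's covariance matrix with the left singular vectors taken as an orthonormal dictionary, can be sketched as:

```python
import numpy as np

def learn_group_dictionary(group: np.ndarray) -> np.ndarray:
    """Dictionary for one similar-block group via SVD of its covariance.

    group : (N, C) matrix whose C columns are vectorized similar blocks.
    Sigma = cov(group); Sigma = U @ diag(s) @ Vt; U (unitary) is the dictionary.
    """
    sigma = np.cov(group)            # (N, N) covariance across the C blocks
    u, s, vt = np.linalg.svd(sigma)  # singular values s are on the diagonal of Delta
    return u

rng = np.random.default_rng(1)
group = rng.standard_normal((8, 5))  # 5 vectorized similar blocks of length 8
d = learn_group_dictionary(group)
# U is unitary, so D^T D = I
assert np.allclose(d.T @ d, np.eye(8), atol=1e-8)
```

Because the covariance matrix is symmetric positive semi-definite, this SVD coincides with its eigendecomposition, so the dictionary atoms are the principal directions of the block group.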
Step 4, performing three-dimensional model reconstruction on the clean image by adopting a depth convolution countermeasure generation network, as shown in fig. 4, specifically operating as follows:
firstly, converting the obtained denoised clean image Y into a stacked three-dimensional microscopic image, and segmenting the stacked three-dimensional microscopic image into 128 multiplied by 128 training set data, namely a real image;
then setting an initial random noise vector Z as 100 and inputting the initial random noise vector Z into a generator, wherein a microscopic image generated by the generator is an unreal image G (Z); inputting a real image and an unreal image into a discriminator, fixing a generator, training the discriminator to accurately discriminate the images, wherein the true image D (Y) is accurately discriminated to be 1, and the false image D [ G (z) ] is 0; inputting the probability of the unreal image fed back by the discriminator into a generator, fixing the discriminator and training the generator to enable the probability of the unreal image generated by the discriminator to be close to 1;
the cost function defining the penalty, i.e. the deep convolution generation countermeasure network, is as follows:
wherein E(·) represents the expected value under the indicated distribution, Pdata(Y) represents the distribution of real samples, Pz(Z) represents the distribution of false samples, Z is a random noise vector, G(Z) represents the image generated by the generator, and D(Y) represents the discriminator's judgment of a real image, where the closer its result is to 1 the better; D[G(Z)] represents the discriminator's judgment of the generated image, whose result should ideally be as close to 0 as possible.
First, the generator is fixed and the discriminator is trained to maximize the objective:

max_D V(D, G) = E_{Y∼Pdata(Y)}[log D(Y)] + E_{Z∼Pz(Z)}[log(1 − D(G(Z)))]
Then the discriminator is fixed and the generator is trained to minimize:

min_G V(D, G) = E_{Z∼Pz(Z)}[log(1 − D(G(Z)))]
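The two objectives above can be evaluated numerically as in the NumPy sketch below. The discriminator outputs d_real and d_fake are toy probabilities we made up for illustration, not outputs of the patent's trained networks.

```python
import numpy as np

def discriminator_objective(d_real, d_fake):
    """V(D, G) = E[log D(Y)] + E[log(1 - D(G(Z)))]:
    the discriminator is trained to maximize this value."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def generator_objective(d_fake):
    """The generator minimizes E[log(1 - D(G(Z)))], i.e. it is
    rewarded when the discriminator's output on fakes rises toward 1."""
    return np.mean(np.log(1.0 - d_fake))

# toy probabilities from a hypothetical discriminator
d_real = np.array([0.9, 0.8])  # near 1: real images judged real
d_fake = np.array([0.1, 0.2])  # near 0: fakes detected
v = discriminator_objective(d_real, d_fake)
g = generator_objective(d_fake)
```

Note that a sharper discriminator (d_real nearer 1, d_fake nearer 0) raises V, while a better generator (d_fake nearer 1) lowers the generator objective, which is the adversarial tension the text describes.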
in this process, the learning rate is set to 0.0002, the number of iterations to 1000, the batch size to 128, and the momentum to 0.5, yielding the three-dimensional microstructure data of the target rice seedling plant as the reconstruction data.
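The training settings listed above can be collected into one configuration mapping. Reading the momentum value as the β1 coefficient of the usual DCGAN Adam optimizer recipe, and attaching the noise length and patch edge length from the earlier steps, are assumptions on our part.

```python
# Hyperparameters reported in the description above for the DCGAN stage.
dcgan_config = {
    "learning_rate": 0.0002,
    "iterations": 1000,
    "batch_size": 128,
    "momentum": 0.5,   # assumed to be Adam beta1, per the common DCGAN recipe
    "noise_dim": 100,  # length of the random noise vector Z (from the text)
    "patch_size": 128, # edge length of the training sub-images (from the text)
}
```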
Step 5, process the generated reconstruction data as follows: after the generative training is finished, the generator writes an hdf5 intermediate file into the specified output directory. First, a post-processing program converts the generated images into tiff files; the tiff files are then visualized in software; finally, the three-dimensional structure model of the target rice seedling is generated:
the stored File is converted into an image format by using a python language, all images are imported in ImageJ software firstly, and then Plugins → 3DViewer is clicked to carry out three-dimensional visualization on the images, and in order to store the images, File → SaveView is clicked for the next view.
The embodiments described above are only preferred embodiments of the present invention and do not limit its concept and scope. Various modifications and improvements made to the technical solution of the present invention by those skilled in the art, without departing from its design concept, shall fall within the protection scope of the present invention; the technical content claimed is fully set forth in the claims.
Claims (6)
1. The method for reconstructing the three-dimensional structure model of the rice seedling plant is characterized by comprising the following steps of:
step 1, acquiring a CT scanning two-dimensional microscopic image of a target rice seedling plant;
step 2, shearing the CT scanning two-dimensional microscopic image of the rice seedling plant obtained in the step 1 to obtain a target central image Z;
step 3, removing noise points in the target central image Z obtained in the step 2 to obtain a de-noised clean image Y;
step 4, carrying out three-dimensional model reconstruction on the denoised clean image Y obtained in the step 3 by adopting a deep convolution countermeasure generation network to obtain reconstruction data;
and 5, processing the reconstruction data obtained in the step 4 to generate a three-dimensional structure model of the target rice seedling plant.
2. The method for reconstructing the three-dimensional structural model of the rice seedling plant according to claim 1, wherein in step 2 the CT-scanned two-dimensional microscopic image of the rice seedling plant is cropped with the cropping function in OpenCV to obtain the target central image Z: first, the pixel coordinates (x_min, y_min) of the upper-left end point of the target central image Z are determined; then the width and height (w, h) occupied by the target central image Z are determined; finally, the OpenCV cropping function img(x_min, y_min, (x_min + w), (y_min + h)) is applied to the CT-scanned two-dimensional microscopic image of the rice seedling plant to obtain the target central image Z.
3. The rice seedling plant three-dimensional structure model reconstruction method according to claim 1, wherein in step 3 the noise points in the target central image Z are removed with an adaptive three-dimensional block-matching sparse-residual algorithm, as follows:
firstly, the target central image Z is initialized with BM3D to obtain y_find, a good initial approximation of the denoised clean image Y, and the real group sparse code B is obtained;
then, the denoised clean image Y is obtained from the target central image Z with the adaptive group sparse residual algorithm improved on the basis of group sparse coding.
4. The method for reconstructing the three-dimensional structural model of the rice seedling plant as claimed in claim 3, wherein the process of obtaining the denoised clean image Y by using the adaptive group sparse residual algorithm for the target central image Z is as follows:
(1) The target central image Z is divided into image blocks of size N×N, represented by k vectors z_k. For each image block z_k, a group of highly similar image blocks is matched with the adaptive block-search algorithm according to:

S_t = SSIM(y_find, ŷ_t)

wherein SSIM denotes the structural similarity, y_find denotes the denoised initial image after BM3D processing, and ŷ_t denotes the denoised image at the t-th iteration;

a small constant p is defined: if S_t ≥ p, similar blocks are searched in ŷ_t and the indices of each group of m similar blocks are obtained for the target image; otherwise, similar blocks are searched in y_find and the indices of each group of m similar blocks are obtained for the target image;

the matched similar blocks of z_k are written in matrix form, denoted Z_Gk ∈ R^{N×C}, and Z_Gk is called the group of similar blocks, wherein each image block z_k has dimension N×N and there are C similar blocks; the same block matching is applied to the BM3D-denoised initial image y_find to obtain the groups of similar blocks Y_Gk;
(2) For each group of similar blocks Z_Gk, the sparse coefficient is calculated with the adaptive group sparse residual algorithm, and the denoised clean image Y is then computed, specifically as follows:
firstly, a group-based sparse coding model is established as follows:

A_k = argmin_{A_k} { ‖Z_Gk − D_k A_k‖_F² + λ_k ‖A_k‖_1 }

wherein Z_Gk denotes the group of similar blocks of the noisy image, D_k denotes the dictionary of the noisy-image group, A_k denotes the sparse coefficient of the noisy-image group, ‖·‖_F denotes the Frobenius norm, ‖·‖_1 measures the sparsity of A_k, and λ_k denotes the regularization parameter of the k-th group;
secondly, to achieve better denoising quality, an adaptive group sparse residual model is established to denoise the image:

A_k = argmin_{A_k} { ‖Z_Gk − D_k A_k‖_F² + λ_k ‖A_k − B_k‖_1 }

wherein B_k denotes the sparse coefficient of the clean image group;
then, the unknowns in the model, the noisy-group sparse coefficient A_k and the clean-group sparse coefficient B_k, are determined in turn. Determination of the noisy-image group sparse coefficient A_k: the noise-image sparse coefficient A_k is updated by solving the group sparse residual model above. Determination of the clean-image group sparse coefficient B_k: the sparse coefficient of the real (BM3D pre-denoised) image group is computed as B_k. Determination of the dictionary D_k: the covariance matrix Ω_k of the group matrix Z_Gk is computed and decomposed with SVD as Ω_k = U_k Δ_k V_kᵀ, wherein U_k and V_k are both unitary matrices and Δ_k is a matrix that is all 0 except for the elements on the main diagonal, each of which is called a singular value; the matrix U_k so obtained is the dictionary sought. Determination of the regularization parameter λ_k: λ_i = c · 2√2 · σ_n² / (δ_i + ε), where c and ε are small constants, δ_i is the variance of the estimated residual, and σ_n² is the variance of the noise;
5. The method for reconstructing a three-dimensional structure model of a rice seedling plant according to claim 1, wherein the process of step 4 is as follows:
firstly, the denoised clean image Y is converted into a stacked three-dimensional microscopic image, which is segmented into 128 × 128 training-set data, i.e., real images;
then an initial random noise vector Z of dimension 100 is set and input into the generator; the microscopic image produced by the generator is the non-real (fake) image G(Z). The real and fake images are input into the discriminator; with the generator fixed, the discriminator is trained to discriminate accurately, so that a real image is judged D(Y) = 1 and a fake image D[G(Z)] = 0. The probability fed back by the discriminator for the fake image is then input to the generator; with the discriminator fixed, the generator is trained until the discriminator's output for the generated fake image approaches 1;
the cost function of the adversarial loss, i.e. of the deep convolution generative adversarial network, is defined as follows:

min_G max_D V(D, G) = E_{Y∼Pdata(Y)}[log D(Y)] + E_{Z∼Pz(Z)}[log(1 − D(G(Z)))]
wherein E(·) represents the expected value under the indicated distribution, Pdata(Y) represents the distribution of real samples, Pz(Z) represents the distribution of false samples, Z is a random noise vector, G(Z) represents the image generated by the generator, and D(Y) represents the discriminator's judgment of a real image, where the closer its result is to 1 the better; D[G(Z)] represents the discriminator's judgment of the generated image, whose result should ideally be as close to 0 as possible;
first, the generator is fixed and the discriminator is trained to maximize the objective:

max_D V(D, G) = E_{Y∼Pdata(Y)}[log D(Y)] + E_{Z∼Pz(Z)}[log(1 − D(G(Z)))]
then the discriminator is fixed and the generator is trained to minimize:

min_G V(D, G) = E_{Z∼Pz(Z)}[log(1 − D(G(Z)))]
in this process, the learning rate is set to 0.0002, the number of iterations to 1000, the batch size to 128, and the momentum to 0.5, yielding the three-dimensional microstructure data of the target rice seedling plant as the reconstruction data.
6. The method for reconstructing a three-dimensional structure model of a rice seedling plant according to claim 1, wherein in step 5 the generated reconstruction data is processed as follows: after the generative training is finished, the generator writes an hdf5 intermediate file into the specified output directory; a post-processing program first converts the generated images into tiff files; the tiff files are then visualized in software; finally, the three-dimensional structure model of the target rice seedling is generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111066567.5A CN113781442B (en) | 2021-09-13 | 2021-09-13 | Three-dimensional structure model reconstruction method for rice seedling plants |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781442A true CN113781442A (en) | 2021-12-10 |
CN113781442B CN113781442B (en) | 2024-05-31 |
Family
ID=78842680
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170103161A1 (en) * | 2015-10-13 | 2017-04-13 | The Governing Council Of The University Of Toronto | Methods and systems for 3d structure estimation |
US20180240219A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Denoising medical images by learning sparse image representations with a deep unfolding approach |
CN109801375A (en) * | 2019-01-24 | 2019-05-24 | 电子科技大学 | Porous material three-dimensional reconstruction method based on depth convolution confrontation neural network |
CN110223231A (en) * | 2019-06-06 | 2019-09-10 | 天津工业大学 | A kind of rapid super-resolution algorithm for reconstructing of noisy image |
CN112967210A (en) * | 2021-04-29 | 2021-06-15 | 福州大学 | Unmanned aerial vehicle image denoising method based on full convolution twin network |
Non-Patent Citations (1)
Title |
---|
WANG Zuhui; SUN Liujie; SHAO Xue: "An Improved Three-Dimensional Block-Matching Image Denoising Algorithm" (一种改进的三维块匹配图像去噪算法), Packaging Engineering (包装工程), no. 21, 15 November 2016 (2016-11-15) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||