CN113781442A - Rice seedling plant three-dimensional structure model reconstruction method - Google Patents


Info

Publication number
CN113781442A
CN113781442A (application CN202111066567.5A)
Authority
CN
China
Prior art keywords
image
dimensional
group
rice seedling
target
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202111066567.5A
Other languages
Chinese (zh)
Other versions
CN113781442B (en)
Inventor
朱德泉
于倩男
陈霞
廖娟
张顺
况福明
薛康
陈民慧
张晓双
Current Assignee
Anhui Agricultural University AHAU
Original Assignee
Anhui Agricultural University AHAU
Priority date
Filing date
Publication date
Application filed by Anhui Agricultural University (AHAU)
Priority to CN202111066567.5A
Publication of CN113781442A
Application granted
Publication of CN113781442B
Status: Active
Anticipated expiration

Classifications

    • G06T 7/0002 — Image analysis: inspection of images, e.g. flaw detection
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2415 — Classification techniques based on parametric or probabilistic models, e.g. likelihood ratio
    • G06N 3/045 — Neural networks: combinations of networks
    • G06N 3/08 — Neural networks: learning methods
    • G06T 5/70 — Image enhancement or restoration: denoising; smoothing
    • G06T 7/10 — Image analysis: segmentation; edge detection
    • G06T 2207/10061 — Image acquisition: microscopic image from scanning electron microscope
    • G06T 2207/20021 — Dividing image into blocks, subimages or windows
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30188 — Subject of image: vegetation; agriculture


Abstract

The invention discloses a rice seedling plant three-dimensional structure model reconstruction method, comprising a scanning device for collecting two-dimensional microscopic images and a computer apparatus for obtaining the three-dimensional microstructure model. The computer apparatus comprises an image cropping unit, an image denoising unit, an image reconstruction unit and an image data processing unit. The image cropping unit crops the acquired two-dimensional micro-CT image to obtain a target central image Z; the image denoising unit removes the noise in the target central image to obtain the corresponding clean image Y; the image reconstruction unit reconstructs a three-dimensional microstructure model from the clean image with a deep convolutional generative adversarial network; and the image data processing unit processes the reconstruction data to generate the three-dimensional microstructure model of the target rice seedling plant. The three-dimensional microstructure model of the rice seedling plant obtained by the invention is closer to the physical plant, and its internal structure is convenient to observe.

Description

Rice seedling plant three-dimensional structure model reconstruction method
Technical Field
The invention relates to the field of plant three-dimensional model construction, in particular to a rice seedling plant three-dimensional structure model reconstruction method.
Background
Rice is one of the world's major crops. Rice accounts for about a quarter of China's grain-crop sowing area and a large share of its grain output. As one of China's most important grain crops, rice is affected by climate, environment and other factors, so planting area and yield are unevenly distributed across regions. Rice is planted mainly by seedling transplanting or direct seeding; most rice in China is transplanted, but the transplanting process causes mechanical damage, pest damage and other injuries. Three-dimensional microscopic reconstruction of the rice seedling plant can therefore provide an accurate physical structure model for research on the mechanical-damage mechanism, self-healing mechanism and metabolism of rice seedlings, and is of great significance for improving seedling survival rate, stress resistance, yield and quality.
Classical denoising systems include deep-learning-based image denoising, signal-filtering algorithms and singular-value-decomposition-based denoising; with the development of compressive sensing theory, denoising based on sparse representation and constrained regularization has become the latest direction in image denoising. Among these, the BM3D algorithm and sparse-representation denoising are the better algorithms in the field, but most such algorithms have high complexity and denoise high-intensity noise poorly. The invention therefore proposes efficient image denoising that combines the BM3D algorithm with a sparse algorithm through the group sparse residual.
Disclosure of Invention
The invention aims to provide a rice seedling plant three-dimensional structure model reconstruction method, solving the problems that the prior art offers no accurate physical model of the rice seedling plant microstructure and cannot acquire data on the seedling mechanical-damage mechanism, self-healing mechanism, metabolism and the like in real time.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the method for reconstructing the three-dimensional structure model of the rice seedling plant comprises the following steps:
step 1, acquiring a CT scanning two-dimensional microscopic image of a target rice seedling plant;
step 2, shearing the CT scanning two-dimensional microscopic image of the rice seedling plant obtained in the step 1 to obtain a target central image Z;
step 3, removing noise points in the target central image Z obtained in the step 2 to obtain a de-noised clean image Y;
step 4, carrying out three-dimensional model reconstruction on the denoised clean image Y obtained in the step 3 by adopting a deep convolution countermeasure generation network to obtain reconstruction data;
and 5, processing the reconstruction data obtained in the step 4 to generate a three-dimensional structure model of the target rice seedling plant.
Further, in step 2 the CT two-dimensional microscopic image of the rice seedling plant is cropped with a cropping function in OpenCV to obtain the target central image Z: first the pixel coordinates (x_min, y_min) of the upper-left corner of the target central image Z are determined, then the width and height (w, h) it occupies, and the region img(x_min, y_min, x_min + w, y_min + h) is cropped from the CT two-dimensional microscopic image to obtain the target central image Z.
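As a minimal sketch of the crop above (function and variable names are illustrative, not from the patent): OpenCV images are NumPy arrays indexed [row, col], so the region (x_min, y_min, x_min + w, y_min + h) is a plain slice.

```python
import numpy as np

def crop_center_region(img: np.ndarray, x_min: int, y_min: int, w: int, h: int) -> np.ndarray:
    """Crop the target central region Z from a 2D micro-CT slice.

    OpenCV loads images as arrays indexed [row, col] = [y, x], so the
    rectangle (x_min, y_min) .. (x_min + w, y_min + h) is one slice.
    """
    return img[y_min:y_min + h, x_min:x_min + w]

# toy 2D "CT slice"
slice_img = np.arange(100).reshape(10, 10)
z = crop_center_region(slice_img, x_min=2, y_min=3, w=4, h=5)
print(z.shape)  # (5, 4)
```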
Further, in step 3 the adaptive three-dimensional block-matching sparse-residual algorithm removes the noise in the target central image Z as follows:
First, the target central image Z is initialized with BM3D to obtain a denoised initial image y_find that closely approximates the denoised clean image Y, and the reference group sparse code B is obtained from it;
then the adaptive group sparse residual algorithm, an improvement based on group sparse coding, is applied to the target central image Z to obtain the denoised clean image Y.
Further, the process of obtaining the denoised clean image Y by applying the adaptive group sparse residual algorithm to the target central image Z is as follows:
(1) From the target central image Z, k image blocks of size N × N are selected and represented as vectors z_k. For each image block z_k, an adaptive block-search algorithm matches a group of highly similar image blocks, where SSIM denotes structural similarity, y_find denotes the denoised initial image after BM3D processing, and y^(t) denotes the image denoised at the t-th iteration. With p defined as a small constant, if the structural-similarity criterion between y^(t) and y_find is satisfied to within p, the indices of each group of m similar blocks are obtained from the current iterate y^(t) as the target image; otherwise they are obtained from y_find.
The m similar blocks matched to z_k are written in matrix form and denoted Z_G_k, called the group similarity block, where Z_G_k ∈ R^(N×C) indicates that each image block z_k has dimension N × N and the group contains C similar blocks. The same block matching is applied to the BM3D-denoised initial image y_find, giving the group similarity blocks Y_G_k.
(2) For each group of similar blocks Z_G_k, the sparse coefficients are calculated with the adaptive group sparse residual algorithm and the denoised clean image Y is then computed, specifically:
First, the method is based on the group sparse coding model
B̂_k = argmin_{B_k} ‖Z_G_k − D_k B_k‖_F² + λ_k ‖B_k‖_1
where Z_G_k is the k-th group of similar blocks of the noisy image, D_k is the dictionary of the noisy image group, B_k are the sparse coefficients of the noisy image group, ‖·‖_F is the Frobenius norm, ‖·‖_1 measures the sparsity of B_k, and λ_k is the regularization parameter of the k-th group.
Secondly, to obtain better denoising quality, the adaptive group sparse residual model is established to denoise the image:
B̂_k = argmin_{B_k} ‖Z_G_k − D_k B_k‖_F² + λ_k ‖B_k − A_k‖_1
where A_k denotes the sparse coefficients of the clean image group.
Then the unknowns D_k, B_k and λ_k in the model are determined in turn.
Determination of the noisy-image-group sparse coefficients B_k: the noisy-image sparse coefficients are updated iteratively.
Determination of the clean-image-group sparse coefficients A_k: the real-image sparse coefficients are computed from the groups of y_find.
Determination of the dictionary D_k: the covariance matrix of the group matrix is computed and decomposed by SVD as U_k Σ_k V_k^T, where U_k and V_k are both unitary matrices and Σ_k is zero everywhere except on its main diagonal, whose elements are called singular values; the matrix of singular vectors so obtained is the dictionary sought.
Determination of the regularization parameter λ_k: λ_{k,i} = c σ_n² / (δ_i + ε), where c and ε are small constants, δ_i is the variance of the estimated residual B_k − A_k, and σ_n² is the variance of the noise.
Finally, the group estimate X_G_k = D_k B̂_k is calculated; after B̂_k has been obtained for every group, the estimates X_G_k are aggregated to obtain the denoised clean image Y.
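A compact sketch of one group update under the sparse-residual model above. It assumes, as is common for SVD-derived group dictionaries, that the l1-regularized problem reduces to soft-thresholding the singular-value coefficients around the clean-group coefficients A_k; all names are illustrative, not from the patent.

```python
import numpy as np

def denoise_group(z_group: np.ndarray, a_coeffs: np.ndarray, lam: float) -> np.ndarray:
    """One group update of the adaptive group sparse residual model.

    z_group : (N, C) matrix of C stacked similar blocks (noisy group Z_Gk)
    a_coeffs: clean-group sparse coefficients A_k (estimated from y_find)
    lam     : regularization parameter lambda_k
    Returns the denoised group X_Gk = D_k @ B_k.
    """
    # SVD-derived dictionary: Z_Gk = U S V^T; coefficients live on the singular values
    u, s, vt = np.linalg.svd(z_group, full_matrices=False)
    # sparse residual: soft-threshold the noisy coefficients around A_k
    r = s - a_coeffs
    b = a_coeffs + np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return (u * b) @ vt
```

With lam = 0 the update returns the group unchanged; with a very large lam and A_k = 0 it collapses the group to zero, which is the expected limiting behaviour of the shrinkage.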
Further, the procedure of step 4 is as follows:
First, the denoised clean images Y obtained are converted into a stacked three-dimensional microscopic image, which is split into 128 × 128 training-set patches, i.e. real images;
Then an initial random noise vector Z of dimension 100 is set and input to the generator; the microscopic image generated by the generator is the fake image G(Z). Real and fake images are input to the discriminator; with the generator fixed, the discriminator is trained to discriminate accurately, judging the real image as D(Y) = 1 and the fake image as D(G(Z)) = 0. The probability the discriminator assigns to the fake image is fed back to the generator; with the discriminator fixed, the generator is trained until the discriminator's output for a generated image approaches 1.
The cost function of the deep convolutional generative adversarial network is defined as:
min_G max_D V(D, G) = E_{Y~P_data(Y)}[log D(Y)] + E_{Z~P_z(Z)}[log(1 − D(G(Z)))]
where E(·) denotes the expected value over the given distribution, P_data(Y) the distribution of real samples, P_z(Z) the distribution of the noise samples, Z the random noise vector, G(Z) the image generated by the generator, D(Y) the discriminator's judgment of a real image (the closer the result to 1, the better), and D(G(Z)) the discriminator's judgment of a generated image (ideally as close to 0 as possible).
First, the generator is fixed and the discriminator is trained to maximize:
max_D V(D, G) = E_{Y~P_data(Y)}[log D(Y)] + E_{Z~P_z(Z)}[log(1 − D(G(Z)))]
Then, the discriminator is fixed and the generator is trained to minimize:
min_G V(D, G) = E_{Z~P_z(Z)}[log(1 − D(G(Z)))]
During this process the learning rate is set to 0.0002, the number of iterations to 1000, the batch size to 128 and the momentum to 0.5, and the three-dimensional microstructure data of the target rice seedling plant are obtained as the reconstruction data.
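The minimax objective above can be checked numerically. The sketch below (names are illustrative) computes the two value-function terms for one batch from the discriminator's outputs — the quantity a DCGAN training loop would alternately maximize (discriminator) and minimize (generator).

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Value-function terms of the GAN objective for one batch.

    d_real: discriminator outputs D(Y) on real images, values in (0, 1)
    d_fake: discriminator outputs D(G(Z)) on generated images, in (0, 1)
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    # discriminator maximizes E[log D(Y)] + E[log(1 - D(G(Z)))]
    v_d = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    # generator minimizes E[log(1 - D(G(Z)))]
    v_g = np.mean(np.log(1.0 - d_fake + eps))
    return v_d, v_g

# a well-trained discriminator (D(Y) -> 1, D(G(Z)) -> 0) drives v_d toward 0 from below
v_d, v_g = gan_losses([0.99, 0.98], [0.02, 0.01])
```

An uninformed discriminator that outputs 0.5 everywhere attains V = 2·log(0.5) ≈ −1.386, strictly worse than the confident one above, matching the "closer to 1 / closer to 0" reading in the text.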
Further, in step 5 the generated reconstruction data are processed as follows: after training, the generator writes an HDF5 transition file to the specified output directory; a post-processing program first converts the generated images to TIFF files, the TIFF files are then visualized in software, and finally the three-dimensional structure model of the target rice seedling is generated.
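The HDF5-to-TIFF conversion mostly amounts to rescaling the generated volume to 8-bit slices before an I/O library (e.g. h5py/tifffile) writes them out. The rescaling itself can be sketched as follows; all names are illustrative assumptions, not the patent's post-processing program.

```python
import numpy as np

def to_uint8_slices(volume: np.ndarray) -> np.ndarray:
    """Rescale a float volume (e.g. read from the generator's HDF5 file)
    to uint8 so that each z-slice can be written as one TIFF page."""
    vmin, vmax = float(volume.min()), float(volume.max())
    scale = 255.0 / (vmax - vmin) if vmax > vmin else 0.0
    return np.round((volume - vmin) * scale).astype(np.uint8)

vol = np.random.rand(4, 8, 8).astype(np.float32)  # toy generated volume
u8 = to_uint8_slices(vol)
```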
Compared with the prior art, the invention has the following advantages:
(1) To make the internal microstructure in the CT two-dimensional microscopic images more accurate, an adaptive three-dimensional block-matching sparse-residual algorithm is proposed for denoising, which sharpens the microstructure and increases the fidelity of the reconstruction.
(2) Three-dimensional reconstruction with a deep convolutional generative adversarial network yields a model closer to the physical plant, makes the internal structure convenient to observe, and is faster and simpler.
Drawings
FIG. 1 is a block diagram of the system flow of the present invention.
Fig. 2 is a schematic block diagram of the BM3D algorithm of the present invention.
Fig. 3 is a schematic diagram of the GSR algorithm of the present invention.
FIG. 4 is a schematic diagram of a three-dimensional reconstruction algorithm of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in FIG. 1, the method for reconstructing the three-dimensional structure model of the rice seedling plant comprises the following steps:
step 1, acquiring a CT scanning two-dimensional microscopic image of a target rice seedling plant;
and (5) cultivating rice seedling plants. Selecting rice seedlings of required variety and days, fixing the rice seedling plants in a closed container made of a material with low X-ray absorption rate, and adding water for moisturizing. The closed container is placed on a rotatable object stage, an X-ray source emits X-rays to penetrate through an object, a series of projection images with different light and shade are formed on a detector, then the internal three-dimensional structure of the object is reconstructed through a back projection fault, then data are transmitted into a computer, the object stage is moved up and down, then the steps are repeated, new fault images are obtained, the obtained rice seedling plant images of different faults are spliced, and therefore the images which are high enough in the longitudinal axis direction are obtained.
Step 2, cropping the CT two-dimensional microscopic image of the rice seedling plant obtained in step 1 to obtain the target central image Z, specifically as follows:
Cropping the CT image retains the main information of the target central image, so that the required two-dimensional microscopic image of the rice seedling plant occupies most of the central area; reducing the blank area makes the subsequent denoising easier and lowers interference. The specific way each target is cropped is not limited; here a cropping function in OpenCV is used. First the pixel coordinates (x_min, y_min) of the upper-left corner of the target central image Z are determined, then the width and height (w, h) it occupies, and the region img(x_min, y_min, x_min + w, y_min + h) is cropped from the CT two-dimensional microscopic image to obtain the target central image Z. The cropped image Z is stored in a folder to facilitate the denoising of subsequent images.
Step 3, removing the noise in the target central image Z with the image denoising unit to obtain the denoised clean image Y. The unit operates as follows:
The target central image Z is initialized with BM3D to obtain a denoised initial image y_find that closely approximates the denoised clean image Y, from which the reference group sparse code B is obtained; as shown in fig. 2, the specific operations are:
From the target central image Z, image blocks Z_x of fixed size N_ht × N_ht are selected, where x is the coordinate of the upper-left corner of the block; the block currently being processed is the reference block Z_{x_R}. Inter-block similarity is obtained by applying a normalized 2D linear transform to each block and hard-thresholding the resulting coefficients:
d(Z_{x_R}, Z_x) = ‖γ′(T_2D^ht(Z_{x_R})) − γ′(T_2D^ht(Z_x))‖_2² / (N_ht)²
where Z_x is the image block located at x in the image Z, γ′ denotes hard-threshold filtering of a two-dimensional image block with filter factor γ_2D, and ‖·‖_2 is the L2 norm. The block-matching result is
S_{x_R}^ht = { x : d(Z_{x_R}, Z_x) ≤ τ_match^ht }
where τ_match^ht is the maximum d-distance at which two image blocks are considered similar and S_{x_R}^ht is the set of all blocks similar to the processing block Z_{x_R}.
All similar blocks in S_{x_R}^ht are sorted from highest to lowest similarity and stacked into a three-dimensional matrix Z_{S_{x_R}^ht} of size N_ht × N_ht × |S_{x_R}^ht|, where |S_{x_R}^ht| is the number of similar blocks it contains.
Noise is attenuated efficiently by hard-thresholding the 3D transform coefficients and inverting the transform, producing a 3D array of block estimates:
Ŷ_{S_{x_R}^ht} = T_3D^(−1)( γ( T_3D( Z_{S_{x_R}^ht} ) ) )
where γ denotes collaborative hard-threshold filtering with filter factor γ_3D and T_3D^(−1) is the inverse of the 3D transform T_3D. The array Ŷ_{S_{x_R}^ht} contains the stacked block estimates Ŷ_{x_m}^{x_R}; the subscript x_m indicates the position of the estimated block and the superscript x_R indicates the reference block.
Collaborative hard-threshold filtering yields several estimates of the same reference block, so each pixel receives multiple estimates; computing their weighted average is called aggregation, and aggregation yields the basic-estimate image:
y^basic(x) = Σ_{x_R} Σ_{x_m ∈ S_{x_R}^ht} w_{x_R}^ht Ŷ_{x_m}^{x_R}(x) / Σ_{x_R} Σ_{x_m ∈ S_{x_R}^ht} w_{x_R}^ht χ_{x_m}(x)
where χ_{x_m} is the characteristic (indicator) function of the similar block x_m, Ŷ_{x_m}^{x_R} is one block estimate of the reference block, and the weight of each similar block is
w_{x_R}^ht = 1 / (σ² N_har) if N_har ≥ 1, otherwise 1
where σ denotes the noise standard deviation and N_har is the number of non-zero coefficients remaining after collaborative hard-threshold filtering of the three-dimensional matrix.
The preliminary estimate ŷ^basic of the real image obtained in the steps above is then grouped again and collaboratively Wiener-filtered to improve the noise-reduction effect.
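The basic-estimate stage described above (3D transform, hard threshold, inverse transform) can be sketched with an orthonormal FFT standing in for the patent's unspecified 3D transform T_3D — an assumption for illustration only; BM3D implementations typically use DCT/Haar compositions.

```python
import numpy as np

def collaborative_hard_threshold(group_3d: np.ndarray, lam: float) -> np.ndarray:
    """Denoise a stack of similar blocks by hard-thresholding the 3D
    transform coefficients, as in the BM3D basic estimate.

    group_3d: (C, N, N) stack of C similar blocks
    lam     : hard threshold (gamma_3D * sigma in BM3D terms)
    """
    coeffs = np.fft.fftn(group_3d, norm="ortho")     # 3D transform
    coeffs[np.abs(coeffs) < lam] = 0                 # collaborative hard threshold
    return np.fft.ifftn(coeffs, norm="ortho").real   # inverse transform

rng = np.random.default_rng(0)
clean = np.ones((8, 4, 4))                   # 8 identical "similar blocks"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
est = collaborative_hard_threshold(noisy, lam=0.5)
```

Because the eight blocks are highly correlated, almost all of their energy concentrates in a few 3D coefficients, so thresholding removes mostly noise.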
Since the noise in the basic-estimate image is already markedly attenuated, the inter-block similarity for the final estimate can be computed with the ideal L2 norm; the block-matching coordinate set is
S_{x_R}^wie = { x : ‖Ŷ_{x_R}^basic − Ŷ_x^basic‖_2² / (N_wie)² ≤ τ_match^wie }
where the image block size is N_wie × N_wie and τ_match^wie is the maximum distance between two similar blocks.
From the coordinate set S_{x_R}^wie two groups are formed, one from the preliminary estimate and one from the noisy observation: Ŷ^basic_{S_{x_R}^wie} is stacked from the preliminary-estimate blocks Ŷ_x^basic, and Z_{S_{x_R}^wie} is stacked from the noisy blocks Z_x.
The empirical Wiener shrinkage coefficients are defined from the energy of the 3D transform coefficients of the basic-estimate group:
W_{S_{x_R}^wie} = |T_3D(Ŷ^basic_{S_{x_R}^wie})|² / ( |T_3D(Ŷ^basic_{S_{x_R}^wie})|² + σ² )
Collaborative Wiener filtering is realized as element-by-element multiplication of the 3D transform coefficients of the noisy data with the shrinkage coefficients W_{S_{x_R}^wie}, followed by the inverse transform, producing the 3D array of block estimates
Ŷ^wie_{S_{x_R}^wie} = T_3D^(−1)( W_{S_{x_R}^wie} · T_3D( Z_{S_{x_R}^wie} ) )
which contains the block estimates located at the matched positions x ∈ S_{x_R}^wie. The different estimates of the same reference block are weight-averaged to obtain the final estimate:
y^final(x) = Σ_{x_R} Σ_{x_m ∈ S_{x_R}^wie} w_{x_R}^wie Ŷ_{x_m}^{wie,x_R}(x) / Σ_{x_R} Σ_{x_m ∈ S_{x_R}^wie} w_{x_R}^wie χ_{x_m}(x)
where the weight of each similar block is
w_{x_R}^wie = σ^(−2) ‖W_{S_{x_R}^wie}‖_2^(−2)
Since BM3D has ideal denoising performance, y_find can be regarded as a good approximation of the original image Z, and the reference group sparse code B is therefore obtained from y_find.
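The collaborative Wiener step can likewise be sketched with an orthonormal FFT as a stand-in 3D transform (again an assumption for illustration; names are hypothetical):

```python
import numpy as np

def collaborative_wiener(noisy_group: np.ndarray, basic_group: np.ndarray, sigma: float):
    """Empirical Wiener filtering of a stack of similar blocks.

    noisy_group: (C, N, N) blocks grouped from the noisy image Z
    basic_group: (C, N, N) matching blocks from the basic estimate
    sigma      : noise standard deviation
    Returns (denoised group, shrinkage coefficients W).
    """
    t_basic = np.fft.fftn(basic_group, norm="ortho")
    t_noisy = np.fft.fftn(noisy_group, norm="ortho")
    # shrinkage coefficients W = |T(basic)|^2 / (|T(basic)|^2 + sigma^2)
    w = np.abs(t_basic) ** 2 / (np.abs(t_basic) ** 2 + sigma ** 2)
    return np.fft.ifftn(w * t_noisy, norm="ortho").real, w

# W lies in [0, 1): strong (signal) coefficients pass, weak (noise) ones are shrunk
est, w = collaborative_wiener(np.random.rand(4, 4, 4), np.ones((4, 4, 4)), sigma=0.1)
```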
Obtaining a denoised clean image Y by using a self-adaptive group sparse residual algorithm for the target central image Z, as shown in fig. 3, specifically operating as follows:
selecting k vectors Z for image blocks with the size of NxN for a target central image ZKRepresents; for each image block zKMatching a set of image blocks with greater similarity by using adaptive block search algorithm
Figure BDA0003258629050000087
Figure BDA0003258629050000088
Wherein SSIM denotes structural similarity, yfindRepresenting the denoised initial image after BM3D processing,
Figure BDA0003258629050000089
representing the t-th iteration denoised image.
Define p as a small constant if
Figure BDA00032586290500000810
Then
Figure BDA00032586290500000811
Obtaining the index of each group of m similar blocks as the target image, otherwise
Figure BDA00032586290500000812
The index of each set of m similar blocks is acquired as the target image.
Will be provided with
Figure BDA00032586290500000813
Written in matrix form, denoted
Figure BDA00032586290500000814
Then
Figure BDA00032586290500000815
Referred to as group similarity blocks, RN×CTo represent
Figure BDA00032586290500000816
Middle image block zKDimension is NXN, and there are C similar blocks; the denoised initial image y processed by BM3D is processed by the same methodfindCarrying out block matching group to obtain similar blocks
Figure BDA00032586290500000817
For each group of similar blocks
Figure BDA00032586290500000818
Calculating a sparse coefficient by using a self-adaptive group sparse residual algorithm, and then calculating to obtain a de-noised clean image Y, wherein the method specifically comprises the following steps:
first, based on a group sparse coding model:
Figure BDA0003258629050000091
wherein the content of the first and second substances,
Figure BDA0003258629050000092
represented as a set of noisy images similar to a block,
Figure BDA0003258629050000093
a dictionary of sets of noise images is represented,
Figure BDA0003258629050000094
represents the sparse coefficients of the noise image group,
Figure BDA0003258629050000095
represents LFNorm, | · | luminance1To represent
Figure BDA0003258629050000096
Is measured by the sparsity of the network,
Figure BDA0003258629050000097
representing the k-th set of regularization parameters.
Secondly, in order to make the denoising quality better, a self-adaptive group sparse residual error model is established to denoise the image:
Figure BDA0003258629050000098
wherein the content of the first and second substances,
Figure BDA0003258629050000099
representing clean image group sparse coefficients.
Then the unknowns B_k, A_k, D_k and λ_k in the model are determined in turn.

Determination of the noisy image group sparse coefficients B_k: the coefficients B_k are updated by sparse coding of the noisy group Z_k over the dictionary D_k.

Determination of the clean image group sparse coefficients A_k: the (approximately) true image sparse coefficients are computed from the corresponding group X_k of the pre-denoised image y_find.

Determination of the dictionary D_k: the covariance matrix of the group Z_k is computed and the SVD decomposition

    Z_k = U_k Σ_k V_k^T

is performed, where U_k and V_k are both unitary matrices and Σ_k is zero everywhere except on its main diagonal, each diagonal element being a singular value; the matrix U_k so obtained is the dictionary sought.

Determination of the regularization parameter λ_k:

    λ_k = 2√2 · c · σ_n² / (δ_k + ε)

where c and ε are small constants, δ_k is the variance of the estimated residual B_k − A_k, and σ_n² is the variance of the noise.
Finally, the denoised group estimate X̂_k = D_k B̂_k is computed; after this has been repeated for every group Z_k, the estimates X̂_k are aggregated back into the image domain to obtain the denoised clean image Y.
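The per-group procedure above (SVD dictionary, sparse-residual thresholding with the adaptive λ_k, and reconstruction) can be sketched as follows. This is a minimal numpy illustration under assumed forms of the formulas, not the patent's implementation: the constant c, the orthonormal-dictionary shortcut, and the use of the pre-estimate group X_k for A_k are assumptions.

```python
import numpy as np

def adaptive_lambda(delta_k, sigma_n, c=0.3, eps=1e-8):
    # assumed form: lam_k = 2*sqrt(2) * c * sigma_n^2 / (delta_k + eps)
    return 2.0 * np.sqrt(2.0) * c * sigma_n**2 / (delta_k + eps)

def denoise_group(Z_k, X_k, sigma_n):
    """Denoise one similar-block group.

    Z_k : noisy group (n x c); X_k : pre-estimate (e.g. BM3D) of the same group.
    """
    # adaptive dictionary from the SVD of the noisy group
    U, s, Vt = np.linalg.svd(Z_k, full_matrices=False)
    D_k = U                       # left singular vectors used as the dictionary
    B_k = D_k.T @ Z_k             # noisy group coefficients
    A_k = D_k.T @ X_k             # clean-estimate coefficients
    R_k = B_k - A_k               # group sparse residual
    delta_k = np.var(R_k)         # estimated residual variance
    lam = adaptive_lambda(delta_k, sigma_n)
    # soft-threshold the residual, then reconstruct the clean group
    R_hat = np.sign(R_k) * np.maximum(np.abs(R_k) - lam, 0.0)
    return D_k @ (A_k + R_hat)

rng = np.random.default_rng(0)
clean = rng.standard_normal((8, 5))
noisy = clean + 0.1 * rng.standard_normal((8, 5))
out = denoise_group(noisy, clean, sigma_n=0.1)
```

Because the thresholding only shrinks the residual coefficients, the reconstructed group is never farther from the clean group than the noisy one was.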
Step 4: three-dimensional model reconstruction is performed on the clean image with a deep convolutional generative adversarial network (DCGAN), as shown in fig. 4; the specific operations are as follows:
First, the obtained denoised clean image Y is converted into a stacked three-dimensional microscopic image, which is segmented into 128 × 128 training-set data, i.e. the real images.
Then an initial random noise vector Z of dimension 100 is set and input into the generator; the microscopic image produced by the generator is the unreal (fake) image G(Z). The real and fake images are input into the discriminator; with the generator fixed, the discriminator is trained to discriminate accurately, so that D(Y) approaches 1 for a real image and D[G(Z)] approaches 0 for a fake image. The probability the discriminator feeds back for the fake image is then input into the generator; with the discriminator fixed, the generator is trained until the discriminator's output for the generated image approaches 1.
The cost function of the deep convolutional generative adversarial network is defined as:

    min_G max_D V(D, G) = E_{Y~P_data(Y)}[log D(Y)] + E_{Z~P_z(Z)}[log(1 − D(G(Z)))]

where E(·) denotes the expectation under the corresponding distribution, P_data(Y) is the distribution of real samples, P_z(Z) is the distribution of fake samples (the noise input), Z is the random noise vector, G(Z) is the image produced by the generator, and D(Y) is the discriminator's judgment of a real image, which should be as close to 1 as possible; D[G(Z)] is the discriminator's judgment of a generated image, which should be as close to 0 as possible.
First, with the generator fixed, the discriminator is trained to maximize:

    max_D V(D) = E_{Y~P_data(Y)}[log D(Y)] + E_{Z~P_z(Z)}[log(1 − D(G(Z)))]
Then, with the discriminator fixed, the generator is trained to minimize:

    min_G V(G) = E_{Z~P_z(Z)}[log(1 − D(G(Z)))]
In this process the learning rate is set to 0.0002, the number of iterations to 1000, the batch size to 128, and the momentum to 0.5; the three-dimensional microstructure data of the target rice seedling plant are thereby obtained as the reconstruction data.
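The minimax objective above can be sanity-checked numerically. The sketch below (an illustration, not the patent's training code) evaluates V(D, G) from discriminator logits passed through a logistic output; all names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_value(d_logits_real, d_logits_fake):
    """V(D, G) = E[log D(Y)] + E[log(1 - D(G(Z)))] over mini-batches of logits."""
    d_real = sigmoid(d_logits_real)   # D(Y): the discriminator wants this -> 1
    d_fake = sigmoid(d_logits_fake)   # D(G(Z)): the discriminator wants this -> 0
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# a confident, correct discriminator keeps V near its maximum (close to 0);
# a generator that fools the discriminator drives V strongly negative
v_good_D = gan_value(np.full(4, 6.0), np.full(4, -6.0))
v_fooled = gan_value(np.full(4, 6.0), np.full(4, 6.0))
```

The discriminator's training step increases this value; the generator's training step decreases it, which is exactly the alternating fixed-network scheme described above.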
Step 5: the generated reconstruction data are processed as follows. After the generation training has finished, the generator writes an hdf5 transition file under the specified output directory. First a post-processing program converts the generated images into tiff format; the tiff files are then visualized in software, and finally the three-dimensional structure model of the target rice seedling is generated:
The stored file is converted into image format with Python. All images are first imported into ImageJ; Plugins → 3D Viewer is then clicked to visualize the images in three dimensions, and File → Save View is clicked to save the current view.
The embodiments described above are only preferred embodiments of the present invention and do not limit its concept or scope. Various modifications and improvements made to the technical solution of the present invention by those skilled in the art without departing from its design concept shall fall within its protection scope; the claimed technical content of the present invention is fully set forth in the claims.

Claims (6)

1. A method for reconstructing a three-dimensional structure model of a rice seedling plant, characterized by comprising the following steps:
step 1, acquiring a CT-scan two-dimensional microscopic image of a target rice seedling plant;
step 2, cropping the CT-scan two-dimensional microscopic image of the rice seedling plant obtained in step 1 to obtain a target central image Z;
step 3, removing noise points from the target central image Z obtained in step 2 to obtain a denoised clean image Y;
step 4, performing three-dimensional model reconstruction on the denoised clean image Y obtained in step 3 with a deep convolutional generative adversarial network to obtain reconstruction data;
step 5, processing the reconstruction data obtained in step 4 to generate a three-dimensional structure model of the target rice seedling plant.
2. The method for reconstructing a three-dimensional structure model of a rice seedling plant according to claim 1, wherein in step 2 the CT-scan two-dimensional microscopic image of the rice seedling plant is cropped with the cropping function in OpenCV to obtain the target central image Z: first the pixel coordinates (x_min, y_min) of the upper-left corner of the target central image Z are determined; then the width and height (w, h) occupied by the target central image Z are determined; and the crop region img(x_min, y_min, (x_min + w), (y_min + h)) in OpenCV is applied to the CT-scan two-dimensional microscopic image of the rice seedling plant to obtain the target central image Z.
3. The method for reconstructing a three-dimensional structure model of a rice seedling plant according to claim 1, wherein in step 3 an adaptive three-dimensional block-matching sparse residual algorithm removes the noise points in the target central image Z, as follows:
first, the target central image Z is initialized with BM3D to obtain a denoised initial image y_find that closely approximates the denoised clean image Y, and the real group sparse code B is obtained;
then, an adaptive group sparse residual algorithm, improved on the basis of group sparse coding, is applied to the target central image Z to obtain the denoised clean image Y.
4. The method for reconstructing a three-dimensional structure model of a rice seedling plant according to claim 3, wherein the process of obtaining the denoised clean image Y from the target central image Z with the adaptive group sparse residual algorithm is as follows:
(1) For the target central image Z, image blocks of size N×N are selected and represented by k vectors z_k. For each image block z_k, a group of image blocks with high similarity is matched with the adaptive block search algorithm: the structural similarity SSIM between the BM3D-denoised initial image y_find and the t-th-iteration denoised image x^(t) is computed; with ρ defined as a small constant, depending on whether the SSIM criterion with respect to ρ is satisfied, either y_find or x^(t) is taken as the target image from which the index of each group of m similar blocks is obtained.
The matched blocks are written in matrix form, denoted Z_k; Z_k is then called a group of similar blocks, where R^{N×C} indicates that each image block z_k in Z_k has dimension N×N and there are C similar blocks. The same block-matching grouping is applied to the BM3D-denoised initial image y_find to obtain the corresponding groups of similar blocks X_k.
(2) For each group of similar blocks Z_k, the sparse coefficients are computed with the adaptive group sparse residual algorithm, and the denoised clean image Y is then obtained, specifically:
first, a group-based sparse coding model is established:

    \hat{B}_k = \arg\min_{B_k} \tfrac{1}{2} \| Z_k - D_k B_k \|_F^2 + \lambda_k \| B_k \|_1

where Z_k denotes a group of similar blocks of the noisy image, D_k denotes the dictionary of the noisy image group, B_k denotes the sparse coefficients of the noisy image group, ||·||_F denotes the Frobenius norm, ||·||_1 measures the sparsity of B_k, and λ_k denotes the regularization parameter of the k-th group;
secondly, to improve the denoising quality, an adaptive group sparse residual model is established to denoise the image:

    \hat{B}_k = \arg\min_{B_k} \tfrac{1}{2} \| Z_k - D_k B_k \|_F^2 + \lambda_k \| B_k - A_k \|_1

where A_k denotes the sparse coefficients of the clean image group;
then the unknowns B_k, A_k, D_k and λ_k in the model are determined in turn:

determination of the noisy image group sparse coefficients B_k: the coefficients B_k are updated by sparse coding of the noisy group Z_k over the dictionary D_k;

determination of the clean image group sparse coefficients A_k: the (approximately) true image sparse coefficients are computed from the corresponding group X_k of the pre-denoised image y_find;

determination of the dictionary D_k: the covariance matrix of the group Z_k is computed and the SVD decomposition

    Z_k = U_k Σ_k V_k^T

is performed, where U_k and V_k are both unitary matrices and Σ_k is zero everywhere except on its main diagonal, each diagonal element being a singular value; the matrix U_k so obtained is the dictionary sought;

determination of the regularization parameter λ_k:

    λ_k = 2√2 · c · σ_n² / (δ_k + ε)

where c and ε are small constants, δ_k is the variance of the estimated residual B_k − A_k, and σ_n² is the variance of the noise;
finally, the denoised group estimate X̂_k = D_k B̂_k is computed; after this has been repeated for every group Z_k, the estimates X̂_k are aggregated back into the image domain to obtain the denoised clean image Y.
5. The method for reconstructing a three-dimensional structure model of a rice seedling plant according to claim 1, wherein the process of step 4 is as follows:
first, the obtained denoised clean image Y is converted into a stacked three-dimensional microscopic image, which is segmented into 128 × 128 training-set data, i.e. the real images;
then an initial random noise vector Z of dimension 100 is set and input into the generator; the microscopic image produced by the generator is the unreal (fake) image G(Z); the real and fake images are input into the discriminator; with the generator fixed, the discriminator is trained to discriminate accurately, so that D(Y) approaches 1 for a real image and D[G(Z)] approaches 0 for a fake image; the probability the discriminator feeds back for the fake image is then input into the generator; with the discriminator fixed, the generator is trained until the discriminator's output for the generated image approaches 1;
the cost function of the deep convolutional generative adversarial network is defined as:

    min_G max_D V(D, G) = E_{Y~P_data(Y)}[log D(Y)] + E_{Z~P_z(Z)}[log(1 − D(G(Z)))]

where E(·) denotes the expectation under the corresponding distribution, P_data(Y) is the distribution of real samples, P_z(Z) is the distribution of fake samples (the noise input), Z is the random noise vector, G(Z) is the image produced by the generator, and D(Y) is the discriminator's judgment of a real image, which should be as close to 1 as possible; D[G(Z)] is the discriminator's judgment of a generated image, which should be as close to 0 as possible;
first, with the generator fixed, the discriminator is trained to maximize:

    max_D V(D) = E_{Y~P_data(Y)}[log D(Y)] + E_{Z~P_z(Z)}[log(1 − D(G(Z)))]
then, with the discriminator fixed, the generator is trained to minimize:

    min_G V(G) = E_{Z~P_z(Z)}[log(1 − D(G(Z)))]
in this process the learning rate is set to 0.0002, the number of iterations to 1000, the batch size to 128, and the momentum to 0.5; the three-dimensional microstructure data of the target rice seedling plant are thereby obtained as the reconstruction data.
6. The method for reconstructing a three-dimensional structure model of a rice seedling plant according to claim 1, wherein in step 5 the generated reconstruction data are processed as follows: after the generation training has finished, the generator writes an hdf5 transition file under the specified output directory; first a post-processing program converts the generated images into tiff format, then the tiff files are visualized in software, and finally the three-dimensional structure model of the target rice seedling is generated.
CN202111066567.5A 2021-09-13 2021-09-13 Three-dimensional structure model reconstruction method for rice seedling plants Active CN113781442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111066567.5A CN113781442B (en) 2021-09-13 2021-09-13 Three-dimensional structure model reconstruction method for rice seedling plants

Publications (2)

Publication Number Publication Date
CN113781442A true CN113781442A (en) 2021-12-10
CN113781442B CN113781442B (en) 2024-05-31


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170103161A1 (en) * 2015-10-13 2017-04-13 The Governing Council Of The University Of Toronto Methods and systems for 3d structure estimation
US20180240219A1 (en) * 2017-02-22 2018-08-23 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
CN109801375A (en) * 2019-01-24 2019-05-24 电子科技大学 Porous material three-dimensional reconstruction method based on depth convolution confrontation neural network
CN110223231A (en) * 2019-06-06 2019-09-10 天津工业大学 A kind of rapid super-resolution algorithm for reconstructing of noisy image
CN112967210A (en) * 2021-04-29 2021-06-15 福州大学 Unmanned aerial vehicle image denoising method based on full convolution twin network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Zuhui; SUN Liujie; SHAO Xue: "An improved three-dimensional block-matching image denoising algorithm", Packaging Engineering, no. 21, 15 November 2016 (2016-11-15) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant