CN113643181B - In-situ array type root phenotype monitoring system and working method thereof - Google Patents


Info

Publication number
CN113643181B
CN113643181B (application CN202110911319.XA)
Authority
CN
China
Prior art keywords
image
root system
root
pixel
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110911319.XA
Other languages
Chinese (zh)
Other versions
CN113643181A (en)
Inventor
孙国祥
魏佳音
朱鼎龙
周新竹
刘锦琳
王雪忠
蔡嘉奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Agricultural University
Original Assignee
Nanjing Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Agricultural University filed Critical Nanjing Agricultural University
Priority to CN202110911319.XA
Publication of CN113643181A
Application granted
Publication of CN113643181B
Legal status: Active

Classifications

    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images (G PHYSICS › G06 COMPUTING › G06T IMAGE DATA PROCESSING OR GENERATION › G06T 3/00 Geometric image transformation › G06T 3/40 Scaling)
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction; also G06T 5/70 and G06T 5/80
    • G06T 7/13: Edge detection (G06T 7/00 Image analysis › G06T 7/10 Segmentation; Edge detection)
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/187: Segmentation involving region growing; region merging; connected component labelling
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/20164: Salient point detection; corner detection (indexing scheme G06T 2207/00 › G06T 2207/20 Special algorithmic details › G06T 2207/20112 Image segmentation details)
    • G06T 2207/20221: Image fusion; image merging (G06T 2207/20212 Image combination)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an in-situ array type root phenotype monitoring system and a working method thereof. The system comprises: an image wireless transmission gateway, a push switch, a charging hole, an LED lighting system, wireless camera modules, a regulated power supply module and a shell. The working method comprises the following steps: S1: root image acquisition; S2: root image distortion correction; S3: root image preprocessing; S4: automatic root image stitching; S5: root image segmentation; S6: root phenotype parameter calculation. The invention has the following advantages: a multi-camera array produces images and measurements more accurate than a single camera, at a cost markedly lower than that of large instruments such as CT or MRI; sectional images of the root system over a large area are acquired in situ, making the system suitable for periodic in-situ acquisition; and automatic stitching and automatic root phenotype extraction greatly reduce labor and time costs.

Description

In-situ array type root phenotype monitoring system and working method thereof
Technical Field
The invention relates to the technical field of root phenotype in-situ monitoring, in particular to an in-situ array type root phenotype monitoring system based on the Internet of things technology and a working method thereof.
Background
The root system is an important component of crops with key functions, such as absorbing and transporting water and nutrients, storing organic matter, anchoring the plant, root symbiosis, producing plant hormones, and sensing changes in the soil environment; it is an important factor for comprehensively analyzing the relationships among crop genotype, environment and phenotype, as well as an important basis for precise crop cultivation and management. Traditional root phenotype measurement requires large amounts of manpower and material resources, and the equipment is complex to operate. Existing methods also have significant shortcomings: some easily damage the root structure and cannot monitor continuously, while most minirhizotron imaging systems are suitable only for phenotypic monitoring of small crops and mostly do not measure in natural soil, causing errors in actual production. To realize continuous observation of crop root phenotype information in opaque root environments such as soil or substrate cultivation, an in-situ array type crop root phenotype monitoring system is developed under the framework of the Internet of Things, supported by machine learning and image processing technologies. Continuous observation of crop root phenotypic characteristic parameters can provide data support and theoretical guidance for crop growth modeling, crop phenotype analysis and precise cultivation management, and benefits the sustainable development of modern precision agriculture.
Supported by an in-situ array type root phenotype monitoring system based on Internet of Things technology, the crop minirhizotron morphological method can link multiple disciplines such as agriculture, mathematics, botany, life science and computing, and through interdisciplinary cooperation can better serve modern agricultural production practice.
Disclosure of Invention
The invention provides an in-situ array type root phenotype monitoring system and a working method thereof, aiming at the defects of the prior art.
In order to realize the above purpose, the technical scheme adopted by the invention is as follows:
An in-situ array type root phenotype monitoring system, comprising: an image wireless transmission gateway, a push switch, a charging hole, an LED lighting system, wireless camera modules, a regulated power supply module and a shell.
The image wireless transmission gateway is connected to the local area network through Wifi, and image wireless transmission is achieved.
The push switch: controls the on/off of the circuit of the in-situ array type root phenotype monitoring system.
The charging hole: connects the battery board to an external power supply.
The LED lighting system: supplementary lighting for the root-window imaging system.
The wireless camera module: obtains root system images and transmits them through the image wireless transmission gateway.
The regulated power supply module comprises: a storage battery, which powers the in-situ array type root phenotype monitoring system, and a step-down board, which keeps the system within its working voltage.
The image wireless transmission gateway is arranged at the top of the shell; the push switch, the charging hole and the LED lighting system are arranged on the side face of the shell; the wireless camera module is arranged on the front surface of the shell; and the regulated power supply module is arranged inside the bottom of the shell.
The output port of the storage battery is connected in series with the push switch, the image wireless transmission gateway and the step-down board in sequence; the step-down board is connected in parallel with, and powers, the wireless camera module and the LED lighting system.
The invention also discloses a working method of the in-situ array type root phenotype monitoring system, which comprises the following steps:
s1: root system image acquisition
ESP32-CAM modules serve as the shooting part of the root imaging system. A fixed IP address is set for each of the 16 cameras, which are controlled through a Python-based crawler technique; the 16 ESP32-CAM modules are driven to acquire images of each root measurement area, and several groups of rectangular checkerboard calibration board images are captured at different angles and distances.
S2: root image distortion correction
The intrinsic matrix, extrinsic matrix, radial distortion and tangential distortion parameters of each wireless camera are measured by Zhang Zhengyou's calibration method.
The intrinsic matrix is:

    [ f/dX   f/(dX·tanθ)   u0 ]
    [  0     f/(dY·sinθ)   v0 ]
    [  0          0         1 ]

The extrinsic matrix is:

    [ R   T ]
    [ 0   1 ]

where f is the image distance, dX and dY respectively denote the physical length (i.e. corresponding distance) of a single pixel on the sensor plate along X and Y, (u0, v0) are the coordinates of the principal point of the sensor plate in the pixel coordinate system, and θ is the skew angle between the transverse and longitudinal edges of the sensor plate.
R denotes the rotation matrix and T the translation vector.
The conversion relation between image coordinates and pixel coordinates is then used:

    u = x/dX + u0
    v = y/dY + v0
together with the following radial distortion correction formula:

    x_dr = x(1 + k1·r² + k2·r⁴)
    y_dr = y(1 + k1·r² + k2·r⁴)

where r² = x² + y².
In the formula: u, v are the ideal pixel coordinates; x_dr, y_dr, x_dt, y_dt respectively denote the pixel coordinates along the image X and Y axes after radial and tangential distortion; x, y are the ideal undistorted coordinates; k1, k2, k3, p1, p2 are the 5 distortion coefficients; and r is the distance of the point from the imaging center.
Further, the following correction formula is obtained (one pair of equations per calibration point):

    [ (u - u0)·r²   (u - u0)·r⁴ ] [ k1 ]   [ u_d - u ]
    [ (v - v0)·r²   (v - v0)·r⁴ ] [ k2 ] = [ v_d - v ]

The distorted coordinates (u_d, v_d) are obtained by recognizing the calibration board, and the undistorted coordinates (u, v) are obtained by the L-M (Levenberg-Marquardt) algorithm; substituting them into the formula yields the corresponding parameters k1, k2, thereby completing the correction.
The distortion correction parameters of the OV2640 wide-angle cameras are calibrated in sequence, and each camera's correction parameters are later used directly to correct its distorted root images.
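As a sketch of the radial distortion model of S2 and of how k1, k2 can be fitted by linear least squares, consider the following; the function names and sample points are illustrative, not from the patent:

```python
import numpy as np

def apply_radial_distortion(x, y, k1, k2):
    """Radial model of step S2: x_dr = x(1 + k1*r^2 + k2*r^4), r^2 = x^2 + y^2."""
    r2 = x ** 2 + y ** 2
    factor = 1 + k1 * r2 + k2 * r2 ** 2
    return x * factor, y * factor

def estimate_k1_k2(ideal, distorted):
    """Fit k1, k2 from ideal (undistorted) and observed (distorted) point
    pairs: each point gives two linear equations in k1, k2."""
    ideal = np.asarray(ideal, float)
    distorted = np.asarray(distorted, float)
    r2 = (ideal ** 2).sum(axis=1)
    rows, rhs = [], []
    for (x, y), (xd, yd), r2i in zip(ideal, distorted, r2):
        rows.append([x * r2i, x * r2i ** 2]); rhs.append(xd - x)
        rows.append([y * r2i, y * r2i ** 2]); rhs.append(yd - y)
    k, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return k  # [k1, k2]
```

Because the model is linear in k1 and k2, the least-squares fit recovers them exactly from noise-free points; in practice the patent refines them within the L-M optimization.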
S3: root image preprocessing
For the corrected root image, a two-dimensional Gaussian smoothing filter H of size 3 × 3 with σ = 3 is convolved with each input image F, and the value of the template's central pixel is replaced by the weighted average gray value of the pixels in its neighborhood, giving the output image G:

    g(i, j) = Σ_k Σ_l f(i - k, j - l)·h(k, l),    h(i, j) = 1/(2πσ²)·exp(-(i² + j²)/(2σ²))

In the formula: i, j are pixel indices, h(i, j) is the Gaussian smoothing filter, f(i, j) is each pixel of the original image, and g(i, j) is each image pixel after convolution, finally forming the preprocessed image.
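The 3 × 3, σ = 3 weighted-average smoothing of S3 can be sketched in NumPy as follows (border handling by edge replication is an assumption; the patent does not specify it):

```python
import numpy as np

def gaussian_kernel(size=3, sigma=3.0):
    """Sampled 2-D Gaussian, normalized so the weights sum to 1
    (a weighted average, as in step S3)."""
    half = size // 2
    i, j = np.mgrid[-half:half + 1, -half:half + 1]
    h = np.exp(-(i ** 2 + j ** 2) / (2.0 * sigma ** 2))
    return h / h.sum()

def gaussian_smooth(image, size=3, sigma=3.0):
    """Replace each pixel by the kernel-weighted average of its
    neighborhood (borders replicated)."""
    h = gaussian_kernel(size, sigma)
    half = size // 2
    img = np.asarray(image, float)
    padded = np.pad(img, half, mode="edge")
    out = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = (padded[r:r + size, c:c + size] * h).sum()
    return out
```

With σ = 3 the 3 × 3 weights are nearly uniform, so the filter behaves close to a box average while still down-weighting the corners slightly.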
S4: automatic splicing of root system images, the substeps are as follows:
s41: and (5) solving characteristic points of the part to be spliced of the root system image by using a Harris angular point detection algorithm.
S42: An index tree is commonly used to search key data in a multidimensional space, and the fast library for approximate nearest neighbors (FLANN) effectively reduces the computational load.
If the distance ratio is less than a preset threshold, the nearest neighbor is accepted as a good match, completing the initial matching of feature point pairs. A bright spot on a dark background is distinguished from a dark spot on a bright background according to whether each feature point's Laplacian identifier is 1 or 0, and only feature points of the same type are matched.
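The nearest-neighbor distance-ratio test described above can be sketched as follows; this is a brute-force stand-in for the FLANN index, and the descriptor values and helper name are illustrative:

```python
import numpy as np

def ratio_match(desc_a, desc_b, threshold=0.8):
    """Accept a match only when d(nearest)/d(second nearest) < threshold,
    as in step S42; returns (index_in_a, index_in_b) pairs."""
    desc_a = np.asarray(desc_a, float)
    desc_b = np.asarray(desc_b, float)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < threshold * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

An ambiguous descriptor, whose two closest candidates are at similar distances, fails the ratio test and is discarded before RANSAC sees it.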
S43: Using a projective transformation model:

    [x']   [ h11  h12  h13 ] [x]
    [y'] ~ [ h21  h22  h23 ] [y]
    [1 ]   [ h31  h32  h33 ] [1]

the coordinates (x1, y1), (x2, y2), ... (x16, y16) of the multiple images are preliminarily mapped into the same coordinate system (X, Y); the RANSAC algorithm then removes outliers and the transformation matrix between images is computed to complete the final coordinate transformation.
S44: Because imaging is affected by lighting and camera intrinsics, the stitched image shows obvious seams and uneven regional brightness, so an HSI transformation model is used to unify the brightness. The three primary color components (R, G, B) are added and averaged to obtain the intensity component I; the saturation component S is 1 minus the minimum of R, G, B divided by I; and the hue component H is obtained by the inverse cosine transform:

    I = (R + G + B)/3
    S = 1 - min(R, G, B)/I
    H = arccos{ [(R - G) + (R - B)] / [2·√((R - G)² + (R - B)(G - B))] }

In the formula: R, G, B are the three primary color components and H is the hue component.
The intensity component I obtained after transformation is fused with the I component of the reference image by histogram matching; the new fused intensity component replaces the original one, IHS inverse transformation is performed together with the H and S component images, and the fusion result is finally expressed in RGB space.
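A per-pixel sketch of the standard RGB-to-HSI conversion used in this kind of brightness unification (components assumed normalized to [0, 1]; the B > G hue flip and the achromatic fallback are conventional assumptions, not stated in the patent):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """RGB -> (H, S, I): I is the mean of R, G, B; S = 1 - min(R,G,B)/I;
    H by the inverse cosine formula."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                          # achromatic pixel: hue undefined, use 0
        h = 0.0
    else:
        h = float(np.arccos(np.clip(num / den, -1.0, 1.0)))
        if b > g:                         # arccos covers [0, pi]; flip when B > G
            h = 2.0 * np.pi - h
    return h, s, i
```

Only the I channel is then histogram-matched against the reference image; H and S pass through unchanged, which is what keeps the colors stable while the seams are equalized.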
S5: Root image segmentation, the sub-steps being as follows:
S51: The three RGB components of the fused image are added and averaged (mean-value method), completing the graying processing.
S52: The contrast of the root image is enhanced by gamma transformation with γ = 0.8, i.e. the gray value of each pixel of the normalized image is raised to the power 0.8 to obtain the corresponding output image.
S53: An opening operation is performed on the image. Basic principle: with the origin of structuring element B as the reference coordinate, all pixels of the image A to be processed are scanned; B is ANDed with the binary region of A that it covers to form a new image A (erosion), which is then ORed with B (dilation), completing the image opening operation.
S54: The watershed algorithm is used for segmentation, and the gradient image is thresholded to reduce over-segmentation, finally completing contour extraction.
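The gamma step S52 and the opening step S53 can be sketched as follows; a 3 × 3 structuring element and 8-bit gray levels are assumptions for illustration:

```python
import numpy as np

def gamma_transform(image, gamma=0.8):
    """Step S52: normalize to [0, 1] and raise each pixel to the power
    gamma; gamma < 1 brightens mid-tones, enhancing root/background contrast."""
    img = np.asarray(image, float) / 255.0
    return (img ** gamma) * 255.0

def binary_opening(image):
    """Step S53 with a 3x3 structuring element: erosion (AND over the
    covered neighborhood) then dilation (OR), removing small specks."""
    img = np.asarray(image, bool)
    pad = np.pad(img, 1, mode="constant")
    rows, cols = img.shape
    eroded = np.array([[pad[r:r + 3, c:c + 3].all() for c in range(cols)]
                       for r in range(rows)])
    pad2 = np.pad(eroded, 1, mode="constant")
    opened = np.array([[pad2[r:r + 3, c:c + 3].any() for c in range(cols)]
                       for r in range(rows)])
    return opened
```

Opening leaves root segments at least as large as the structuring element intact while deleting isolated noise pixels, which is why it precedes the watershed step.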
S6: Root phenotype parameter calculation, the sub-steps being as follows:
S61: The edge of the crop root system is processed with the Canny edge detection operator to obtain edge pixels.
S62: In the root binary image, the crop root region is denoted by 1 and the background region by 0. The binary image is scanned, and the number of pixels equal to 1 in the root image is counted and summed.
S63: Adding up the pixels equal to 1 gives the total root area S.
S64: The root perimeter is the path length along the edge pixels, accumulated one pixel at a time: a horizontal or vertical step is counted as distance 1, while a diagonal step, under 8-neighborhood connectivity, is counted as √2. The final perimeter L is therefore the number of horizontal and vertical pixel steps on the edge plus √2 times the number of diagonal pixel steps:

    L = N_hv + √2·N_diag

The perimeter is obtained by weighting each pixel on the perimeter accordingly and then accumulating.
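The area count of S62-S63 and the √2-weighted perimeter of S64 can be sketched as follows; representing the boundary as a chain code of (dr, dc) steps is an illustrative assumption:

```python
import numpy as np

ROOT2 = np.sqrt(2.0)

def root_area(binary):
    """Steps S62-S63: the root area S is the count of pixels equal to 1."""
    return int(np.asarray(binary).sum())

def chain_perimeter(moves):
    """Step S64: accumulate the perimeter along an 8-connected boundary
    chain; horizontal/vertical steps count 1, diagonal steps sqrt(2).
    `moves` is a sequence of (dr, dc) steps with dr, dc in {-1, 0, 1}."""
    length = 0.0
    for dr, dc in moves:
        length += ROOT2 if dr != 0 and dc != 0 else 1.0
    return length
```

Without the √2 weighting, a diagonal root segment would report the same perimeter as a horizontal one of equal pixel count, underestimating its true length by about 29 percent.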
Preferably, the threshold value in S42 is set to 0.8.
Compared with the prior art, the invention has the advantages that:
the array system adopting the multiple cameras has the advantages that the formed images and the measurement are more accurate than those of a single camera, and the cost is obviously reduced compared with that of large instruments such as CT or MRI and the like. Meanwhile, the sectional images of the root system in the large area are acquired in situ, so that the method is suitable for periodic in-situ acquisition. In addition, the automatic splicing and automatic root phenotype extraction technology is adopted, and the labor and time cost can be greatly reduced.
Drawings
FIG. 1 is a schematic diagram of an in situ array type root phenotype monitoring system according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of operation of an in situ array root phenotype monitoring system in accordance with an embodiment of the present invention;
FIG. 3 is a diagram of various root measurement zones in accordance with an embodiment of the present invention;
FIG. 4 is a diagram of the root system measurement areas after being merged according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings by way of examples.
As shown in fig. 1, an in situ array type root phenotype monitoring system includes: the image wireless transmission system comprises an image wireless transmission gateway 1, a push type switch 2, a charging hole 3, an LED lighting system 4, a wireless camera module 5, a voltage-stabilized power supply module 6 and a shell 7.
The image wireless transmission gateway 1 is connected to a local area network through Wifi to realize image wireless transmission.
Push switch 2: controls the on/off of the circuit of the root phenotype monitoring system.
Charging hole 3: connects the battery board to the power supply through the charging hole.
LED lighting system 4: supplementary lighting for the root-window imaging system.
Wireless camera module 5: ESP32-CAM modules that obtain the root system images; in the present invention they are arranged as a 4 × 4 array, but the layout is not limited to 4 × 4 and may be M × N according to the actual measurement area.
Regulated power supply module 6 comprises: a storage battery, which powers the in-situ array type root phenotype monitoring system, and a step-down board, which keeps the system within its working voltage.
Shell 7: protects the system so that it works properly in the soil and secures the various elements.
The 16 ESP32-CAM modules have a working voltage of 5 V and a maximum working current of 310 mA, each with an onboard PCB antenna;
the wireless camera module comprises 16 OV2640 120° wide-angle cameras of 2 megapixels each;
the image wireless transmission gateway is an ESP32 development board with a working voltage of 5 V and a maximum working current of 500 mA, with an onboard PCB antenna;
the storage battery supplies a regulated 12 V at up to 5 A, with a capacity of 2800 mAh;
the step-down board has an input voltage of 12 V, an output voltage of 3 V to 10 V, and a maximum output current of 4.5 A.
The storage battery has a 12 V input port and a 12 V output port; charging hole 3 is the battery's input port and connects to a 12 V alternating-current power supply.
The output port of the storage battery is connected in series with the push switch, the image wireless transmission gateway (an ESP32 development board establishes a local area network; the 16 ESP32-CAM modules connect to a computer through this LAN, and the computer controls the ESP32-CAM modules to photograph and upload images) and the step-down board. The step-down board then reduces the supply voltage from 12 V to 5 V and powers, in parallel, the ESP32-CAM modules, the ESP32 development board and the LED lighting system (light strips evenly distributed around the perimeter). Finally the whole system operates.
As shown in fig. 2, the working method of the in-situ array type root phenotype monitoring system includes the following steps:
root system image acquisition
For the hardware part, ESP32-CAM modules are adopted as the shooting part of the root imaging system, and an ESP32 image wireless transmission gateway serves as the local-area-network part of the in-situ array root imaging system. A fixed IP address is set for each of the 16 cameras, which are controlled through a Python-based crawler technique, and the 16 ESP32-CAM modules are driven to acquire images of each root measurement area. Fig. 3 shows the plant root images collected by each of the 16 cameras. The ESP32-CAM modules are woken by a timing trigger or a manual trigger to realize root image acquisition.
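The fixed-IP polling of the 16 cameras can be sketched with the standard library as follows. The subnet, host range and `/capture` endpoint are illustrative assumptions; the patent only states that each ESP32-CAM is assigned a fixed IP and controlled over the LAN:

```python
import urllib.request

def camera_urls(base="192.168.1.", first_host=101, count=16):
    """Build one capture URL per camera in the array (addresses assumed)."""
    return [f"http://{base}{first_host + n}/capture" for n in range(count)]

def collect_images(urls, timeout=10):
    """Fetch one JPEG frame from every camera; a timing trigger or a
    manual trigger would call this function."""
    frames = []
    for url in urls:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            frames.append(resp.read())       # raw JPEG bytes
    return frames
```

In deployment the returned frames would be written to disk with timestamps and camera indices so the stitching step can associate each tile with its position in the 4 × 4 array.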
Root system image distortion correction
Several groups of 9 × 6 rectangular checkerboard calibration boards are photographed at different angles and distances (10-20 cm), and the intrinsic matrix, radial distortion and tangential distortion parameters of each wireless camera are measured by Zhang Zhengyou's calibration method.
The correction formula for radial distortion is as follows:
    x_dr = x(1 + k1·r² + k2·r⁴ + k3·r⁶)
    y_dr = y(1 + k1·r² + k2·r⁴ + k3·r⁶)
the formula for correcting the tangential distortion is as follows:
xdt=2p1xy+p2(r2+2x2)+1
ydt=2p1(r2+2y2)+2p2xy+1
wherein r is2=x2+y2
In the formula: x_dr, y_dr, x_dt, y_dt respectively denote the pixel coordinates along the image X and Y axes after radial and tangential distortion; x, y are the ideal undistorted coordinates; k1, k2, k3, p1, p2 are the 5 distortion coefficients; and r is the distance of the point from the imaging center.
The distortion correction parameters of the 16 OV2640 wide-angle cameras are calibrated in sequence, and each camera's correction parameters are later used directly to correct its distorted root images.
Third, preprocessing the root system image
For the corrected root image, the invention convolves a two-dimensional Gaussian smoothing filter H of size 3 × 3 with σ = 3 with each input image F, and replaces the value of the template's central pixel with the weighted average gray value of the pixels in its neighborhood, giving the output image G:

    g(i, j) = Σ_k Σ_l f(i - k, j - l)·h(k, l),    h(i, j) = 1/(2πσ²)·exp(-(i² + j²)/(2σ²))

In the formula: i, j are pixel indices, h(i, j) is the Gaussian smoothing filter, f(i, j) is each pixel of the original image, and g(i, j) is each image pixel after convolution, finally forming the preprocessed image.
Four, automatic concatenation of root system image
The automatic splicing steps of the plant root system images are as follows:
1) Feature points of the root image regions to be stitched are obtained with the Harris corner detection algorithm.
2) An index tree is commonly used to search key data in a multidimensional space, and the fast library for approximate nearest neighbors (FLANN) effectively reduces the computational load.
If the distance ratio is less than a preset threshold, the nearest neighbor is accepted as a good match, completing the initial matching of feature point pairs; experimental verification shows that matching accuracy is higher when the threshold is set to 0.8. A bright spot on a dark background is distinguished from a dark spot on a bright background according to whether each feature point's Laplacian identifier is 1 or 0, and only feature points of the same type are matched.
3) The invention uses a projective transformation model to preliminarily map the coordinates (x1, y1), (x2, y2), ... (x16, y16) of the multiple images into the same coordinate system (x0, y0); the RANSAC algorithm then removes outliers and computes the transformation matrix between images (matching pairs are randomly sampled, and the transformation matrix for which the largest number of matching point pairs satisfies the epipolar constraint is taken as the optimal result, further removing mismatches), completing the final coordinate transformation.
4) Because imaging is affected by lighting and camera intrinsics, the stitched image shows obvious seams and uneven regional brightness, so an HSI transformation model is used to unify the brightness. The invention adds and averages the three primary color components (R, G, B) to obtain the intensity component I, obtains the saturation component S as 1 minus the minimum of R, G, B divided by I, and then obtains the hue component H by the inverse cosine transform:

    I = (R + G + B)/3
    S = 1 - min(R, G, B)/I
    H = arccos{ [(R - G) + (R - B)] / [2·√((R - G)² + (R - B)(G - B))] }

In the formula: R, G, B are the three primary color components and H is the hue component.
The intensity component I obtained after transformation is fused with the I component of the reference image by histogram matching; the new fused intensity component replaces the original one, IHS inverse transformation is performed together with the H and S component images, and the fusion result is finally expressed in RGB space.
The picture after splicing is shown in fig. 4.
Fifth, root system image segmentation
The plant root system image segmentation step is as follows:
1) The three RGB components of the fused image are added and averaged (mean-value method), completing the graying processing.
2) The contrast of the root image is enhanced by gamma transformation with γ = 0.8, i.e. the gray value of each pixel of the normalized image is raised to the power 0.8 to obtain the corresponding output image.
3) An opening operation is performed on the image. Basic principle: with the origin of structuring element B as the reference coordinate, all pixels of the image A to be processed are scanned; B is ANDed with the binary region of A that it covers to form a new image A (erosion), which is then ORed with B (dilation), completing the image opening operation.
4) The watershed algorithm is used for segmentation, and the gradient image is thresholded to reduce over-segmentation, finally completing contour extraction.
Sixthly, calculating root system phenotype parameters
The plant root system phenotype parameter calculation steps are as follows:
1) The edge of the crop root system is processed with the Canny edge detection operator to obtain edge pixels.
2) In the root binary image constructed by the invention, the crop root region is denoted by 1 and the background region by 0. The binary image is scanned, and the number of pixels equal to 1 in the root image is counted and summed.
3) Adding up the pixels equal to 1 gives the total root area S.
4) The root perimeter is the path length along the edge pixels, accumulated one pixel at a time: a horizontal or vertical step is counted as distance 1, while a diagonal step, under 8-neighborhood connectivity, is counted as √2. The final perimeter L is the number of horizontal and vertical pixel steps on the edge plus √2 times the number of diagonal pixel steps:

    L = N_hv + √2·N_diag

Because pixel overlap can change the measured perimeter, each pixel's contribution on the perimeter is weighted accordingly and then accumulated.
It will be appreciated by those of ordinary skill in the art that the examples described herein are intended to assist the reader in understanding the manner in which the invention is practiced, and it is to be understood that the scope of the invention is not limited to such specifically recited statements and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (2)

1. A working method of an in-situ array type root phenotype monitoring system is characterized in that:
the in-situ array type root phenotype monitoring system comprises: the device comprises an image wireless transmission gateway, a push switch, a charging hole, an LED lighting system, a wireless camera module, a voltage-stabilized power supply module and a shell;
the image wireless transmission gateway is connected to a local area network through WiFi to realize wireless image transmission;
the push switch controls the on-off of the circuit of the in-situ array type root phenotype monitoring system;
the charging hole is used for connecting a power supply to charge the battery;
the LED lighting system provides supplemental light for the root-window imaging system;
the wireless camera module is used for obtaining root system images and transmitting them through the image wireless transmission gateway;
the voltage-stabilized power supply module comprises a storage battery, which supplies power to the in-situ array type root phenotype monitoring system, and a step-down board, which keeps the system at its working voltage;
the image wireless transmission gateway is arranged at the top of the shell, the push switch, the charging hole and the LED lighting system are arranged on the side face of the shell, the wireless camera module is arranged on the front face of the shell, and the voltage-stabilized power supply module is arranged inside the bottom of the shell;
the output port of the storage battery is connected in series with the push switch, the image wireless transmission gateway and the step-down board in sequence; the step-down board is connected in parallel with, and supplies power to, the wireless camera module and the LED lighting system;
the working method comprises the following steps:
s1: root system image acquisition
An ESP32-CAM module is used as the imaging component of the root system imaging system; a fixed IP address is set for each of the 16 cameras, which are controlled through Python-based web-crawler requests; the 16 ESP32-CAM modules are driven to acquire an image of each root system measurement area, and multiple groups of images of a rectangular checkerboard calibration plate at different angles and distances are obtained;
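A minimal sketch of this acquisition step, assuming the stock ESP32-CAM web server with its `/capture` still-image endpoint and placeholder IP addresses (the patent specifies neither the endpoint nor the addresses):

```python
from urllib.request import urlopen

# Placeholder addressing: the claim only states that each of the 16
# cameras is assigned a fixed IP on the local network.
CAMERA_IPS = [f"192.168.1.{100 + i}" for i in range(16)]

def capture_url(ip: str) -> str:
    # Still-image endpoint of the stock ESP32-CAM web server
    # (an assumption; the patent does not name the endpoint).
    return f"http://{ip}/capture"

def acquire_all(timeout: float = 5.0) -> dict:
    """Fetch one JPEG frame from every camera, keyed by IP address."""
    frames = {}
    for ip in CAMERA_IPS:
        with urlopen(capture_url(ip), timeout=timeout) as resp:
            frames[ip] = resp.read()  # raw JPEG bytes
    return frames
```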
s2: root image distortion correction
Measuring the internal parameter matrix, external parameter matrix, and radial and tangential distortion parameters of each wireless camera by the Zhang Zhengyou calibration method;
the internal reference matrix is:

    [ f/dX   (f/dX)·cotθ   u0 ]
    [   0    f/(dY·sinθ)   v0 ]
    [   0         0         1 ]

the external reference matrix is:

    [ R   T ]
    [ 0ᵀ  1 ]
where f is the image distance, dX and dY respectively represent the physical length of a single pixel on the light-sensitive plate along X and Y, (u0, v0) are the coordinates of the center of the light-sensitive plate in the pixel coordinate system, and θ is the angle between the transverse and longitudinal edges of the light-sensitive plate;
r represents a rotation matrix, and T represents a translation vector;
and further utilizing the conversion relation between the image coordinates and the pixel coordinates:

    u = x/dX + u0
    v = y/dY + v0
and according to the following radial distortion correction formula:
    x_dr = x(1 + k1·r² + k2·r⁴ + k3·r⁶)
    y_dr = y(1 + k1·r² + k2·r⁴ + k3·r⁶)

and the following correction formula for tangential distortion:

    x_dt = 2p1·x·y + p2(r² + 2x²)
    y_dt = p1(r² + 2y²) + 2p2·x·y

where r² = x² + y²;
In the formula:
u, v are the ideal pixel coordinates; x_dr, y_dr and x_dt, y_dt respectively represent the X- and Y-axis pixel coordinates of the image after radial and tangential distortion; x and y are the ideal undistorted coordinates; k1, k2, k3, p1, p2 are the 5 distortion coefficients; and r is the distance of the point from the imaging center;
further, the combined correction formula is obtained:

    x_d = x(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p1·x·y + p2(r² + 2x²)
    y_d = y(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1(r² + 2y²) + 2p2·x·y

the distorted coordinates (x_d, y_d) are obtained from the recognized calibration plate, the undistorted coordinates (u, v) obtained by the L-M (Levenberg-Marquardt) algorithm are substituted into the formula, and the corresponding parameters k1, k2 are solved, completing the correction;
sequentially calibrating the distortion correction parameters of each OV2640 wide-angle camera, so that the distorted root system images can later be corrected directly using each camera's own correction parameters;
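The distortion model of S2 can be illustrated with a small numeric sketch; the function below applies the combined radial and tangential terms to an ideal normalized point (the coefficient values passed in are arbitrary examples, not calibrated parameters from the patent):

```python
def apply_distortion(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Map an ideal normalized point (x, y) to its distorted position
    using the combined radial + tangential model of step S2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

With all coefficients zero the mapping is the identity; a positive k1 pushes points radially outward, which is what the calibration step estimates and then inverts.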
s3: root image preprocessing
Convolving each corrected root system input image F with a two-dimensional Gaussian smoothing filter H of size 3 × 3 with σ = 3, the value of the central pixel of the template is replaced by the weighted average gray value of the pixels in its neighborhood, giving the output image G, according to the formula:

    g(i, j) = Σ_k Σ_l f(i − k, j − l) · h(k, l)

in the formula:
i, j are the pixel coordinates, h(i, j) is the Gaussian smoothing filter, f(i, j) is each pixel of the original image, and g(i, j) is each image pixel after convolution; together these form the preprocessed image;
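A minimal sketch of this smoothing step, building the 3 × 3, σ = 3 Gaussian template and convolving it over the image; reflective border handling is an assumption here, since the claim does not specify a border rule:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=3.0):
    """Normalized 2-D Gaussian template (3x3 with sigma = 3, as in S3)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_smooth(img):
    """Replace each pixel by the kernel-weighted average of its 3x3
    neighborhood; borders are handled by reflection (an assumption)."""
    k = gaussian_kernel()
    padded = np.pad(np.asarray(img, dtype=float), 1, mode="reflect")
    out = np.empty_like(padded[1:-1, 1:-1])
    h, w = out.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out
```

Because the kernel is normalized to sum to 1, a constant image passes through unchanged.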
s4: automatic splicing of root system images, the substeps are as follows:
s41: using a Harris angular point detection algorithm to obtain characteristic points of the part to be spliced of the root system image;
s42: the index tree is usually used for searching the key data of the multidimensional space, and the fast nearest neighbor algorithm can effectively reduce the calculation amount;
firstly, a K-D tree index is established according to the similarity distance; for each feature vector, the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is calculated, and if this value is smaller than a preset threshold the nearest neighbor is regarded as a good match, completing the initial matching of feature-point pairs; bright spots on a dark background are distinguished from dark spots on a bright background according to whether each feature point's Laplacian flag is 1 or 0, and only feature points of the same type are matched with each other;
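The ratio-test matching just described can be sketched as follows; for brevity a brute-force nearest-neighbor search stands in for the K-D tree, and the Laplacian-flag filtering is omitted:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, threshold=0.8):
    """Nearest/second-nearest ratio test of S42: accept a match only
    when the distance ratio is below the threshold (0.8, per claim 2).
    A brute-force search stands in for the K-D tree here."""
    matches = []
    for i, d in enumerate(np.asarray(desc_a, dtype=float)):
        dists = np.linalg.norm(np.asarray(desc_b, dtype=float) - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < threshold:
            matches.append((i, int(order[0])))
    return matches
```

An ambiguous descriptor whose two closest candidates are nearly equidistant fails the ratio test and produces no match, which is the point of the test.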
s43: Using a projective transformation model:

    [x']   [m11 m12 m13] [x]
    [y'] = [m21 m22 m23] [y]
    [w ]   [m31 m32 m33] [1]

the coordinates (x1, y1), (x2, y2), ..., (x16, y16) of the multiple images are preliminarily mapped into the same coordinate system (X, Y); the RANSAC algorithm is then adopted to remove outliers, and the transformation matrix between the images is calculated to complete the final coordinate transformation;
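A small sketch of this projective mapping, applying a 3 × 3 transformation matrix to 2-D points in homogeneous coordinates (estimating the matrix itself with RANSAC is outside this sketch):

```python
import numpy as np

def apply_projective(H, pts):
    """Map 2-D points through a 3x3 projective transformation matrix H
    (the model of step S43) using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the scale w
```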
s44: Because imaging is affected by lighting and the cameras' internal parameters, the stitched image shows obvious seams and uneven regional brightness, so an IHS transformation model is used to unify the brightness; the three primary color components R, G, B are added and averaged to obtain the intensity component I; the saturation component S is obtained as S = 1 − min(R, G, B)/I; and the hue component H is further obtained by an inverse cosine transform according to the following formula:

    H = arccos{ [(R − G) + (R − B)] / (2·√((R − G)² + (R − B)(G − B))) }

in the formula: R, G, B are the three primary color components and H is the hue component;
fusing the intensity component I obtained after the transformation with the I component of the reference image by histogram matching gives a new fused intensity component that replaces the original one; an inverse IHS transformation is then performed together with the H and S component images, finally giving the fusion result expressed in RGB space;
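The forward IHS computation of S44 can be sketched per pixel as below; the fallback of H = 0 for achromatic pixels is an assumption added here, since the arccos expression is undefined when its denominator vanishes:

```python
import numpy as np

def rgb_to_ihs(r, g, b):
    """Forward IHS computation of S44 for one pixel with components in
    [0, 1]; H falls back to 0 for achromatic pixels (an assumption,
    since the arccos expression is undefined there)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    denom = 2.0 * np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if denom == 0:
        h = 0.0
    else:
        h = float(np.arccos(((r - g) + (r - b)) / denom))
        if b > g:                     # reflect into the full [0, 2*pi) range
            h = 2.0 * np.pi - h
    return i, h, s
```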
s5: and (3) dividing the root system image, wherein the sub-steps are as follows:
s51: The R, G, B components of the fused image are added and averaged (mean-value method) to obtain the gray value, completing the graying process;
s52: A gamma transformation with γ = 0.8 is used to enhance the contrast of the root system image, i.e., the normalized gray value of each pixel is raised to the power γ = 0.8 to obtain the corresponding output image;
s53: An opening operation is performed on the image; the basic principle is: taking the origin of the structuring element B as the coordinate reference, all pixels of the image A to be processed are scanned; B is ANDed with the binary image region it covers in A to form a new image A, and the new image A is then ORed with B, completing the image opening operation;
s54: The watershed algorithm is used for segmentation; the gradient image is thresholded to reduce over-segmentation, and contour extraction is finally completed;
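Steps s51-s52 can be sketched directly in numpy (8-bit input assumed; the opening and watershed steps of s53-s54 are not reproduced here):

```python
import numpy as np

def gray_and_gamma(rgb, gamma=0.8):
    """Steps S51-S52: mean-value graying then gamma enhancement.
    rgb is an (H, W, 3) array with values in [0, 255]."""
    gray = np.asarray(rgb, dtype=float).mean(axis=2)  # S51: average R, G, B
    norm = gray / 255.0                               # normalize to [0, 1]
    return (norm ** gamma) * 255.0                    # S52: gamma = 0.8
```

Because γ < 1, mid-tones are lifted while black and white endpoints are preserved, which is the intended contrast enhancement.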
s6: calculating root system phenotype parameters, and the substeps are as follows:
s61: processing the edge of the crop root system by using a Canny edge detection operator to obtain edge pixel points;
s62: The root system region of the image is denoted by 1 and the background region by 0; the root binary image is scanned, the pixels whose value is 1 are counted, and the counts are summed to obtain the total number;
s63: The pixels whose value equals 1 are added up to obtain the total root system area S;
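Steps s62-s63 reduce to counting foreground pixels, e.g.:

```python
import numpy as np

def root_area(binary_img):
    """Total root area S (steps S62-S63): the count of pixels whose
    value is 1 in the root binary image."""
    return int(np.sum(np.asarray(binary_img) == 1))
```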
s64: The perimeter value of the root system is the distance length along the edge pixels, accumulated one pixel at a time: a step between horizontally or vertically adjacent edge pixels contributes a distance of 1, while under the 8-neighborhood connected method a step in a diagonal direction contributes a distance of √2; the distance between edge pixels is calculated in this pixel-by-pixel manner, and the final perimeter L is the sum over all horizontal and vertical pixels on the edge plus √2 times the number of diagonal pixels; the perimeter is obtained by averaging and then accumulating the contribution of each pixel along the boundary.
2. The method of claim 1, wherein the threshold value in S42 is set to 0.8.
CN202110911319.XA 2021-08-10 2021-08-10 In-situ array type root phenotype monitoring system and working method thereof Active CN113643181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110911319.XA CN113643181B (en) 2021-08-10 2021-08-10 In-situ array type root phenotype monitoring system and working method thereof

Publications (2)

Publication Number Publication Date
CN113643181A CN113643181A (en) 2021-11-12
CN113643181B true CN113643181B (en) 2022-04-19

Family

ID=78420307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110911319.XA Active CN113643181B (en) 2021-08-10 2021-08-10 In-situ array type root phenotype monitoring system and working method thereof

Country Status (1)

Country Link
CN (1) CN113643181B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103454279A (en) * 2013-09-05 2013-12-18 毕昆 In-situ scanning device
CN106105845A (en) * 2016-08-30 2016-11-16 恩奈瑟斯(北京)光机电技术有限公司 Intelligence root window
CN108107048A (en) * 2018-01-03 2018-06-01 中国科学院寒区旱区环境与工程研究所 A kind of field root system of plant remote monitoring device
CN108694727A (en) * 2018-04-17 2018-10-23 南京农业大学 A kind of miniature multiple spot Root morphology acquisition and processing system in real time
CN210958512U (en) * 2020-01-20 2020-07-07 中国科学院植物研究所 Root system micro-root window remote observation system
CN212034187U (en) * 2020-04-21 2020-11-27 魏长拴 Automatic full-field in-situ root scanning system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090236035A1 (en) * 2005-06-29 2009-09-24 Ifa Nurseries, Inc. Device for protecting container-grown seedling root systems and method for its manufacture
US10699100B2 (en) * 2016-11-07 2020-06-30 Institute Of Automation, Chinese Academy Of Sciences Method for microscopic image acquisition based on sequential section
CN110225315A (en) * 2019-07-12 2019-09-10 北京派克盛宏电子科技有限公司 Electric system screen monitored picture fusion method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Root phenotype study of maize seeds of different vigor based on an erosion growth algorithm; Lu Wei; Transactions of the Chinese Society for Agricultural Machinery; April 2020; Vol. 51, No. 04; 224-231 *
Plant root image monitoring and analysis system; Liu Jiuqing; China Doctoral Dissertations Full-text Database, Basic Sciences (monthly); 2005-12-15; No. 08; A006-19 *
Research on machine-vision recognition of tillage paths in high-stubble paddy fields; Zhang Tian; China Master's Theses Full-text Database, Information Science and Technology (monthly); 2014-09-15; No. 09; I138-1135 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant