CN116817754B - Soybean plant phenotype extraction method and system based on sparse reconstruction - Google Patents

Soybean plant phenotype extraction method and system based on sparse reconstruction

Info

Publication number: CN116817754B
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202311082530.0A
Other languages: Chinese (zh)
Other versions: CN116817754A
Inventors: 黎晨阳, 徐晓刚, 冯献忠, 何鹏飞, 王军, 贾昕晔, 陈满丽, 曹卫强, 韩强, 李萧缘
Current Assignee: Northeast Institute of Geography and Agroecology of CAS; Zhejiang Lab (the listed assignees may be inaccurate)
Original Assignee: Northeast Institute of Geography and Agroecology of CAS; Zhejiang Lab
Application filed by Northeast Institute of Geography and Agroecology of CAS and Zhejiang Lab
Priority to CN202311082530.0A
Publication of CN116817754A; application granted; publication of CN116817754B
Legal status: Active


Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02A: Technologies for adaptation to climate change
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Abstract

A soybean plant phenotype extraction method and system based on sparse reconstruction. The method performs multi-view imaging of soybean plants and extracts the two-dimensional key points of the plant in each view through density map estimation, the key points comprising endpoint key points, node key points and bean key points. At the same time, the association relation among beans in the same pod is given through affinity field estimation. The same key points and the same pods across the views are then associated based on the symmetric epipolar distance and bipartite matching, and the three-dimensional coordinates of each key point are calculated through triangulation. From these coordinates, the plant height is measured, the spatial distribution of beans is counted, and the node number, per-plant bean number, pod number and the like are calculated. The method can extract soybean plant phenotypes accurately and efficiently, and has high feasibility and practicability.

Description

Soybean plant phenotype extraction method and system based on sparse reconstruction
Technical Field
The invention relates to the field of computer vision, in particular to a soybean plant phenotype extraction method and system based on sparse reconstruction.
Background
In the field of intelligent breeding, the precise coupling of genotype and phenotype is a central research task. For soybean plants, common phenotypes include plant type, plant height, leaf shape, flower color, number of main stem nodes, number of pods per plant, number of grains per plant, and the like. At present, automatic and accurate acquisition of phenotype data faces technical bottlenecks. An RGB camera can capture the color and shape of a plant, but it is difficult to measure the plant accurately from such images alone. A 3D camera can reconstruct the plant in three dimensions, but self-occlusion of the plant easily causes missing point clouds, and three-dimensional processing algorithms are complex and computationally expensive. CT imaging can model the internal structure of plants, but the equipment is expensive and the long imaging time limits its scope of application. Hyperspectral and multispectral imaging can perceive spectral features of plants in different wavebands, but such equipment is likewise expensive and has lower resolution.
In the method disclosed in Document 1 (Ning S, Chen H, Zhao Q, et al. Detection of pods and stems in soybean based on IM-SSD+ACO algorithm [J]. Transactions of the Chinese Society for Agricultural Machinery, 2021, 52: 182-190.), soybean plants are laid on the ground and photographed vertically from above by a fixed single RGB camera, with a flash lamp placed on each side of the camera for supplementary lighting.
Document 2 (Guo Y, Gao Z, Zhang Z, et al. Automatic and accurate acquisition of stem-related phenotypes of mature soybean based on deep learning and directed search algorithms [J]. Frontiers in Plant Science, 2022, 13: 906751.) establishes a black, light-shielding collection box with an internal light source, in which an RGB camera photographs soybean plants laid on the bottom of the box. Concentrating all acquisition equipment in the box makes the setup more standardized than the method of Ning et al. [1]. However, the method still does not solve the problems of the single viewing angle and the missing information of a single soybean plant photo.
Document 3 (Guo Xiyue, Li Jinsong, Zheng Lihua, et al. Acquisition of soybean plant phenotype parameters using Re-YOLOv5 and a detection area search algorithm [J]. Transactions of the Chinese Society of Agricultural Engineering, 2022, 38(15).) randomly places and rotates each soybean plant during shooting so that each plant is photographed in several different poses, and adjusts the camera height so that all soybean plants occupy a similar proportion of the frame. This increases the richness of the soybean image dataset and thereby improves the phenotype recognition effect. However, this mode requires considerable manpower: the camera height and the plant placement must be adjusted for every photo, so the collection efficiency is low.
In terms of three-dimensional image acquisition, Document 4 (Fang Hui, Hu Lingchao, He Rentao, et al. Research on plant three-dimensional information acquisition methods [J]. Transactions of the Chinese Society of Agricultural Engineering, 2012, 28(03): 142-147.) uses a laser scanner to scan plants from all directions, obtains multi-view three-dimensional point cloud data, and reconstructs the plants in three dimensions after calibration and denoising. However, the laser scanner used in this method is expensive and complex to operate, the plants must be manually and finely scanned from different angles, and acquiring the data of a single plant takes a long time, so efficiency and cost are major limitations. Many of the studies mentioned above are limited by the acquisition of plant phenotype images. Therefore, there is a need for a soybean plant phenotype extraction method and system that can acquire plant phenotype data accurately, efficiently and rapidly.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a soybean plant phenotype extraction method and system based on sparse reconstruction, which are used for measuring plant height, counting the spatial distribution of beans, and calculating the node number, per-plant bean number, pod number and the like.
The invention discloses a soybean plant phenotype extraction method based on sparse reconstruction, which comprises the following steps:
s1, inserting the mature soybean plants into a plant fixing bracket of a phenotype extraction system, and performing multi-view imaging by cameras fixed around the system;
s2, extracting two-dimensional key points of plants in each view, wherein the two-dimensional key points comprise end point key points, bean key points, node key points and the like, and giving bean association relations in the same bean pod to the bean key points;
s3, associating the same key point and the same pod in each view;
s4, calculating three-dimensional coordinates of each key point through a triangulation algorithm;
s5, calculating plant height through three-dimensional coordinate differences of the endpoint key points, counting the spatial distribution of beans through a bean key point histogram, and respectively calculating the node number, the single plant number and the pod number through the node key points, the bean key points and the bean association number.
Further, the cameras in S1 need to have their intrinsic parameters K_i and extrinsic parameters (R_i, t_i) calibrated in advance, where R_i is the rotation matrix of camera i and t_i is the translation vector of camera i.
Further, the number N of cameras in step S1 satisfies N ≥ 2.
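As a sketch of how such calibrated parameters are commonly used downstream (this is illustrative, not code from the patent): the intrinsics K_i and extrinsics (R_i, t_i) compose the 3x4 projection matrix P_i = K_i [R_i | t_i] that maps homogeneous world points to image points. The numeric focal length, principal point and pose below are assumed values.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Compose the 3x4 pinhole projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

# Illustrative intrinsics (focal length and principal point are assumed values).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)          # camera aligned with the world frame
t = np.zeros(3)        # camera at the world origin
P = projection_matrix(K, R, t)

# A world point projects as x ~ P X (homogeneous); here X = (0, 0, 5).
x = P @ np.array([0.0, 0.0, 5.0, 1.0])
u, v = x[0] / x[2], x[1] / x[2]   # pixel coordinates of the projection
```

With the identity pose, the world point (0, 0, 5) lands on the principal point, as expected for a point on the optical axis.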
Still further, step S2 includes the steps of:
s21, extracting plant endpoints, beans and node key points through a density map estimation algorithm and a maximum value extraction algorithm based on a convolutional neural network;
s22, estimating the association relation of the beans by an affinity field estimation method for the key points of the beans so as to represent pods.
Further, step S3 includes the steps of:
s31, calculating the distance loss of pods between different viewsWherein->The number of pairs matched for inter-pod beans is +.>Is the key point pair of the bean>Is a symmetric polar distance;
s32, according to the distance lossAssociating the same pod in each view by bipartite matching algorithm according to symmetric line distance +.>Associating the same bean key points in the same bean pod through a binary matching algorithm;
s33, calculating the symmetrical line distance of the endpoint key points among different views, and associating the same endpoint key points through binary matching;
s34, calculating the symmetrical line distance of the node key points between different views, and associating the same node key point through binary matching.
Further, the three-dimensional key point obtained by the triangulation algorithm in S4 is the least-squares solution X of the equations c_i · (x_i × (P_i X)) = 0 stacked over the associated key points, where c_i is the confidence of the two-dimensional key point x_i, x_i is the homogeneous coordinate of that key point, and P_i is the projection matrix of camera i.
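A minimal sketch of confidence-weighted linear (DLT) triangulation in the spirit of S4: each associated view contributes two linear equations, scaled by that key point's confidence, and the 3D point is recovered from the null-space direction. The camera matrices and coordinates are assumed illustrative values; the patent's exact weighting may differ.

```python
import numpy as np

def triangulate(projections, points2d, confidences):
    """Confidence-weighted DLT: stack, for each view i, the two linear
    equations implied by x_i ~ P_i X, each scaled by confidence c_i,
    then take the null-space direction via SVD."""
    rows = []
    for P, x, c in zip(projections, points2d, confidences):
        u, v, w = x                      # homogeneous 2D key point
        rows.append(c * (u * P[2] - w * P[0]))
        rows.append(c * (v * P[2] - w * P[1]))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    X = Vt[-1]
    return X[:3] / X[3]                  # inhomogeneous 3D point

# Two unit-focal cameras, the second shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = np.array([0.0, 0.0, 1.0])     # projection of (0, 0, 5) in view 1
x2 = np.array([-0.2, 0.0, 1.0])    # projection of (0, 0, 5) in view 2
X = triangulate([P1, P2], [x1, x2], [1.0, 1.0])
```

With noise-free observations the recovered point coincides with the ground-truth (0, 0, 5); with noisy multi-view data the confidences down-weight unreliable detections.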
The invention also relates to a soybean plant phenotype extraction system based on sparse reconstruction, which comprises an imaging unit, a phenotype extraction unit and a display unit.
The imaging unit comprises a camera fixing bracket and cameras distributed around the bracket;
the phenotype extraction unit is used for measuring plant height, counting the spatial distribution of beans, calculating the number of nodes, the number of single plants, the number of pods and the like by extracting two-dimensional key points in multiple views and performing sparse reconstruction;
the display unit comprises three-dimensional display and statistical data display and is used for displaying the sparsely reconstructed plants and related phenotype data in real time.
The invention also relates to a computer readable storage medium having stored thereon a program which, when executed by a processor, implements a sparse reconstruction based soybean plant phenotype extraction method of the invention.
The invention also relates to a computing device, which comprises a memory and a processor, wherein executable codes are stored in the memory, and the processor realizes the soybean plant phenotype extraction method based on sparse reconstruction when executing the executable codes.
The invention also relates to a computer program product characterized by comprising a computer program which, when executed by a processor, implements a sparse reconstruction based soybean plant phenotype extraction method of the invention.
The invention has the advantages that:
the soybean plant phenotype extraction method and system based on sparse reconstruction are accurate and efficient, can be used as an efficiency tool in the field of intelligent breeding, and have high feasibility and practicality.
Drawings
Fig. 1 is a schematic diagram of the system of the present invention.
Fig. 2 is a flow chart of the algorithm of the present invention.
Fig. 3 shows the density map estimation of the bean key points.
Fig. 4 shows the bean association obtained by the affinity field estimation method.
Fig. 5 is a system configuration diagram of the present invention.
Fig. 6 is a sample image acquired by the imaging unit of the present invention.
FIG. 7 is a schematic diagram of multi-view imaging of a phenotype extraction unit of the present invention.
FIG. 8 is a graph showing the effect of the phenotype extraction unit of the present invention.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
Example 1
As shown in fig. 1 and 2, a soybean plant phenotype extraction method based on sparse reconstruction comprises the following steps:
step one: inserting mature soybean plants into plant fixing bracket of phenotype extraction system, performing multi-view imaging by N=12 cameras fixed around the system, and calibrating internal reference by the cameras in advanceExternal parameters->Wherein->Is a camera->Rotation matrix of>Is a camera->Is a translation vector of (a).
Step two: extracting plant endpoints, beans and node key points through a density map estimation algorithm and a maximum value extraction algorithm based on a convolutional neural network as shown in fig. 3; for the bean key points, as shown in fig. 4, the association relationship of beans is estimated by an affinity field estimation method to represent pods.
Step three: the same keypoints and the same pods in each view are associated. The specific substeps are as follows:
(3.1) calculating the pod distance loss between different views, L_pod = (1/M) Σ_{j=1..M} d_j, where M is the number of matched bean pairs between the pods, (p_j, q_j) is the j-th bean key point pair, and d_j is the symmetric epipolar distance of that pair;
(3.2) associating the same pod across the views through a bipartite matching algorithm according to the pod distance loss, and associating the same bean key points within the same pod through a bipartite matching algorithm according to the symmetric epipolar distance;
(3.3) calculating the symmetric epipolar distances of the endpoint key points between different views, and associating the same endpoint key points through bipartite matching;
(3.4) calculating the symmetric epipolar distances of the node key points between different views, and associating the same node key points through bipartite matching.
Step four: the three-dimensional coordinates of each key point are calculated through the triangulation algorithm as the least-squares solution X of the equations c_i · (x_i × (P_i X)) = 0 stacked over the associated key points, where c_i is the confidence of the two-dimensional key point x_i, x_i is its homogeneous coordinate, and P_i is the projection matrix of camera i.
Step five: calculating the plant height from the three-dimensional coordinate differences of the endpoint key points, counting the spatial distribution of beans through a histogram of the bean key points, and calculating the node number, the per-plant bean number and the pod number from the node key point number, the bean key point number and the bean association number, respectively.
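Step five can be sketched as follows, assuming the z axis is the vertical (height) axis. The bin count, the toy coordinates and the representation of bean associations as one group per pod are illustrative assumptions, not values from the patent.

```python
import numpy as np

def phenotypes(endpoint_pts, bean_pts, node_pts, pod_groups, bins=5):
    """Derive phenotypes from reconstructed 3D key points, assuming the
    z axis is vertical: plant height from the endpoint span, bean spatial
    distribution as a histogram over height, and counts from the numbers
    of node key points, bean key points and bean association groups."""
    z_end = np.asarray(endpoint_pts, dtype=float)[:, 2]
    z_bean = np.asarray(bean_pts, dtype=float)[:, 2]
    hist, _ = np.histogram(z_bean, bins=bins, range=(z_end.min(), z_end.max()))
    return {
        "plant_height": float(z_end.max() - z_end.min()),
        "bean_distribution": hist.tolist(),
        "node_count": len(node_pts),
        "bean_count": len(bean_pts),
        "pod_count": len(pod_groups),
    }

# Toy reconstruction: two stem endpoints, three beans, two nodes, and
# two association groups (a two-bean pod and a one-bean pod).
result = phenotypes(
    endpoint_pts=[[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
    bean_pts=[[0.0, 0.0, 0.3], [0.0, 0.0, 0.35], [0.0, 0.0, 0.9]],
    node_pts=[[0.0, 0.0, 0.3], [0.0, 0.0, 0.9]],
    pod_groups=[(0, 1), (2,)],
)
```

For the toy plant the height is 1.0 and the bean histogram concentrates two beans in the lower band and one near the top.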
Example 2
This example relates to a sparse reconstruction-based soybean plant phenotype extraction system for implementing the method of example 1, comprising an imaging unit, a phenotype extraction unit, a display unit,
the imaging unit is a camera fixing bracket and camera combinations distributed around the camera fixing bracket;
the phenotype extraction unit is used for measuring plant height, counting the spatial distribution of beans, calculating the number of nodes, the number of single plants and the number of pods by extracting two-dimensional key points in multiple views and performing sparse reconstruction; extracting soybean plant phenotype and carrying out phenotype display units such as nodes, bean particles, plant height and the like on an original image.
The display unit comprises a three-dimensional display and a statistical data display, and is used for displaying the sparsely reconstructed plants and the related phenotype data in real time; it is the unit that aggregates and displays the final soybean phenotype recognition results.
In a specific implementation, plants requiring phenotype collection were imaged as in Example 1 using the imaging unit shown in fig. 5; a sample of the collected images is shown in fig. 6.
During acquisition, the phenotype extraction unit shown in fig. 5 monitored the phenotype acquisition of the soybean plants in real time from different angles, as shown in fig. 7.
By selecting the corresponding camera of the imaging unit, the various phenotypes produced by the phenotype extraction unit can be observed overlaid on the original image, as shown in fig. 8.
The display unit shown in fig. 5 obtains the final soybean phenotype recognition result by aggregating the phenotypes obtained from the individual images in the phenotype extraction unit, as shown in Table 1.
TABLE 1
The above is a specific example of phenotype acquisition for a single soybean plant using the soybean plant phenotype extraction system based on sparse reconstruction. In practical application, the embodiment of the invention was used to collect and identify each phenotype of 65 soybean plants belonging to 7 varieties; the recognition results are shown in Table 2.
TABLE 2
The recognition accuracy of each phenotype was obtained by comparison with manually counted soybean phenotype data, as shown in Table 3. The recognition accuracy (ACC) is calculated as:
ACC = 1 - MAE / Mean,
wherein MAE represents the mean absolute error and Mean represents the mean of the manually counted phenotype data of the soybean plants. In statistics, the absolute error is the error between a predicted value and the actual value, and the mean absolute error is the sum of the absolute errors divided by the number of samples involved in the calculation:
MAE = (1/n) Σ_{i=1..n} |ŷ_i - y_i|,
where n is the total number of samples (65), ŷ_i is the value of each phenotype obtained by the invention's acquisition and recognition of the soybean plants, and y_i is the corresponding phenotype value obtained by manual counting. The accuracy of each phenotype obtained by this calculation is shown in Table 3:
TABLE 3
According to the experimental results, the method can efficiently and accurately obtain various important soybean phenotypes; the results are very close to the manual statistics, so the method can replace manual collection and identification of soybean phenotypes and be used to compare the growth, development and yield of different soybean varieties.
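The evaluation above (ACC derived from the MAE and the mean of the manual counts) can be sketched as follows; the sample counts are illustrative and are not data from Table 3.

```python
def mean_absolute_error(predicted, actual):
    """MAE: sum of absolute errors divided by the number of samples."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def accuracy(predicted, actual):
    """ACC = 1 - MAE / Mean, with Mean the average of the manual counts."""
    mean = sum(actual) / len(actual)
    return 1.0 - mean_absolute_error(predicted, actual) / mean

# Illustrative per-plant pod counts: system output vs. manual counts.
predicted = [38, 41, 40, 36]
actual    = [40, 40, 40, 38]
acc = accuracy(predicted, actual)   # MAE = 1.25, Mean = 39.5
```

Because ACC normalizes the error by the phenotype's mean magnitude, it is comparable across phenotypes with very different scales (e.g. plant height vs. node count).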
Example 3
This embodiment relates to a computer readable storage medium for implementing the method of embodiment 1, having stored thereon a program which, when executed by a processor, implements a soybean plant phenotype extraction method based on sparse reconstruction as described in embodiment 1.
Example 4
This embodiment relates to a computer program product comprising a computer program which when executed by a processor implements a sparse reconstruction based soybean plant phenotype extraction method as described in embodiment 1.
Example 5
The present embodiment relates to a computing device for implementing the method of embodiment 1, including a memory and a processor, wherein the memory stores executable code, and the processor implements the sparse reconstruction-based soybean plant phenotype extraction method of embodiment 1 when executing the executable code.
At the hardware level, the computing device includes a processor, internal bus, network interface, memory, and non-volatile storage, although other services may be required. The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs to implement the method described in embodiment 1 above. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present invention, that is, the execution subject of the following processing flows is not limited to each logic unit, but may be hardware or logic devices.
An improvement to a technology used to be clearly distinguishable as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor or switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows can now be regarded as direct improvements of hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow is never realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing the integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation must likewise be written in a specific programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller in pure computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for realizing various functions may also be regarded as structures within the hardware component. Or even the means for realizing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present invention.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments of the present invention are described in a progressive manner, and the same and similar parts of the embodiments are all referred to each other, and each embodiment is mainly described in the differences from the other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present invention and is not intended to limit the present invention. Various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are to be included in the scope of the claims of the present invention.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A soybean plant phenotype extraction method based on sparse reconstruction comprises the following steps:
s1, inserting the mature soybean plants into a plant fixing bracket of a phenotype extraction system, and performing multi-view imaging by cameras fixed around the system;
the intrinsic parameters K_i and the extrinsic parameters R_i, T_i of each camera need to be calibrated in advance, where R_i is the rotation matrix of camera i and T_i is the translation vector of camera i; the number of cameras N is greater than or equal to 2;
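Assuming the pinhole model implied by the calibrated intrinsics and extrinsics, each camera's parameters combine into a 3x4 projection matrix P = K·[R | T]. A minimal sketch with hypothetical calibration values (numpy; all numbers illustrative):

```python
import numpy as np

def projection_matrix(K, R, T):
    """Build the 3x4 projection matrix P = K [R | T] for one calibrated camera."""
    return K @ np.hstack([R, T.reshape(3, 1)])

# Hypothetical calibration for one camera
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])   # intrinsics
R = np.eye(3)                              # rotation matrix
T = np.array([0.0, 0.0, 2.0])              # translation vector

P = projection_matrix(K, R, T)
# Project a 3D point (homogeneous coordinates) into the image
X = np.array([0.1, 0.2, 3.0, 1.0])
x = P @ X
u, v = x[:2] / x[2]                        # pixel coordinates
```

The same matrix P is what the later triangulation step consumes, so building it once per camera keeps the pipeline consistent.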
s2, extracting two-dimensional key points of plants in each view, wherein the two-dimensional key points of the plants comprise: endpoint key points, bean key points and node key points, and giving bean association relations in the same bean pod for the bean key points; the method specifically comprises the following steps:
s21, extracting plant endpoints, beans and node key points through a density map estimation algorithm and a maximum value extraction algorithm based on a convolutional neural network;
s22, estimating the association relation of the beans by an affinity field estimation method for the key points of the beans so as to represent pods;
s3, associating the same key point and the same pod in each view; the method specifically comprises the following steps:
s31, calculating the distance loss of pods between different viewsWherein->The number of pairs matched for inter-pod beans is +.>Is the key point pair of the bean>Is a symmetric polar distance;
s32, according to the distance lossAssociating the same pod in each view by bipartite matching algorithm according to symmetric line distance +.>Associating the same bean key points in the same bean pod through a binary matching algorithm;
s33, calculating the symmetrical line distance of the endpoint key points among different views, and associating the same endpoint key points through binary matching;
s34, calculating the symmetrical line distance of the node key points between different views, and associating the same node key point through binary matching;
s4, calculating three-dimensional coordinates of each key point through a triangulation algorithm; the three-dimensional key points obtained by the triangulation algorithm areWherein->For the associated key points, ++>Is two-dimensional key point->Confidence of->Is two-dimensional key point->Is a homogeneous coordinate of (3);
s5, calculating plant height through three-dimensional coordinate differences of the endpoint key points, counting the spatial distribution of beans through a bean key point histogram, and respectively calculating the node number, the single plant number and the pod number through the node key points, the bean key points and the bean association number.
2. A system for realizing a soybean plant phenotype extraction method based on sparse reconstruction according to claim 1, which comprises an imaging unit, a phenotype extraction unit and a display unit,
the imaging unit comprises a fixing bracket and camera assemblies distributed around the fixing bracket;
the phenotype extraction unit is used for measuring plant height, counting the spatial distribution of beans, calculating the number of nodes, the number of single plants and the number of pods by extracting two-dimensional key points in multiple views and performing sparse reconstruction;
the display unit comprises three-dimensional display and statistical data display and is used for displaying the sparsely reconstructed plants and related phenotype data in real time.
3. A computer readable storage medium having stored thereon a program which, when executed by a processor, implements a sparse reconstruction based soybean plant phenotype extraction method according to claim 1.
4. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, and wherein the processor, when executing the executable code, implements a sparse reconstruction based soybean plant phenotype extraction method of claim 1.
CN202311082530.0A 2023-08-28 2023-08-28 Soybean plant phenotype extraction method and system based on sparse reconstruction Active CN116817754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311082530.0A CN116817754B (en) 2023-08-28 2023-08-28 Soybean plant phenotype extraction method and system based on sparse reconstruction


Publications (2)

Publication Number Publication Date
CN116817754A CN116817754A (en) 2023-09-29
CN116817754B true CN116817754B (en) 2024-01-02

Family

ID=88126015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311082530.0A Active CN116817754B (en) 2023-08-28 2023-08-28 Soybean plant phenotype extraction method and system based on sparse reconstruction

Country Status (1)

Country Link
CN (1) CN116817754B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10160179A1 (en) * 2001-12-07 2003-07-31 Klaus Rudolf Halbritter Method for remote sensing of morphologically and structurally complex objects in an object space, particularly for acquisition of surface data for agricultural and forestry terrain for evaluation of biodiversity data
CN107392956A (en) * 2017-06-08 2017-11-24 北京农业信息技术研究中心 Crop root Phenotypic examination method and apparatus
KR20190107401A (en) * 2018-03-12 2019-09-20 광주과학기술원 a Real-time Visual Anomaly Detection device and a method thereof
CN110653166A (en) * 2019-10-08 2020-01-07 河南科技大学 Fruit detection and classification method and device
CN110796694A (en) * 2019-10-13 2020-02-14 西北农林科技大学 Fruit three-dimensional point cloud real-time acquisition method based on KinectV2
KR102223484B1 (en) * 2020-11-09 2021-03-08 한국건설기술연구원 System and method for 3D model generation of cut slopes without vegetation
CN114792372A (en) * 2022-06-22 2022-07-26 广东工业大学 Three-dimensional point cloud semantic segmentation method and system based on multi-head two-stage attention
WO2023003566A1 (en) * 2021-07-23 2023-01-26 Accuray Inc. Sparse background measurement and correction for improving imaging
CN115880429A (en) * 2022-12-07 2023-03-31 北京市农业技术推广站 Method and system for determining vegetable strong seedling judgment model
CN116051783A (en) * 2022-12-05 2023-05-02 华南农业大学 Multi-view-based soybean plant three-dimensional reconstruction and shape analysis method
CN116311218A (en) * 2023-01-03 2023-06-23 广东工业大学 Noise plant point cloud semantic segmentation method and system based on self-attention feature fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11189020B2 (en) * 2019-02-06 2021-11-30 Thanh Phuoc Hong Systems and methods for keypoint detection


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Exploring Better Speculation and Data Locality in Sparse Matrix-Vector Multiplication on Intel Xeon; Haoran Zhao et al.; 2020 IEEE 38th International Conference on Computer Design; full text *
Low Illumination Soybean Plant Reconstruction and Trait Perception; Yourui Huang et al.; Agriculture; Vol. 12, No. 12; Sections 2.1-2.3 *
Research on an Automatic Measurement Method of Soybean Plant Leaf Area Based on Three-Dimensional Reconstruction; Li Chenyu; China Master's Theses Full-text Database, Agricultural Science and Technology Series (No. 1); full text *


Similar Documents

Publication Publication Date Title
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
Carvalho et al. Deep depth from defocus: how can defocus blur improve 3d estimation using dense neural networks?
CN101356546B (en) Image high-resolution upgrading device, image high-resolution upgrading method image high-resolution upgrading system
CN104240264B (en) The height detection method and device of a kind of moving object
Santos et al. 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera
CA2840436A1 (en) System for mapping and identification of plants using digital image processing and route generation
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
Nguyen et al. Plant phenotyping using multi-view stereo vision with structured lights
US8989505B2 (en) Distance metric for image comparison
Masuda Leaf area estimation by semantic segmentation of point cloud of tomato plants
CN112053371A (en) Water body extraction method and device in remote sensing image
Ijiri et al. Digitization of natural objects with micro CT and photographs
Paturkar et al. 3D reconstruction of plants under outdoor conditions using image-based computer vision
Kurmi et al. Pose error reduction for focus enhancement in thermal synthetic aperture visualization
CN109857895B (en) Stereo vision retrieval method and system based on multi-loop view convolutional neural network
CN116817754B (en) Soybean plant phenotype extraction method and system based on sparse reconstruction
CN111161348A (en) Monocular camera-based object pose estimation method, device and equipment
CN108447092A (en) The method and device of vision positioning marker
CN111563895A (en) Picture definition determining method, device, equipment and storage medium
CN109785369A (en) A kind of virtual reality portrait acquisition method and device
CN114419133A (en) Method and device for judging whether container of plant is suitable for maintaining plant
CN111862098B (en) Individual matching method, device, equipment and medium based on light field semantics
Peng et al. Automatic recognition of pointer meter reading based on Yolov4 and improved U-net algorithm
Sun et al. Unsupervised object extraction by contour delineation and texture discrimination based on oriented edge features
Zheng et al. Object-Detection from Multi-View remote sensing Images: A case study of fruit and flower detection and counting on a central Florida strawberry farm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant