CN112927287A - Phenotype data analysis method of target object, storage medium and terminal - Google Patents

Phenotype data analysis method of target object, storage medium and terminal

Info

Publication number
CN112927287A
Authority
CN
China
Prior art keywords
target object
three-dimensional point cloud image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110314527.1A
Other languages
Chinese (zh)
Inventor
李博
刘建刚
宋坚利
曹黎俊
贾哲新
张旭中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heshu Technology Zhejiang Co ltd
Original Assignee
Heshu Technology Zhejiang Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heshu Technology Zhejiang Co ltd filed Critical Heshu Technology Zhejiang Co ltd
Priority to CN202110314527.1A priority Critical patent/CN112927287A/en
Publication of CN112927287A publication Critical patent/CN112927287A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tessellation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30128 - Food products

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a phenotype data analysis method for a target object, a storage medium and a terminal, belonging to the technical field of appearance phenotype analysis. The method comprises the following steps: calculating the curvature of each point in a three-dimensional point cloud image of the target object and clustering the points with negative curvature, thereby obtaining the number of concave points of the target object; and calculating the average curvature at the point corresponding to each concave point in the three-dimensional point cloud image and analyzing the linear relation between that average curvature and the actually measured depth, i.e. fitting the average curvature of each concave point against the measured depth and regressing the relation between depression depth and average curvature, so that the depth of each concave point can be estimated accurately. The whole process of calculating the concave-point parameters (number and depth) is fast, accurate and objective.

Description

Phenotype data analysis method of target object, storage medium and terminal
Technical Field
The invention relates to the technical field of appearance phenotype analysis, in particular to a phenotype data analysis method of a target object, a storage medium and a terminal.
Background
Appearance phenotype analysis of a target object can accurately acquire its phenotype parameters, such as size, shape and symmetry. Applying appearance phenotype analysis technology in a specific application scenario can, to a certain extent, promote the development of the field to which the target object belongs. For example, applied to the analysis of fruit appearance phenotypes, the technology can acquire fruit size, shape, symmetry, bud-eye parameters and so on, and can provide guidance for fruit breeding. Taking the potato as an example: it is one of the most important food crops in the world, with a global yield of nearly four hundred million tons in 2017, and its planting area in developing countries shows a growing trend. According to China's food strategy, the potato has been promoted to the fourth staple food crop. Potato varieties are numerous, and different varieties differ markedly in skin hardness, size and shape, so potatoes of different varieties need to be accurately classified in both production and breeding work. Proper size and an aesthetically pleasing appearance have always been the most important purchasing factors for consumers. For production, quantifying and grading potatoes according to phenotypic characteristics can improve consumer satisfaction and increase sales; for breeding work, accurate phenotypic characteristic parameters can support gene-based breeding, accelerate the breeding of new varieties, provide more competitive potato varieties for the market, and increase the income of farmers and dealers.
At present, potato grading still depends on manual judgment, which has several drawbacks: it is labor-intensive and labor costs are high; subjective factors carry great weight, so grading is neither objective nor accurate enough, causing waste; and it is slow, relies on sampling only, and is limited by working hours. With the development of computer graphics technology, it has become possible to quantify potato appearance parameters efficiently using image analysis. The difficulty in performing phenotypic analysis of potato tubers with image techniques is that potato shape symmetry is poor, and an ordinary two-dimensional image can only be captured and analyzed from a single angle, which causes the following problems: 1) shape judgment is not accurate enough because of the placement angle; 2) because of the single shooting angle, the number and density of the bud eyes (depressed points) cannot be accurately acquired; 3) a two-dimensional image carries no depth information, so bud-eye depth cannot be acquired. Therefore, neither manual inspection nor image analysis based on two-dimensional images achieves satisfactory results for fast and accurate potato appearance evaluation.
Disclosure of Invention
The invention aims to solve the problem that the number and the depth of the depressed points (bud eyes) of a target object cannot be obtained objectively and accurately in the prior art, and provides a phenotype data analysis method of a target object, a storage medium and a terminal.
The purpose of the invention is realized by the following technical scheme: a method of analysing phenotypic data of a target object, the method comprising the following steps of calculating the number and depth of concave points:
calculating the curvature of each point in the three-dimensional point cloud image of the target object, and clustering the points with negative curvature to further obtain the number of the concave points of the target object;
and calculating the average curvature of corresponding points of each concave point in the three-dimensional point cloud image, analyzing the linear relation between the average curvature of the corresponding points of each concave point and the actually measured depth, and further estimating the depth of the concave point.
As an option, the calculation formula of the average curvature is:
K(x_i) = (1 / (2A_m)) Σ_{j∈N(i)} (cot α_ij + cot β_ij)(x_i - x_j)
wherein N(i) is the set of vertices neighbouring vertex x_i in the three-dimensional mesh surface model (one for each surrounding triangle), (x_i, x_j) is the edge shared by two adjacent triangles, α_ij and β_ij are the angles of the two vertices opposite that edge, and A_m represents the surface area of all triangles attached to vertex x_i; the scalar average curvature is taken as half the norm of K(x_i), with negative sign on concave regions.
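The cotangent-weight computation above can be sketched in a few lines of Python. This is a minimal illustration rather than the patent's implementation: the mesh layout (vertex array plus face index triples) is assumed, and the per-vertex area A_m is approximated by one third of the attached triangle areas instead of the mixed Voronoi areas described later in the text.

```python
import numpy as np

def cot(u, v):
    """Cotangent of the angle between vectors u and v."""
    return float(np.dot(u, v) / np.linalg.norm(np.cross(u, v)))

def mean_curvature(vertices, faces, i):
    """Unsigned mean curvature at vertex i via the cotangent formula
    K(x_i) = 1/(2 A) * sum_j (cot a_ij + cot b_ij) (x_i - x_j)."""
    K = np.zeros(3)
    area = 0.0
    for f in faces:
        if i not in f:
            continue
        a, b, c = f
        while a != i:                    # rotate indices so the face reads (i, j, k)
            a, b, c = b, c, a
        xi, xj, xk = vertices[a], vertices[b], vertices[c]
        K += cot(xi - xk, xj - xk) * (xi - xj)   # angle at xk opposes edge (i, j)
        K += cot(xi - xj, xk - xj) * (xi - xk)   # angle at xj opposes edge (i, k)
        # barycentric area share (1/3 of the triangle) as a stand-in for A_m
        area += np.linalg.norm(np.cross(xj - xi, xk - xi)) / 6.0
    return 0.5 * np.linalg.norm(K / (2.0 * area))

# sanity check on a flat hexagonal fan: a planar patch has zero mean curvature
ang = np.linspace(0, 2 * np.pi, 7)[:-1]
V = np.vstack([[0.0, 0.0, 0.0], np.c_[np.cos(ang), np.sin(ang), np.zeros(6)]])
F = [(0, 1 + k, 1 + (k + 1) % 6) for k in range(6)]
H_flat = mean_curvature(V, F, 0)
```

On a flat patch the cotangent-weighted sum cancels exactly, which is a convenient sanity check for any implementation of this formula.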
As an option, the linear relation between the average curvature at the point corresponding to a concave point and the actually measured depth is:
D_eye = a × C + b
wherein D_eye represents the measured depth of the concave point, a and b are the coefficients obtained by unary linear regression, and C represents the curvature value of a single concave point.
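The unary linear regression D_eye = a × C + b can be sketched as below. The curvature/depth numbers are synthetic placeholders, not the patent's measurements (the embodiment later reports a = -0.139 and b = 0.024 for its own data):

```python
import numpy as np

# Hypothetical calibration data: average curvature C at each concave point and
# the depth of the same point measured by hand (mm). Values are made up.
C = np.array([-30.0, -25.0, -20.0, -15.0, -10.0, -5.0])
D_measured = np.array([4.2, 3.5, 2.8, 2.1, 1.4, 0.7])

# fit D_eye = a*C + b by least squares (degree-1 polynomial fit)
a, b = np.polyfit(C, D_measured, 1)

def estimate_depth(curvature):
    """Depth estimate for a new concave point from its average curvature."""
    return a * curvature + b
```

With these synthetic values the fit recovers a = -0.14 and b = 0 essentially exactly, since the data are perfectly linear.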
As an option, the method further comprises a symmetry parameter extraction step:
transversely intercepting a plurality of cross-section images of the three-dimensional point cloud image of the target object, and extracting the circularity of the maximum cross-section and the circularity variation coefficient CV_CIR;
and longitudinally intercepting a plurality of cross-section images of the three-dimensional point cloud image of the target object, and extracting the ratio of the maximum to the minimum cross-sectional area and the area variation coefficient.
As an option, the circularity of the maximum cross-section and the circularity variation coefficient CV_CIR are calculated as:
Circularity = 4S / C²
CV_CIR = σ / μ
wherein S represents the cross-sectional area, C represents the cross-sectional perimeter, σ represents the standard deviation of the circularity of all the sections, and μ represents the average circularity of all the sections.
As an option, the method further comprises a size parameter and volume information extraction step:
calculating the minimum-volume circumscribed cuboid of the three-dimensional point cloud image according to the three principal component vectors of the target object's three-dimensional point cloud image, and thereby acquiring the length, width and height of the target object; the three-dimensional point cloud image of the target object specifically comprises the target object and a base, and the target object is placed on the base;
and constructing a three-dimensional mesh surface model based on the three-dimensional point cloud image of the target object, connecting every triangle to the center point to form tetrahedra, and summing the volumes of all tetrahedra to obtain the volume information of the target object.
As an option, before the step of extracting the size parameter and the volume information, the method further comprises the following step of point cloud position and direction normalization:
calculating the barycentric coordinate of the point cloud data according to the coordinate information of the three-dimensional point cloud image, and translating the point cloud of the three-dimensional point cloud image to enable the barycenter to be coincident with the origin;
performing a principal component transformation by taking the coordinates of all points in the three-dimensional point cloud image as a data set, determining the three principal directions from the eigenvectors, and determining the first principal direction based on the base;
and making the first principal component direction coincide with the z-axis of the coordinate system through a rotation transform, realizing normalization of the point cloud position and direction.
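The three normalization steps above (translate the centroid to the origin, find the principal directions, rotate the first principal direction onto the z-axis) can be sketched with a plain eigendecomposition. A minimal sketch, assuming the point cloud is an N×3 NumPy array:

```python
import numpy as np

def normalize_point_cloud(points):
    """Translate the centroid to the origin and rotate the cloud so its
    first principal direction lies along the z-axis."""
    P = points - points.mean(axis=0)        # gravity center -> origin
    w, V = np.linalg.eigh(np.cov(P.T))      # eigenvalues in ascending order
    # columns of V are the principal directions; the last (largest-variance)
    # one becomes the new z-axis after this rotation
    return P @ V

# demo: an elongated cloud in a random orientation
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 3)) * [0.5, 1.0, 4.0]
tilt = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # random orthonormal frame
Q = normalize_point_cloud(raw @ tilt.T)
```

After normalization the centroid sits at the origin and the variance is largest along z; reflections are ignored here, since only axis alignment matters for the later size extraction.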
As an option, the method further comprises a pre-processing step:
analyzing the neighborhood of each point in the three-dimensional point cloud image, and eliminating outliers and/or gross errors caused by measurement errors;
and resampling the point cloud data based on moving-least-squares smoothing and normal estimation, thereby repairing surface unevenness or holes produced during acquisition of the three-dimensional point cloud image.
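The neighbourhood-analysis step for outlier rejection can be sketched as a statistical filter on k-nearest-neighbour distances. This is one common scheme, not necessarily the patent's exact criterion; the brute-force distance matrix keeps it short but is only suitable for small clouds:

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Keep points whose mean distance to their k nearest neighbours is within
    std_ratio standard deviations of the cloud-wide mean of that quantity."""
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)  # column 0 is self
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

# demo: a tight cluster plus one gross error far away
rng = np.random.default_rng(1)
cloud = rng.normal(scale=0.1, size=(100, 3))
noisy = np.vstack([cloud, [[10.0, 10.0, 10.0]]])
clean = remove_statistical_outliers(noisy)
```

The single distant point dominates its own neighbour distances and is dropped, while the dense cluster passes through untouched.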
It should be further noted that the technical features corresponding to the above options can be combined with each other or replaced to form a new technical solution.
The present invention also includes a storage medium having stored thereon computer instructions which, when executed, perform the steps of the method for analyzing phenotypic data of a target object as described above.
The invention also includes a terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, the processor executing the computer instructions to perform the steps of the method for analyzing phenotypic data of a target object as described above.
Compared with the prior art, the invention has the beneficial effects that:
(1) By calculating the curvature of the three-dimensional point cloud image of the target object and clustering the points with negative curvature, the method can accurately obtain the number of concave points of the target object. Further, by analyzing the linear relation between the average curvature at each concave point and the actually measured depth, i.e. fitting the average curvature of each concave point against the measured depth, the relation between the depth of the target object's concave points and the average curvature is regressed, and the depth of the concave points is estimated accurately; the whole concave-point parameter (number and depth) calculation process is fast, accurate and objective.
(2) By calculating the circularity of the target object's cross-sections, the circularity variation coefficient, the ratio of the maximum to the minimum cross-sectional area and the area variation coefficient, the method can accurately evaluate the symmetry of the target object.
(3) By calculating the minimum-volume circumscribed cuboid of the three-dimensional point cloud image, the method can quickly and accurately acquire the length, width and height of the target object; by constructing a three-dimensional mesh surface model and calculating the volume of the target object, accurate size parameters and volume information are provided.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention.
FIG. 1 is a flowchart of a method of example 1 of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional mesh surface according to embodiment 1 of the present invention;
FIG. 3 is a schematic illustration of potato eyes of example 1 of the present invention;
FIG. 4 is a schematic diagram of the linear relation between the average curvature C_mean of potato bud eyes of embodiment 1 of the present invention and the manually measured bud-eye depth;
FIG. 5 is a schematic cross-sectional view of a potato according to example 1 of the present invention;
FIG. 6 is a schematic diagram of the three-dimensional volume calculation of potato according to example 1 of the present invention;
FIG. 7 is a flow chart of a preferred implementation of embodiment 1 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that directions or positional relationships indicated by "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like are directions or positional relationships based on the drawings, and are only for convenience of description and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention discloses a phenotype data analysis method of a target object, a storage medium and a terminal, and mainly aims to solve the problems that the existing manual phenotype evaluation technique involves a heavy workload, high cost, low speed and low efficiency, and that the accuracy of grading and classification is strongly influenced by subjective factors.
Example 1
As shown in fig. 1, in embodiment 1 a method for analyzing phenotypic data of a target object specifically comprises step S1 of calculating the number and depth of bud eyes:
S11: calculating the curvature of each point in the three-dimensional point cloud image of the potato, and clustering the points with negative curvature to obtain the number of bud eyes of the potato; specifically, the potato three-dimensional point cloud image can be acquired by various three-dimensional imaging sensors on the market, including laser scanners, structured-light cameras, and three-dimensional imaging systems based on Structure-from-Motion technology.
S12: calculating the average curvature at the point corresponding to each bud eye in the three-dimensional point cloud image, analyzing the linear relation between that average curvature and the actually measured depth, and thereby estimating the depth of each bud eye.
Specifically, the points with negative curvature in the three-dimensional point cloud image are concave points. By calculating the curvature of the three-dimensional point cloud image of the target object and clustering the points with negative curvature, a plurality of clusters can be obtained; each cluster is one bud eye, so the number of potato bud eyes is obtained accurately. More specifically, if the average curvature at a vertex is negative, the local three-dimensional mesh surface is regarded as concave, otherwise as convex. Since bud eyes are always concave, only vertices whose average curvature is negative participate in the subsequent processing: the linear relation between the average curvature at each bud eye and the actually measured depth is analyzed, i.e. the average curvature of each concave point is fitted against the measured depth and the relation between depression depth and average curvature is regressed, so that bud-eye depth is estimated accurately and the whole bud-eye parameter (number and depth) calculation is fast.
Specifically, bud-eye identification and depth estimation on the potato tuber surface are obtained by computing the mean curvature of the point cloud. The mean curvature measures how curved a surface is: it locally describes the curvature of a surface embedded in the surrounding space (for example, a two-dimensional surface embedded in three-dimensional Euclidean space). The mean curvature at a point on a spatial surface is the average of any two mutually orthogonal normal curvatures at that point. If such a pair of orthogonal curvatures is denoted K1 and K2, the mean curvature is K = (K1 + K2)/2.
Furthermore, the average curvature at each triangle vertex of the three-dimensional mesh surface of the invention is calculated as:
K(x_i) = (1 / (2A_m)) Σ_{j∈N(i)} (cot α_ij + cot β_ij)(x_i - x_j)
wherein N(i) is the set of vertices neighbouring vertex x_i in the three-dimensional mesh surface model (one for each surrounding triangle), (x_i, x_j) is the edge shared by two adjacent triangles, α_ij and β_ij are the angles of the two vertices opposite that edge, and A_m represents the surface area associated with vertex x_i over all attached triangles. A_m is calculated in an iterative manner as follows:
a) A_m is initialized to 0;
b) for each triangle T adjacent to vertex x_i: if T is a non-obtuse triangle, A_m += A_v, where the Voronoi area A_v is calculated as:
A_v = (1/8) Σ_{j∈N(i)} (cot α_ij + cot β_ij) ||x_i - x_j||²
As shown in fig. 2, for an obtuse triangle, if the angle of the triangle at vertex x_i is obtuse, A_m += area(T)/2; otherwise A_m += area(T)/4.
Furthermore, in order to reduce the amount of computation in the clustering process, noise points with negative curvature need to be removed first: experiments show that when the average curvature is above -1.8 (i.e. between -1.8 and 0) the concavity is too shallow to be a bud eye, so the threshold is set to -1.8, and only vertices whose average curvature is below this threshold participate in the subsequent clustering.
Further, as shown in fig. 3, the points with negative curvature are clustered, i.e. classified according to the density of the sample distribution. A core object not yet belonging to any class is selected arbitrarily as a seed, and the set of all samples density-reachable from it is found, forming one cluster; points from which no core object is density-reachable are marked as noise points. Another unclassified core object is then selected and its density-reachable sample set is found, giving another cluster, and so on until every core object has a class, yielding a number of clusters, i.e. a number of bud eyes. This clustering method can cluster dense data sets of arbitrary shape, whereas conventional clustering algorithms such as K-Means are generally only suitable for convex data sets. In addition, this method finds outliers (noise points) while clustering and is insensitive to outliers in the data set. Finally, the clustering result of the invention is unbiased, whereas the initial values of algorithms such as K-Means strongly influence the clustering result. The specific implementation steps of the clustering method of the present invention are shown in fig. 4.
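The density-reachability procedure described above (core objects, density-reachable sets, noise marking) matches the well-known DBSCAN algorithm. A compact sketch with an O(n²) neighbourhood search follows; the parameters eps and min_pts are assumptions, not values from the patent:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id; -1 marks noise."""
    n = len(points)
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nbrs = [np.flatnonzero(D[i] <= eps) for i in range(n)]   # includes self
    core = [len(nb) >= min_pts for nb in nbrs]
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cid
        stack = [i]
        while stack:                      # grow the density-reachable set
            j = stack.pop()
            if not core[j]:
                continue                  # border points do not expand further
            for q in nbrs[j]:
                if labels[q] == -1:
                    labels[q] = cid
                    stack.append(q)
        cid += 1
    return labels

# demo: two well-separated groups of negative-curvature points plus one stray
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0.0]])
B = A + [10.0, 0.0, 0.0]
P = np.vstack([A, B, [[50.0, 50.0, 50.0]]])
labels = dbscan(P, eps=2.0, min_pts=3)
```

Each resulting cluster corresponds to one bud eye, and the isolated point is labelled -1, mirroring the noise-marking behaviour described in the text.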
According to the method, the number of potato bud eyes and the corresponding depths are obtained through curvature calculation and clustering, which can greatly improve the efficiency of bud-eye phenotype evaluation in breeding work and is more accurate than traditional manual grading.
To further illustrate the accuracy of the invention in predicting the number of potato bud eyes, 50 potatoes are randomly selected as test samples in this embodiment, and the agreement between the manual measurements and the curvature-based software measurements is evaluated by precision (Precision) and recall (Recall), defined as:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
wherein TP represents the number of correctly predicted bud eyes, FP represents the number of non-bud-eye regions misjudged as bud eyes, and FN represents the number of bud eyes left unidentified. Precision is the proportion of correct predictions among all positive predictions, and recall is the proportion of all true bud eyes that are detected; the closer both evaluation indices are to 1, the more accurate the prediction result. In this embodiment the precision and recall are 0.96 and 0.95 respectively, indicating good bud-eye recognition and accurate counting.
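The two indices reduce to one line each; a trivial sketch with made-up counts:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# e.g. 96 bud eyes found correctly, 4 false detections, 5 missed (hypothetical)
p, r = precision_recall(96, 4, 5)
```

Both values approach 1 as false detections and misses vanish, matching the interpretation given above.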
Further, in step S12 the linear relation between the average curvature at each bud eye and the actually measured depth is analyzed: 20 potatoes are selected and the depths of 100 randomly chosen bud eyes are measured manually, and the curvature average C_mean is compared against these values. As shown in fig. 4, unary linear regression shows that C_mean is highly correlated with the manually measured depth values, with a correlation coefficient of 0.81 and a mean square error of 0.51 mm. According to the unary linear regression coefficients, the bud-eye depth of the potato in this embodiment is calculated as:
Depth (mm) = -0.139 × C_mean + 0.024
Furthermore, the method also comprises a symmetry parameter extraction step:
S21: transversely intercepting a plurality of cross-section images of the three-dimensional point cloud image of the potato, and extracting the circularity of the maximum cross-section and the circularity variation coefficient;
S22: longitudinally intercepting a plurality of cross-section images of the three-dimensional point cloud image of the potato, and extracting the ratio of the maximum to the minimum cross-sectional area and the area variation coefficient. By calculating the circularity of the potato cross-sections, the circularity variation coefficient, the maximum-to-minimum area ratio and the area variation coefficient, the symmetry of the target object can be evaluated accurately.
Specifically, in this embodiment the height of the potato point cloud is divided into 100 parts, and at each height a two-dimensional cross-section perpendicular to the z-axis of the potato point cloud coordinate system is extracted, giving a cross-section edge image. As shown in fig. 5, curve a in the image represents the cross-section edge, and rectangle b represents the minimum-area rectangle obtained by principal component analysis. The total number of pixels inside the edge represents the cross-sectional area S, and the cross-section perimeter C is the total number of pixels on the image edge. From these, the shape-symmetry parameter circularity (Circularity) of the maximum cross-section is obtained; the specific calculation formula is:
Circularity = 4S / C²
further, the roundness variation coefficient (CV _ CIR) is obtained by the roundness of 100 cross sections, and the specific calculation formula is
Figure BDA0002990580560000101
Wherein, σ and μ are the standard deviation and the mean value of the roundness of all the sections respectively. Extracting 99 longitudinal sections at intervals of 3.6 degrees around the z axis of the potato point cloud coordinate, extracting the edge of each section and calculating the total number of pixels in the edge to represent the AREA of the cross section, calculating the RATIO (RATIO _ A) of the maximum AREA to the minimum AREA, and calculating the coefficient of variation (CV _ AREA) of 99 cross sections, namely the RATIO of standard deviation to the average value, so as to obtain the potato symmetrical parameter information.
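Given per-section areas and perimeters, the four symmetry parameters can be sketched as below. The circularity follows the patent's 4S/C² form (the conventional definition carries an extra factor of π, under which a perfect circle scores 1); pixel counting from the section images is assumed to have happened already:

```python
import numpy as np

def symmetry_parameters(areas, perimeters):
    """Circularity per section, circularity CV, max/min area ratio, area CV."""
    S = np.asarray(areas, dtype=float)
    C = np.asarray(perimeters, dtype=float)
    circ = 4.0 * S / C ** 2          # patent's circularity form (no pi factor)
    cv_cir = circ.std() / circ.mean()
    ratio_a = S.max() / S.min()
    cv_area = S.std() / S.mean()
    return circ, cv_cir, ratio_a, cv_area

# demo: three identical circular sections of radius 1 (area pi, perimeter 2*pi)
circ, cv_cir, ratio_a, cv_area = symmetry_parameters(
    [np.pi] * 3, [2.0 * np.pi] * 3)
```

A perfectly symmetric, circular object drives both variation coefficients to zero and the area ratio to one, which is the intended reading of these parameters.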
Further, the method also comprises the steps of extracting size parameters and volume information:
S31: calculating the minimum-volume circumscribed cuboid of the three-dimensional point cloud image according to the three principal component vectors of the potato's three-dimensional point cloud image, and thereby acquiring the length, width and height of the potato; the three-dimensional point cloud image of the potato specifically comprises the potato and a base, with the potato placed on the base;
S32: constructing a three-dimensional mesh surface model based on the three-dimensional point cloud image of the potato, connecting every triangle to the center point to form tetrahedra, and summing the volumes of all tetrahedra to obtain the volume of the potato.
Further, in step S31 the potato tuber is placed on a blue cuboid base; the base color contrasts strongly with the potato color, and using a blue base facilitates the subsequent image segmentation. The point cloud data are stored in ply format in order to retain the color information. Preferably, the length and width of the base are smaller than those of the potato tuber; in this embodiment the height of the base is 60 mm and the length and width are 30 cm each. More specifically, the principal components of the point cloud coordinates are obtained by computing, for each eigenvector of the point cloud data, the variance of the samples along it; the eigenvector direction with the largest variance, i.e. the direction in which the point cloud data set is most dispersed and varies most, is the first principal component. The coordinates of the minimum circumscribed cuboid are then obtained from the three principal component vectors; the height of the cuboid is the sum of the potato height and the base height, and since the length and width of the base are both smaller than those of the potato, the length and width of the cuboid correspond to those of the potato. Further, the point cloud color information is converted from the RGB color space to the HSV color space, and based on the difference between the potato color and the blue of the base, the point cloud belonging to the base can be removed by setting a threshold on the H component.
The height of the base in the coordinate system is obtained from the difference between the maximum and minimum z values of the point cloud corresponding to the base, so the height of the potato is obtained by subtracting the base height from the height of the minimum-volume circumscribed cuboid; and since the actual size of the base is known, the actual height, length and width of the potato tuber can be calculated.
Further, in step S32 of this embodiment, the point cloud and its normal vectors are converted into a three-dimensional mesh by Poisson surface reconstruction, which handles sharp edges well, introduces no additional smoothing, and is robust to noise. As shown in fig. 6, the whole three-dimensional mesh model can be regarded as composed of many tetrahedra sharing the center of gravity as a common apex, and the total volume is the sum of the volumes of all the tetrahedra; the specific calculation formulas are as follows:
$$V_i = \frac{1}{6}\left|x_{i1}(y_{i2}z_{i3}-y_{i3}z_{i2})+y_{i1}(z_{i2}x_{i3}-z_{i3}x_{i2})+z_{i1}(x_{i2}y_{i3}-x_{i3}y_{i2})\right|$$

$$V_{total}=\sum_{i=1}^{n}V_i$$

wherein $V_i$ is the volume of the i-th tetrahedron; $(x_{i1},y_{i1},z_{i1})$, $(x_{i2},y_{i2},z_{i2})$ and $(x_{i3},y_{i3},z_{i3})$ are the coordinates, taken relative to the center of gravity, of the three vertices of the i-th triangle on the three-dimensional mesh surface; and the total volume $V_{total}$ of the closed three-dimensional mesh is the sum of the volumes of all the tetrahedra.
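As a rough illustration (not taken from the patent), the tetrahedron summation can be sketched as follows. It assumes a watertight mesh whose surface is star-shaped with respect to the center of gravity, as is the case for convex tubers; the function name and the cube test mesh are invented for the example.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Sum the volumes of the tetrahedra formed by each triangular face
    and the center of gravity of the mesh."""
    c = vertices.mean(axis=0)          # center of gravity as common apex
    v1 = vertices[faces[:, 0]] - c
    v2 = vertices[faces[:, 1]] - c
    v3 = vertices[faces[:, 2]] - c
    # |det[v1 v2 v3]| / 6 is the volume of one tetrahedron
    vols = np.abs(np.einsum('ij,ij->i', v1, np.cross(v2, v3))) / 6.0
    return vols.sum()

# unit cube centered at the origin: 8 vertices, 12 triangles
verts = np.array([[x, y, z] for x in (-0.5, 0.5)
                            for y in (-0.5, 0.5)
                            for z in (-0.5, 0.5)])
faces = np.array([
    [0, 1, 3], [0, 3, 2],   # face x = -0.5
    [4, 6, 7], [4, 7, 5],   # face x = +0.5
    [0, 4, 5], [0, 5, 1],   # face y = -0.5
    [2, 3, 7], [2, 7, 6],   # face y = +0.5
    [0, 2, 6], [0, 6, 4],   # face z = -0.5
    [1, 5, 7], [1, 7, 3],   # face z = +0.5
])
vol = mesh_volume(verts, faces)      # unit cube, so vol is 1.0
```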
Furthermore, before the step of extracting the size parameters and volume information, the method also comprises a step of normalizing the position and direction of the point cloud:
S41: calculating the barycentric coordinates of the point cloud data according to the coordinate information of the three-dimensional point cloud image, and translating the point cloud of the three-dimensional point cloud image so that the barycenter coincides with the origin; the barycentric coordinates are obtained by averaging the coordinates in each of the three dimensions.
S42: performing a principal component transformation with the coordinates of all points in the three-dimensional point cloud image as the data set, determining the three principal directions from the eigenvectors, and determining the first principal direction based on the base; the direction of the first principal component coincides with the height direction of the potato and the base.
S43: rotating the point cloud so that the first principal component direction coincides with the z axis of the coordinate system, thereby achieving the normalization of the point cloud position and direction.
Further, the method of the present invention further comprises a preprocessing step:
S51: analyzing the neighborhood of each point in the three-dimensional point cloud image, and eliminating outliers and/or gross errors caused by measurement errors;
S52: resampling the point cloud data based on moving-least-squares smoothing and normal estimation, thereby repairing surface unevenness or holes produced during acquisition of the three-dimensional point cloud image, so as to eliminate errors introduced in the three-dimensional reconstruction process and ensure the accuracy of subsequent calculations.
Further, in step S51, analyzing the neighborhood of each point in the three-dimensional point cloud image means applying a statistical outlier removal filter to eliminate outliers or gross errors caused by measurement errors. Specifically, a statistical analysis is performed on the neighborhood of each point: the mean distance from the point to all of its neighboring points is calculated. Assuming that the result follows a Gaussian distribution whose shape is determined by its mean and standard deviation, points whose mean distance falls outside the standard range (defined by the global mean distance and standard deviation) can be declared outliers and removed from the data. The mean distance $d_{mean}$ of the neighboring points and the standard deviation $\sigma$ are calculated as follows:

$$d_{mean}=(d_1+d_2+d_3+\cdots+d_n)/n$$

$$\sigma=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(d_i-d_{mean})^2}$$

wherein $d_i$ is the distance to the i-th neighboring point and n is the total number of neighboring points. When the distance $d_i$ of a neighboring point $p_i$ satisfies $d_i > d_{mean} + 1\times\sigma$, $p_i$ is regarded as an outlier and removed, which prevents unevenly distributed point cloud data from degrading the accuracy of subsequent calculations.
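The filter described by the two formulas above can be sketched as follows. This is a brute-force nearest-neighbor version for small clouds, not the patent's implementation; the neighborhood size `k` is an assumed parameter.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, n_sigma=1.0):
    """Remove points whose mean distance to their k nearest neighbors
    exceeds d_mean + n_sigma * sigma (global mean and std deviation)."""
    # full pairwise distance matrix; fine for a few thousand points
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_d = d[:, 1:k + 1].mean(axis=1)   # skip column 0 (self, distance 0)
    d_mean, sigma = mean_d.mean(), mean_d.std()
    return points[mean_d <= d_mean + n_sigma * sigma]

# dense unit cluster plus one far-away outlier
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(scale=0.1, size=(200, 3)),
                 [[50.0, 50.0, 50.0]]])
clean = remove_statistical_outliers(pts)   # the lone far point is dropped
```

Production pipelines would use a k-d tree (e.g. PCL's StatisticalOutlierRemoval, which this step resembles) instead of the quadratic distance matrix.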
As a preferred embodiment, as shown in fig. 7, on the basis of obtaining a three-dimensional point cloud image of potatoes, the present invention sequentially performs an image preprocessing step, a point cloud position and direction normalization processing step, a size parameter and volume information extraction step, a symmetry parameter extraction step, and a bud eye number and depth calculation step, so as to quickly and accurately extract phenotypic parameter information of each potato.
The method mainly overcomes the subjectivity, limitation and ambiguity of naked-eye judgment in the appearance phenotype analysis of potato tubers, and quantifies shape symmetry, which cannot be evaluated in traditional breeding. It is simple to operate and convenient to use, can provide more accurate and objective phenotype data for bioinformatics researchers and potato breeders, and accelerates breeding work.
Of course, the present invention can also be extended to other fields, such as fruit breeding, where phenotypic analysis of fruit appearance can likewise provide accurate and objective phenotypic data.
Example 2
The present embodiment provides a storage medium having the same inventive concept as embodiment 1, and having stored thereon computer instructions which, when executed, perform the steps of the method for analyzing phenotypic data of a target object described in embodiment 1.
Based on such understanding, the technical solution of the present embodiment or parts of the technical solution may be essentially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Example 3
The present embodiment also provides a terminal based on the same inventive concept as embodiment 1, comprising a memory and a processor, wherein the memory stores computer instructions executable on the processor, and the processor, when executing the computer instructions, performs the steps of the method for analyzing phenotype data of the target object in embodiment 1. The processor may be a single-core or multi-core central processing unit, an application-specific integrated circuit, or one or more integrated circuits configured to implement the present invention.
Each functional unit in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above detailed description is a specific explanation of the invention, and the invention should not be construed as limited to this description; it will be apparent to those skilled in the art that various modifications and substitutions can be made without departing from the spirit of the invention.

Claims (10)

1. A method of analyzing phenotypic data of a target object, characterized by comprising a step of calculating the number and depth of concave points:
calculating the curvature of each point in the three-dimensional point cloud image of the target object, and clustering the points with negative curvature to obtain the number of concave points of the target object;
and calculating the average curvature of the points corresponding to each concave point in the three-dimensional point cloud image, analyzing the linear relation between the average curvature of the points corresponding to the concave points and the actually measured depth, and thereby estimating the depth of each concave point.
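The patent does not specify a clustering algorithm for the negative-curvature points; one simple possibility, sketched here purely for illustration, is to group them into connected components under a distance threshold and count the components. All names, the radius and the toy data are invented.

```python
import numpy as np

def count_concave_regions(points, curvature, radius=1.0):
    """Count clusters of negative-curvature points: keep points with
    curvature < 0 and flood-fill connected components, linking two
    points when they lie within `radius` of each other."""
    pts = points[curvature < 0]
    n = len(pts)
    if n == 0:
        return 0
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adjacent = dist <= radius
    labels = -np.ones(n, dtype=int)     # -1 marks "not yet visited"
    n_clusters = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = n_clusters          # flood fill one component
        stack = [i]
        while stack:
            j = stack.pop()
            for m in np.flatnonzero(adjacent[j] & (labels == -1)):
                labels[m] = n_clusters
                stack.append(m)
        n_clusters += 1
    return n_clusters

# two tight negative-curvature "eyes" far apart, plus one convex point
pts = np.array([[0, 0, 0], [0.2, 0, 0], [10, 0, 0], [10.2, 0, 0], [5, 5, 5]])
curv = np.array([-0.5, -0.4, -0.6, -0.3, 0.7])
n_eyes = count_concave_regions(pts, curv)   # two clusters found
```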
2. The method for analyzing phenotypic data of a target object according to claim 1, wherein: the calculation formula of the average curvature is as follows:
$$C(x_i)=\frac{1}{2A_m}\left\|\sum_{j\in N(i)}(\cot\alpha_{ij}+\cot\beta_{ij})(x_i-x_j)\right\|$$

wherein $N(i)$ is the one-ring neighborhood of the vertex $x_i$ in the three-dimensional mesh surface model (the vertices of the triangles surrounding $x_i$), $(x_i,x_j)$ is the edge shared by two adjacent triangles, $\alpha_{ij}$ and $\beta_{ij}$ are respectively the two angles opposite the edge $(x_i,x_j)$, and $A_m$ represents the surface area of all triangles attached to the vertex $x_i$.
3. The method for analyzing phenotypic data of a target object according to claim 1, wherein: the linear relation between the average curvature of the corresponding points of the concave points and the actual measurement depth is as follows:
$$D_{eye}=a\times C+b$$

wherein $D_{eye}$ represents the measured depth of the concave point, a and b are respectively the correlation coefficients obtained by unary linear regression, and C represents the average curvature value of a single concave point.
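The unary linear regression of claim 3 can be reproduced with an ordinary least-squares fit. The calibration numbers below are invented for illustration and follow the line D_eye = 2C + 1 exactly.

```python
import numpy as np

# curvature values of calibration pits and their measured depths (toy data)
C_cal = np.array([0.5, 1.0, 1.5, 2.0])
D_cal = 2.0 * C_cal + 1.0

# degree-1 polyfit returns the slope a and intercept b of D = a*C + b
a, b = np.polyfit(C_cal, D_cal, 1)

# estimated depth of a new pit whose average curvature is 1.2
depth_new = a * 1.2 + b
```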
4. The method for analyzing phenotypic data of a target object according to claim 1, wherein: the method also comprises a symmetry parameter extraction step:
transversely intercepting a plurality of cross-section images of the three-dimensional point cloud image of the target object, and extracting the circularity (Circularity) of the maximum cross-section and the coefficient of variation of circularity $CV_{CIR}$ of the cross-section images;
and longitudinally intercepting a plurality of cross-section images of the three-dimensional point cloud image of the target object, and extracting the ratio of the maximum area to the minimum area and the coefficient of variation of the areas of the cross-section images.
5. The method for analyzing phenotypic data of a target object according to claim 4, wherein: the circularity of the maximum cross-section and the coefficient of variation of circularity $CV_{CIR}$ of the cross-section images are calculated as follows:

$$Circularity = 4\pi S/C^2$$

$$CV_{CIR}=\sigma/\mu$$

wherein S represents the cross-sectional area, C represents the cross-sectional perimeter, $\sigma$ represents the standard deviation of the circularity of all cross-sections, and $\mu$ represents the mean circularity of all cross-sections.
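A small sketch of the two quantities in claim 5, using the standard 4πS/C² definition of circularity (1.0 for a perfect circle); the helper names and the circle/square cross-sections are toy inputs, not from the patent.

```python
import numpy as np

def circularity(area, perimeter):
    """Circularity = 4*pi*S / C^2: equals 1.0 for a perfect circle and
    decreases for less round cross-sections."""
    return 4.0 * np.pi * area / perimeter ** 2

def cv(values):
    """Coefficient of variation: standard deviation over mean."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

# a circle of radius 2 and a square of side 2 as sample cross-sections
circ_circle = circularity(np.pi * 2**2, 2 * np.pi * 2)
circ_square = circularity(4.0, 8.0)
cv_cir = cv([circ_circle, circ_square])
```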
6. The method for analyzing phenotypic data of a target object according to claim 1, wherein: the method further comprises a step of extracting size parameters and volume information:
calculating a minimum-volume circumscribed cuboid of the three-dimensional point cloud image according to the three principal component vectors of the three-dimensional point cloud image of the target object, and further acquiring the length, width and height information of the target object; the three-dimensional point cloud image of the target object specifically comprises the target object and a base, the target object being placed on the base;
and constructing a three-dimensional mesh surface model based on the three-dimensional point cloud image of the target object, connecting all the triangles to a central point to form tetrahedra, calculating the sum of the volumes of all the tetrahedra, and thereby calculating the volume information of the target object.
7. The method for analyzing phenotypic data of a target object according to claim 6, wherein: before the step of extracting the size parameters and volume information, the method further comprises a step of normalizing the position and direction of the point cloud:
calculating the barycentric coordinates of the point cloud data according to the coordinate information of the three-dimensional point cloud image, and translating the point cloud of the three-dimensional point cloud image so that the barycenter coincides with the origin;
performing a principal component transformation with the coordinates of all points in the three-dimensional point cloud image as the data set, determining the three principal directions from the eigenvectors, and determining the first principal direction based on the base;
and rotating the point cloud so that the first principal component direction coincides with the z axis of the coordinate system, thereby achieving the normalization of the point cloud position and direction.
8. The method for analyzing phenotypic data of a target object according to claim 6, wherein: the method further comprises a pre-processing step:
analyzing the neighborhood of each point in the three-dimensional point cloud image, and eliminating outliers and/or gross errors caused by measurement errors;
and resampling the point cloud data based on moving-least-squares smoothing and normal estimation, thereby repairing surface unevenness or holes produced during acquisition of the three-dimensional point cloud image.
9. A storage medium having stored thereon computer instructions, characterized in that: the computer instructions when executed perform the steps of the method for analyzing phenotypic data of a target object according to any one of claims 1-8.
10. A terminal comprising a memory and a processor, the memory storing computer instructions executable on the processor, characterized in that: the processor, when executing the computer instructions, performs the steps of the method for analyzing phenotypic data of a target object according to any one of claims 1-8.
CN202110314527.1A 2021-03-24 2021-03-24 Phenotype data analysis method of target object, storage medium and terminal Pending CN112927287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110314527.1A CN112927287A (en) 2021-03-24 2021-03-24 Phenotype data analysis method of target object, storage medium and terminal


Publications (1)

Publication Number Publication Date
CN112927287A true CN112927287A (en) 2021-06-08

Family

ID=76175868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110314527.1A Pending CN112927287A (en) 2021-03-24 2021-03-24 Phenotype data analysis method of target object, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112927287A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643350A (en) * 2021-07-21 2021-11-12 宜宾中星技术智能系统有限公司 Method, device and terminal equipment for performing stereo measurement on video picture


Patent Citations (8)

Publication number Priority date Publication date Assignee Title
EP0898247A2 (en) * 1997-08-15 1999-02-24 The Institute Of Physical & Chemical Research Method of synthesizing measurement data of free-form surface
CN201602022U (en) * 2010-03-14 2010-10-13 林桂发 Oriented groove for cultivation of Chinese yam rhizome supergene
CN104807406A (en) * 2014-01-27 2015-07-29 康耐视公司 System and method for determining 3d surface features and irregularities on an object
US20150213606A1 (en) * 2014-01-27 2015-07-30 Cognex Corporation System and method for determining 3d surface features and irregularities on an object
WO2018036138A1 (en) * 2016-08-24 2018-03-01 大连理工大学 Method for processing actually measured three-dimensional morphology point cloud data of thin-wall shell obtained for digital photography
JPWO2020225886A1 (en) * 2019-05-08 2020-11-12
WO2021000719A1 (en) * 2019-06-30 2021-01-07 华中科技大学 Three-dimensional point cloud-based robot processing boundary extraction method for small curvature thin-walled part
CN110647835A (en) * 2019-09-18 2020-01-03 合肥中科智驰科技有限公司 Target detection and classification method and system based on 3D point cloud data

Non-Patent Citations (1)

Title
TIAN Haitao; ZHAO Jun; PU Fupeng: "Segmentation and localization method for potato bud eye images", Acta Agriculturae Zhejiangensis, no. 11, 25 November 2016 (2016-11-25) *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN113643350A (en) * 2021-07-21 2021-11-12 宜宾中星技术智能系统有限公司 Method, device and terminal equipment for performing stereo measurement on video picture
CN113643350B (en) * 2021-07-21 2023-09-12 宜宾中星技术智能系统有限公司 Method, device and terminal equipment for carrying out stereo measurement on video picture


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220829

Address after: Room 10-1026, 10th Floor, Building 6, No. 1366 Hongfeng Road, Huzhou City, Zhejiang Province 313000

Applicant after: Translate Technology (Zhejiang) Co.,Ltd.

Address before: 1213-223, 12 / F, building 3, No. 1366, Hongfeng Road, Kangshan street, Huzhou City, Zhejiang Province, 313000

Applicant before: Heshu Technology (Zhejiang) Co.,Ltd.
