CN112102496B - Cattle physique measurement method, model training method and system

Cattle physique measurement method, model training method and system

Info

Publication number
CN112102496B
CN112102496B (application CN202011034291.8A)
Authority
CN
China
Prior art keywords
point
points
cloud data
dimensional lattice
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011034291.8A
Other languages
Chinese (zh)
Other versions
CN112102496A (en)
Inventor
赵拴平
金海�
贾玉堂
徐磊
吴娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Animal Husbandry and Veterinary Medicine of Anhui Academy of Agricultural Sciences
Original Assignee
Institute of Animal Husbandry and Veterinary Medicine of Anhui Academy of Agricultural Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Animal Husbandry and Veterinary Medicine of Anhui Academy of Agricultural Sciences filed Critical Institute of Animal Husbandry and Veterinary Medicine of Anhui Academy of Agricultural Sciences
Priority to CN202011034291.8A
Publication of CN112102496A
Application granted
Publication of CN112102496B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images


Abstract

The invention discloses a cattle physique measurement method, a model training method and a system, relating to artificial intelligence technology. The method comprises the following steps: acquiring 3D point cloud data of an object to be measured; acquiring a three-dimensional lattice in the same coordinate system as the 3D point cloud data; mapping the position information of each point of the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and the points of the three-dimensional lattice, to obtain lattice information; obtaining a multi-channel image from the lattice information; and inputting the multi-channel image into a trained model to obtain a measurement result. The invention can improve the precision of the measurement result.

Description

Cattle physique measurement method, model training method and system
Technical Field
The invention relates to artificial intelligence technology, and in particular to a cattle physique measurement method, a model training method and a system.
Background
Cattle body size measurement mainly covers data such as body height, oblique body length, hip-cross height, chest girth, chest depth and abdominal girth. Body size trait data reflect the growth and development of cattle and can be used directly to evaluate their nutritional condition. Body size data at different months of age are also the basis for performance measurement in the cattle breeding process. Rapid and accurate measurement of beef cattle body size traits therefore has very high production value.
Traditionally, beef cattle body size data are measured manually, applying tools such as tape measures and measuring sticks directly to the animal. Because beef cattle are large, at least two people must cooperate to complete a full set of body size measurements; the process is time-consuming and laborious and carries a certain risk. In addition, because staff differ in measurement experience, manual measurement introduces a certain error into the results.
Non-contact measurement of cattle saves time and labor and avoids drawbacks such as injury to the animal; it is one of the key problems in the livestock industry. At present, one approach uses 2D visual imaging to detect and segment the cattle body in an image and performs the measurement on the image; another performs 3D imaging of the animal, specifies on the 3D data the positions of the quantities to be measured, and directly calculates the distance between two points.
2D-based methods are easily affected by factors such as illumination and cannot segment the cattle body accurately. In addition, since 2D imaging carries no depth information, quantities such as the chest girth of a cow cannot be measured. 3D-based methods usually locate the specific positions to be measured on the three-dimensional data and calculate the distances between the corresponding points; this way of working is extremely cumbersome and is susceptible to point cloud noise.
In recent years, with the strong performance of deep learning, some researchers have introduced deep learning into the processing of 3D point cloud data. Such methods typically arrange n three-dimensional points into an n×3 matrix and use operations such as convolution and pooling to accomplish the task. However, this ignores the spatial relationships among the point cloud data, and the accuracy is not ideal.
Disclosure of Invention
In view of the above, the present invention aims to provide a physique measurement method, a model training method and a system that improve measurement precision.
According to a first scheme provided by the embodiment of the invention:
a method of physical measurement comprising the steps of:
acquiring 3D point cloud data of an object to be measured;
acquiring a three-dimensional lattice in the same coordinate system as the 3D point cloud data, wherein the size of the three-dimensional lattice is A×B×C, A, B and C are positive integers, and the distances between any two adjacent points along the same axis of the three-dimensional lattice are the same;
mapping the position information of each point of the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and the points of the three-dimensional lattice, to obtain lattice information;
obtaining a multichannel image according to the lattice information;
and inputting the multichannel image into a trained model to obtain a measurement result.
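Purely as an illustration of how these steps compose, a minimal NumPy sketch follows; the function and parameter names (`measure`, `lattice`, `sigma`) are our own assumptions, the Gaussian mapping is one of the two schemes described later, and a trained `model` is assumed to be available:

```python
import numpy as np

def measure(cloud, lattice, shape, model, sigma=1.0):
    """Sketch of the claimed flow (names and defaults are illustrative).
    cloud: (N, 3) 3D point cloud of the object to be measured.
    lattice: (A*B*C, 3) lattice coordinates in the same coordinate system,
    flattened in x-major order (z varying fastest). shape: (A, B, C)."""
    a, b, c = shape
    # Map point positions into the lattice: here the Gaussian support
    # scheme, where every cloud point supports every lattice point.
    d2 = ((lattice[:, None, :] - cloud[None, :, :]) ** 2).sum(axis=-1)
    gray = np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)   # lattice information
    # Lattice -> multi-channel image: X -> columns, Y -> rows, Z -> channels.
    image = gray.reshape(a, b, c).transpose(1, 0, 2)
    return model(image)                                   # measurement result
```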
In some embodiments, the three-dimensional lattice is a three-dimensional matrix in which the distances between any two adjacent points are equal, two points being adjacent meaning that each is the point nearest to the other.
In some embodiments, the lattice information is represented by gray values of points in the three-dimensional lattice.
In some embodiments, mapping the position information of each point of the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and the points of the three-dimensional lattice to obtain lattice information includes:
calculating gray information for each point in the three-dimensional lattice, wherein the gray information of a point in the three-dimensional lattice is calculated from the distances between points in the 3D point cloud data and that point, and the gray value of each point of the multi-channel image is represented by the gray value of the point at the corresponding position in the lattice information.
In some embodiments, the gray information of a point in the three-dimensional lattice is calculated from the distances between points in the 3D point cloud data and that point, specifically by the following formula:

$$M(m)=\frac{1}{t_s}\sum_{k=1}^{t_s}\left\lVert s-C_k^{s}\right\rVert_2$$

wherein $\lVert\cdot\rVert_2$ is the L2 norm, used to calculate the coordinate distance between two points; $M(m)$ represents the gray value of point $m$ in the multi-channel image $M$; $s$ refers to the point corresponding to point $m$ in the three-dimensional matrix; $C_k^{s}$, a point in the 3D point cloud data, represents the coordinate of the $k$-th support point of point $s$; and $t_s$ represents the number of support points of point $s$. When the nearest point of the three-dimensional lattice to a point $C_k$ is point $s$, point $C_k$ is called a support point of point $s$.
In some embodiments, the gray information of a point in the three-dimensional lattice is calculated from the distances between points in the 3D point cloud data and that point, specifically by the following formula:

$$M(m)=\sum_{k=1}^{T}\exp\!\left(-\frac{\left\lVert s-C_k\right\rVert_2^{2}}{2\sigma^{2}}\right)$$

wherein $\sigma$ is a value set by the user; $\lVert\cdot\rVert_2$ is the L2 norm, used to calculate the coordinate distance between two points; $M(m)$ represents the gray value of point $m$ in the multi-channel image $M$; $s$ refers to the point corresponding to point $m$ in the three-dimensional matrix; $C_k$, a point in the 3D point cloud data, represents the coordinate of the $k$-th support point of point $s$; and $T$ represents the number of support points of point $s$, all points in the 3D point cloud data being support points of point $s$.
In some embodiments, the model is a convolutional neural network model.
According to a second scheme provided by the embodiment of the invention:
a physical measurement model training method comprises the following steps:
acquiring a plurality of training samples and labels corresponding to the training samples, wherein the training samples are 3D point cloud data;
acquiring a three-dimensional lattice in the same coordinate system as the 3D point cloud data, wherein the size of the three-dimensional lattice is A×B×C, A, B and C are positive integers, and the distances between any two adjacent points along the same axis of the three-dimensional lattice are the same;
initializing model parameters;
training the model through a plurality of training samples and labels corresponding to the training samples until a stopping condition is met;
in each training iteration, the method comprises the following steps:
mapping the position information of each point of the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and the points of the three-dimensional lattice, to obtain lattice information;
obtaining a multichannel image according to the lattice information;
inputting the multichannel image into a current model to obtain a measurement result;
and updating the model parameters according to the measurement results and the labels corresponding to the training samples.
According to a third scheme provided by the embodiment of the invention:
a physical measurement system comprising:
one or more processors;
and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the cattle physique measurement method described above.
According to a fourth scheme provided by the embodiment of the invention:
a physical measurement model training method comprises the following steps:
one or more processors;
and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the cattle physique measurement model training method described above.
As can be seen from the above embodiments, this solution has the following technical effect: the position information of the point cloud data is mapped into the three-dimensional lattice, and a multi-channel image is obtained from the resulting lattice information, so that the position information in the point cloud data is converted into a multi-channel image; a final measurement result is then obtained through a trained model.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is a flow chart of a cattle physique measurement method according to an embodiment of the present invention;
FIG. 2 is another schematic diagram of the cattle physique measurement method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a cattle physique measurement model training method according to an embodiment of the present invention.
Detailed Description
The terms appearing in the embodiments of the present invention are explained below to assist in understanding the embodiments of the present invention.
Artificial intelligence (AI) is a new technical science that studies and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence.
Artificial neural networks (ANNs), also simply called neural networks (NNs) or connection models, are algorithmic mathematical models that mimic the behavior of animal neural networks and perform distributed parallel information processing. Such a network relies on the complexity of the system and processes information by adjusting the interconnections among a large number of internal nodes.
Convolutional neural networks (CNNs) are a class of feedforward neural networks that contain convolution computations and have a deep structure, and they are among the representative algorithms of deep learning. CNNs have representation learning capability and can produce shift-invariant classifications of input information according to their hierarchical structure, for which reason they are also called "shift-invariant artificial neural networks" (SIANN).
Point cloud: in reverse engineering, the set of points sampled from a product's outer surface by a measuring instrument is called a point cloud. Point clouds obtained with three-dimensional coordinate measuring machines usually contain fewer points with larger point-to-point spacing and are called sparse point clouds, while those obtained with three-dimensional laser scanners or photographic scanners contain more and denser points and are called dense point clouds.
In the related art, deep learning has been introduced into the processing of 3D point cloud data. Such methods typically arrange n three-dimensional points into an n×3 matrix and use operations such as convolution and pooling to accomplish the task, which gives a certain discrimination capability in tasks such as object classification. However, directly arranging the three-dimensional points into an n×3 matrix ignores their spatial adjacency, so convolution cannot accurately extract the local structure of the object, and a measurement task is difficult to realize accurately, because it is precisely the point-to-point relationships at local positions that contribute to measurement. To measure the cattle body with deep learning, the invention therefore converts the 3D point cloud data into multi-channel image data carrying specific local spatial structure information and then regresses the cattle body size data with existing deep learning techniques. Thanks to the strong feature mining capability of deep learning, the user does not need to specify particular measurement points, and the relevant body data can be given directly.
Referring to FIGS. 1 and 2, this embodiment discloses a physique measurement method comprising the following steps:
step 110, obtaining 3D point cloud data of the object to be measured. The data acquired in the step can be acquired from a hard disk, or 3D point cloud data can be acquired by scanning the cattle body by using binocular vision or laser scanning equipment on site, and in practical application, only one side of the cattle can be scanned when the 3D point cloud is constructed for the cattle due to bilateral symmetry of Niu Cheng. Thus, the data volume can be reduced, and the model volume and the operation amount can be reduced. Similarly, in the process of training the model, the training set can be formed by scanning the volumes of a plurality of cows and measuring corresponding data as labels by a manual method.
Step 120, acquiring a three-dimensional lattice in the same coordinate system as the 3D point cloud data, wherein the size of the three-dimensional lattice is A×B×C, A, B and C are positive integers, and the distances between any two adjacent points along the same axis are the same. A three-dimensional lattice can be constructed in the coordinate system of the 3D point cloud data with A, B and C points along the X, Y and Z axes, respectively. In this embodiment, the lattice information can be mapped into a multi-channel image as long as the lattice points on the same axis are equally spaced. Of course, in some embodiments, to reduce the amount of computation, the distances between all adjacent points may be set to be the same, and A, B and C may be reduced.
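A minimal sketch of constructing such a lattice with NumPy; the function name, origin and per-axis spacings are illustrative assumptions, and only the equal spacing within each axis is required by this step:

```python
import numpy as np

def build_lattice(a, b, c, dx=1.0, dy=1.0, dz=1.0, origin=(0.0, 0.0, 0.0)):
    """Regularly arranged point set S: A, B, C points along X, Y, Z.
    dx, dy, dz may differ; within one axis the spacing is constant."""
    xs = origin[0] + dx * np.arange(a)
    ys = origin[1] + dy * np.arange(b)
    zs = origin[2] + dz * np.arange(c)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)
    return grid.reshape(-1, 3)   # (A*B*C, 3), x-major order, z fastest
```

Setting dx = dy = dz yields the fully equidistant variant mentioned above.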
Step 130, mapping the position information of each point of the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and the points of the three-dimensional lattice, to obtain lattice information.
By mapping according to the relationships between points of the point cloud data and points of the three-dimensional lattice, the spatial relationships among the points of the point cloud data can be carried into the three-dimensional lattice, so that the model can learn this information and produce accurate results. It should be understood that what is mapped is the position information of the data points whose positional relationship to a point of the three-dimensional lattice satisfies a certain condition. This embodiment introduces the concept of support points: each point of the three-dimensional lattice has 0 to n support points, and either all points of the point cloud data serve as support points of every lattice point, or only the data points close to a given lattice point serve as its support points. The purpose is to determine, through spatial relationships, which data points of the point cloud are mapped into a given point of the three-dimensional lattice, thereby extracting local spatial relationship information.
Step 140, obtaining a multi-channel image from the lattice information. In this step, the points of the three-dimensional lattice can be mapped onto the multi-channel image from left to right, top to bottom and front to back: the X-direction coordinate maps to the columns of the image, the Y-direction coordinate to the rows, and the Z-direction coordinate to the channels.
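Assuming the lattice gray values are stored in a flat array in the same x-major order as the sketch above, this readout reduces to a reshape and a transpose (names are illustrative):

```python
import numpy as np

def lattice_to_multichannel(gray_flat, a, b, c):
    """Turn A*B*C lattice gray values into an image with B rows (Y),
    A columns (X) and C channels (Z)."""
    g = gray_flat.reshape(a, b, c)   # indexed as [x, y, z]
    return g.transpose(1, 0, 2)      # -> [row = y, col = x, channel = z]
```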
Step 150, inputting the multi-channel image into a trained model to obtain the measurement result. The model selected in this step may be a convolutional neural network, which is well suited to processing multi-channel images.
A convolutional neural network structure is given below:
after the 3D point cloud is converted into the multi-channel image, the embodiment of the invention can use a common deep learning network to return the bovine body size information. The input is a multi-channel image, and the label is a body ruler value. The former network adopts convolution and pooling layers (the structure of two convolution layers is shown in table 1, and the practical application is not limited to two layers, but can be multiple layers). The rear of the network adopts a full connection layer, and finally the information is outputted as the bovine body ruler information. Assuming that the size of the multi-channel image calculated after equally spaced spreading is 224 x 30, a specific network structure is given in table 1.
TABLE 1

Layer             Parameters          Output
Input                                 224×224×30
Convolution       3×3×30@16           224×224×16
Max pooling                           112×112×16
Convolution       3×3×16@32           112×112×32
Max pooling                           56×56×32
Convolution       3×3×32@48           56×56×48
Max pooling                           28×28×48
Convolution       3×3×48@56           28×28×56
Max pooling                           14×14×56
Convolution       3×3×56@64           14×14×64
Max pooling                           7×7×64
Feature flatten   7×7×64 = 1×3136     1×3136
Fully connected   3136×1000           1×1000
Fully connected   1000×100            1×100
Fully connected   100×L               1×L
Here 224×224×30 means that the image size is 224×224 and that there are 30 images in total, i.e. 30 channels. 3×3×30@16 means that there are 16 filters with convolution kernel size 3×3×30; these are the parameters of the current layer. After each max pooling, the width and height of the image become half of the original. Feature flattening pulls the multidimensional tensor into a one-dimensional vector without changing the data content. The final output length L is the number of cattle body size measurements.
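For concreteness, a hedged PyTorch rendering of the Table 1 structure is sketched below. The layer sizes follow the table; the padding, the ReLU activations, the class name and the default value of L are our assumptions, since the text does not specify them:

```python
import torch
import torch.nn as nn

class BodySizeNet(nn.Module):
    """Sketch of the Table 1 network: input (30, 224, 224), output L values."""

    def __init__(self, in_channels: int = 30, out_dim: int = 6):
        # out_dim = L; 6 would match the six measures named in the Background
        # (body height, oblique body length, hip-cross height, chest girth,
        # chest depth, abdominal girth), but that is only an assumption here.
        super().__init__()
        chans = [in_channels, 16, 32, 48, 56, 64]
        blocks = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            blocks += [
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),  # 3x3 kernels
                nn.ReLU(inplace=True),                           # assumed
                nn.MaxPool2d(2),                                 # halves H and W
            ]
        self.features = nn.Sequential(*blocks)   # 224 -> 7 after five poolings
        self.head = nn.Sequential(
            nn.Flatten(),                         # 7*7*64 = 3136
            nn.Linear(3136, 1000), nn.ReLU(inplace=True),
            nn.Linear(1000, 100), nn.ReLU(inplace=True),
            nn.Linear(100, out_dim),              # the L body-size outputs
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))
```

As a quick check, `BodySizeNet()(torch.randn(1, 30, 224, 224))` yields a tensor of shape (1, 6) under these assumptions.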
In some embodiments, the three-dimensional lattice is a three-dimensional matrix in which the distances between any two adjacent points are equal, two points being adjacent meaning that each is the point nearest to the other. This helps to simplify the calculation.
In some embodiments, the lattice information is represented by gray values of points in the three-dimensional lattice.
In some embodiments, mapping the position information of each point of the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and the points of the three-dimensional lattice to obtain lattice information includes:
calculating gray information for each point in the three-dimensional lattice, wherein the gray information of a point in the three-dimensional lattice is calculated from the distances between points in the 3D point cloud data and that point, and the gray value of each point of the multi-channel image is represented by the gray value of the point at the corresponding position in the lattice information.
The gray value of each point of the multi-channel image can be calculated in the following two ways. For convenience of description, let C denote the normalized point cloud data, and let S denote the regularly arranged point set formed by scattering points at equal intervals in the three-dimensional space containing C, i.e. a three-dimensional lattice of A×B×C points, where A, B and C are the numbers of points along the three coordinate axes. Since S is a regularly arranged point set, it can be mapped one-to-one onto a multi-channel image M. One such mapping takes the points of S in order from left to right, top to bottom and front to back to the corresponding coordinates of M: the x-direction coordinate maps to the columns of the image, the y-direction coordinate to the rows, and the z-direction coordinate to the channels. Let the point s of S correspond to the coordinate m of M; the value of M at m is denoted M(m).
In the first way, for each point of the point cloud data C the closest point in the regularly arranged point set S is found, and the former is taken as a support point of the latter. Each point of S thus has t ≥ 0 support points. Without loss of generality, assume the point s of S has $t_s$ support points, denoted $C_1^{s}, C_2^{s}, \dots, C_{t_s}^{s}$. The average distance between s and its support points is used as the gray value of the coordinate m of M corresponding to s.
The gray information of a point in the three-dimensional lattice is therefore calculated from the distances between points of the 3D point cloud data and that point, specifically by the following formula:

$$M(m)=\frac{1}{t_s}\sum_{k=1}^{t_s}\left\lVert s-C_k^{s}\right\rVert_2$$

Here, since $s$ and $C_k^{s}$ are both coordinates, i.e. 3×1 vectors, a distance cannot be computed from them directly, so the norm is taken: $\lVert\cdot\rVert_2$ is the L2 norm, used to calculate the coordinate distance between two points. $M(m)$ represents the gray value of point $m$ in the multi-channel image $M$, and $s$ refers to the point of the three-dimensional matrix corresponding to point $m$; it should be understood that the multi-channel image can itself be viewed as a three-dimensional matrix, so that when the lattice is mapped onto the multi-channel image every point $s$ has a corresponding point $m$. $C_k^{s}$, a point of the 3D point cloud data, represents the coordinate of the $k$-th support point of point $s$, and $t_s$ represents the number of support points of point $s$; when the nearest point of the three-dimensional lattice to a point $C_k$ is point $s$, point $C_k$ is called a support point of point $s$. For example, in a multi-channel image with three channels, where each channel has image size 100×100, the $n$-th point $m_n$ can be described as $(x_{1n},y_{1n},z_{1n})$ with $x_{1n}\in\{0,1,2,\dots,99\}$, $y_{1n}\in\{0,1,2,\dots,99\}$, $z_{1n}\in\{0,1,2\}$ and $n\in\{0,1,\dots,29999\}$. Assuming the three-dimensional lattice has scale 200×200×6, the $n$-th point $s_n$ can be described as $(x_{2n},y_{2n},z_{2n})$ with $x_{2n}\in\{0,2,4,\dots,198\}$, $y_{2n}\in\{0,2,4,\dots,198\}$ and $z_{2n}\in\{0,2,4\}$; the number of points of the three-dimensional lattice is then exactly the number of points of the multi-channel image, so each point has a correspondence.
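A sketch of this first scheme follows; the KD-tree nearest-neighbor query via SciPy is our implementation choice rather than something the disclosure prescribes, and lattice points with no support points are assumed to keep gray value 0, a case the text leaves open:

```python
import numpy as np
from scipy.spatial import cKDTree

def gray_by_nearest_support(cloud, lattice):
    """Scheme 1: each cloud point supports its nearest lattice point;
    M(m) is the mean distance between s and its t_s support points."""
    tree = cKDTree(lattice)
    dist, idx = tree.query(cloud)      # nearest lattice point per cloud point
    gray = np.zeros(len(lattice))
    count = np.zeros(len(lattice))
    np.add.at(gray, idx, dist)         # accumulate support distances
    np.add.at(count, idx, 1)           # count support points per lattice point
    supported = count > 0
    gray[supported] /= count[supported]
    return gray                        # flat lattice gray values
```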
In the second way, all points of the point cloud data C support every point of the regularly arranged point set S, and the support strength is determined by the distance between the point-cloud point and the lattice point. Without loss of generality, consider the point s of S: all points of the point cloud are support points of s. Assuming the point cloud contains T points, denote them $C_1, C_2, \dots, C_T$. A Gaussian function is used to measure the contribution of each point.
The gray information of a point in the three-dimensional lattice is therefore calculated from the distances between points of the 3D point cloud data and that point, specifically by the following formula:

$$M(m)=\sum_{k=1}^{T}\exp\!\left(-\frac{\left\lVert s-C_k\right\rVert_2^{2}}{2\sigma^{2}}\right)$$

where $\sigma$ is a value set by the user, typically 1 to 3 times the scattering interval of S. $\lVert\cdot\rVert_2$ is the L2 norm, used to calculate the coordinate distance between two points; $M(m)$ represents the gray value of point $m$ in the multi-channel image $M$; $s$ refers to the point of the three-dimensional matrix corresponding to point $m$; $C_k$, a point of the 3D point cloud data, represents the coordinate of the $k$-th support point of point $s$; and $T$ represents the number of support points of point $s$, all points of the 3D point cloud data being support points of $s$. As the formula shows, when a point $C_k$ of the point cloud is far from $s$, the exponential term tends to 0 and the point contributes little; conversely, when $C_k$ is very close to $s$, the term is large and the contribution is great. The gray value thus reflects local position information.
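A corresponding sketch of the second scheme, using the Gaussian form of the formula reconstructed above; for large point clouds one would compute the distances in chunks, a detail that is ours:

```python
import numpy as np

def gray_by_gaussian_support(cloud, lattice, sigma):
    """Scheme 2: every cloud point supports every lattice point, weighted
    by a Gaussian of the distance; nearby points contribute the most."""
    # (num_lattice, num_cloud) squared distances.
    d2 = ((lattice[:, None, :] - cloud[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)   # flat gray values
```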
Referring to FIG. 3, this embodiment discloses a physique measurement model training method comprising the following steps:
acquiring a plurality of training samples and labels corresponding to the training samples, wherein the training samples are 3D point cloud data;
acquiring a three-dimensional lattice in the same coordinate system as the 3D point cloud data, wherein the size of the three-dimensional lattice is A×B×C, A, B and C are positive integers, and the distances between any two adjacent points along the same axis of the three-dimensional lattice are the same;
initializing model parameters;
training the model through a plurality of training samples and labels corresponding to the training samples until a stopping condition is met;
in each training iteration, the method comprises the following steps:
mapping the position information of each point of the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and the points of the three-dimensional lattice, to obtain lattice information;
obtaining a multichannel image according to the lattice information;
inputting the multichannel image into a current model to obtain a measurement result;
and updating the model parameters according to the measurement results and the labels corresponding to the training samples.
Through training on a large amount of annotated data, this embodiment can obtain a model with good precision. The training method can further improve robustness by means such as data augmentation and adversarial training.
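For illustration, a minimal PyTorch sketch of this training loop follows; the mean-squared-error loss, the Adam optimizer and the fixed epoch budget as stopping condition are our assumptions, since the disclosure only requires training until a stopping condition is met:

```python
import torch
import torch.nn as nn

def train(model, images, labels, epochs=100, lr=1e-3):
    """images: (num_samples, C, H, W) multi-channel images already produced
    from the 3D point cloud samples via the lattice mapping;
    labels: (num_samples, L) manually measured body-size values."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                  # regression to body-size labels
    for _ in range(epochs):                 # stopping condition: epoch budget
        opt.zero_grad()
        pred = model(images)                # current model's measurement result
        loss = loss_fn(pred, labels)        # compare with the sample labels
        loss.backward()
        opt.step()                          # update the model parameters
    return model
```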
This embodiment discloses a physique measurement system, comprising:
one or more processors;
and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the cattle physique measurement method described above.
This embodiment discloses a physique measurement model training system, comprising:
one or more processors;
and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the cattle physique measurement model training method described above.
In summary, the embodiments provided by the present invention have the following advantages:
a. The local structure information of the point cloud is well described: each gray value in the multi-channel image M describes a corresponding local point cloud structure.
b. Traditional deep learning techniques for video processing can be applied to 3D point cloud data without change: the input multi-channel image takes the place of the input multi-frame video data.
c. The design of the network structure becomes flexible, since image-oriented networks can be used and there is no complete reliance on existing network structures that convolve point cloud data directly.
d. Different scattering densities of the regularly arranged point set S in effect represent different spatial scale information, which can be used to improve precision. The model accuracy can thus be adjusted through the scattering-point spacing.
It should be noted that, the computer readable medium according to the embodiments of the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in embodiments of the present invention, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the method embodiments described above.
Computer program code for carrying out operations for embodiments of the present invention may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: a processor includes a receiving unit, an obtaining unit, a first generating unit, and a second generating unit. The names of these units do not constitute a limitation on the unit itself in some cases, and for example, the receiving unit may also be described as "a unit that receives a query request sent by a terminal".
The above description is only an illustration of the preferred embodiments of the present invention and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention is not limited to the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example solutions in which the above features are replaced by technical features with similar functions disclosed in (but not limited to) the embodiments of the present invention.

Claims (6)

1. A physique measurement method, characterized by comprising the following steps:
acquiring 3D point cloud data of an object to be measured, and acquiring a plurality of training samples and labels corresponding to the training samples, wherein the training samples are 3D point cloud data;
acquiring a three-dimensional lattice in the same coordinate system as the 3D point cloud data, wherein the size of the three-dimensional lattice is A×B×C, A, B and C are positive integers, and the distances between any two adjacent points along the same axis of the three-dimensional lattice are the same;
initializing model parameters;
training the model through a plurality of training samples and labels corresponding to the training samples until a stopping condition is met;
in each training iteration, the method comprises the following steps:
mapping the position information of each point in the 3D point cloud data into the three-dimensional lattice according to the positional relationship between each point in the 3D point cloud data and each point in the three-dimensional lattice to obtain lattice information, including calculating gray information for each point in the three-dimensional lattice, wherein the gray information of the points in the three-dimensional lattice is calculated according to the distances between the points in the 3D point cloud data and the points in the three-dimensional lattice, and the gray values of the points of the multi-channel image are represented by the gray values of the points at the corresponding positions in the lattice information,
the gray information of the points in the three-dimensional lattice being calculated according to the distances between the points in the 3D point cloud data and the points in the three-dimensional lattice, specifically by the following formula:

$$M(m)=\frac{1}{t_s}\sum_{k=1}^{t_s}\left\lVert s-C_k^{s}\right\rVert_2$$

wherein $\lVert\cdot\rVert_2$ is an L2 norm, used to calculate the coordinate distance between two points; $M(m)$ represents the gray value of point $m$ in the multi-channel image $M$; $s$ refers to the point corresponding to point $m$ in the three-dimensional matrix; $C_k^{s}$, a point in the 3D point cloud data, represents the coordinate of the $k$-th support point of point $s$; $t_s$ represents the number of support points of point $s$; and when the nearest point of the three-dimensional lattice to a point $C_k$ is point $s$, point $C_k$ is called a support point of point $s$;
or, the gray information of the points in the three-dimensional lattice is calculated according to the distances between the points in the 3D point cloud data and the points in the three-dimensional lattice, specifically by the following formula:

$$M(m)=\sum_{k=1}^{T}\exp\!\left(-\frac{\left\lVert s-C_k\right\rVert_2^{2}}{2\sigma^{2}}\right)$$

wherein $\sigma$ is a value set by the user; $\lVert\cdot\rVert_2$ is an L2 norm, used to calculate the coordinate distance between two points; $M(m)$ represents the gray value of point $m$ in the multi-channel image $M$; $s$ refers to the point corresponding to point $m$ in the three-dimensional matrix; $C_k$, a point in the 3D point cloud data, represents the coordinate of the $k$-th support point of point $s$; and $T$ represents the number of support points of point $s$, all points in the 3D point cloud data being support points of point $s$;
Obtaining a multichannel image according to the lattice information;
inputting the multichannel image into a current model to obtain a measurement result;
and updating the model parameters according to the measurement results and the labels corresponding to the training samples.
2. The cattle physique measurement method according to claim 1, wherein the three-dimensional lattice is a three-dimensional matrix in which the distances between any two adjacent points are equal, two points being adjacent meaning that each is the point nearest to the other.
3. The cattle physique measurement method according to claim 1, wherein the lattice information is represented by gray values of points in the three-dimensional lattice.
4. The cattle physique measurement method according to claim 1, wherein the model is a convolutional neural network model.
5. A physique measurement system, characterized by comprising:
one or more processors;
storage means having stored thereon one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
6. A physique measurement model training system, characterized by comprising:
one or more processors;
storage means having stored thereon one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of claim 1.
CN202011034291.8A 2020-09-27 2020-09-27 Cattle physique measurement method, model training method and system Active CN112102496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011034291.8A CN112102496B (en) 2020-09-27 2020-09-27 Cattle physique measurement method, model training method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011034291.8A CN112102496B (en) 2020-09-27 2020-09-27 Cattle physique measurement method, model training method and system

Publications (2)

Publication Number Publication Date
CN112102496A (en) 2020-12-18
CN112102496B (en) 2024-03-26

Family

ID=73782314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011034291.8A Active CN112102496B (en) 2020-09-27 2020-09-27 Cattle physique measurement method, model training method and system

Country Status (1)

Country Link
CN (1) CN112102496B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102564347A (en) * 2011-12-30 2012-07-11 中国科学院上海光学精密机械研究所 Object three-dimensional outline measuring device and method based on Dammann grating
WO2016185637A1 (en) * 2015-05-20 2016-11-24 三菱電機株式会社 Point-cloud-image generation device and display system
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN107862656A (en) * 2017-10-30 2018-03-30 北京零壹空间科技有限公司 A kind of Regularization implementation method, the system of 3D rendering cloud data
CN110059579A (en) * 2019-03-27 2019-07-26 北京三快在线科技有限公司 For the method and apparatus of test alive, electronic equipment and storage medium
CN110632608A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Target detection method and device based on laser point cloud
CN111368605A (en) * 2018-12-26 2020-07-03 易图通科技(北京)有限公司 Lane line extraction method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201407270D0 (en) * 2014-04-24 2014-06-11 Cathx Res Ltd 3D data in underwater surveys
US10438371B2 (en) * 2017-09-22 2019-10-08 Zoox, Inc. Three-dimensional bounding box from two-dimensional image and point cloud data
US11127202B2 (en) * 2017-12-18 2021-09-21 Parthiv Krishna Search and rescue unmanned aerial system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102564347A (en) * 2011-12-30 2012-07-11 中国科学院上海光学精密机械研究所 Object three-dimensional outline measuring device and method based on Dammann grating
WO2016185637A1 (en) * 2015-05-20 2016-11-24 三菱電機株式会社 Point-cloud-image generation device and display system
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN107862656A (en) * 2017-10-30 2018-03-30 北京零壹空间科技有限公司 A kind of Regularization implementation method, the system of 3D rendering cloud data
CN110632608A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Target detection method and device based on laser point cloud
CN111368605A (en) * 2018-12-26 2020-07-03 易图通科技(北京)有限公司 Lane line extraction method and device
CN110059579A (en) * 2019-03-27 2019-07-26 北京三快在线科技有限公司 For the method and apparatus of test alive, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design of a 3D measurement system based on laser point clouds; Zhang Jingjing, Bai Xiao; Modern Electronics Technique (14); full text *
3D object recognition and model segmentation method based on point cloud data; Niu Chengeng, Liu Yujie, Li Zongmin, Li Hua; Journal of Graphics (02); full text *

Also Published As

Publication number Publication date
CN112102496A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN109521403B (en) Parameter calibration method, device and equipment of multi-line laser radar and readable medium
CN106203432B (en) Positioning system of region of interest based on convolutional neural network significance map
CN111243005B (en) Livestock weight estimation method, apparatus, device and computer readable storage medium
CN100566660C (en) Medical information processing device and absorption coefficient calibration method
Li et al. Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images
WO2021088849A1 (en) Ultrasonic imaging method and apparatus, readable storage medium, and terminal device
CN108921057B (en) Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device
CN105654483B (en) The full-automatic method for registering of three-dimensional point cloud
CN1864074A Determining patient-related information on the position and orientation of MR images by the individualisation of a body model
CN112037146B (en) Automatic correction method and device for medical image artifacts and computer equipment
CN112102294A (en) Training method and device for generating countermeasure network, and image registration method and device
CN112598790A (en) Brain structure three-dimensional reconstruction method and device and terminal equipment
CN113706473A (en) Method for determining long and short axes of lesion region in ultrasonic image and ultrasonic equipment
Taşdemir et al. Determination of body measurements of a cow by image analysis
CN112861872A (en) Penaeus vannamei phenotype data determination method, device, computer equipment and storage medium
CN102132322A (en) Apparatus for determining modification of size of object
CN108597589B (en) Model generation method, target detection method and medical imaging system
CN111652168B (en) Group detection method, device, equipment and storage medium based on artificial intelligence
CN110517300A Elastic image registration algorithm based on partial structures operator
CN112102496B (en) Cattle physique measurement method, model training method and system
CN110838179B (en) Human body modeling method and device based on body measurement data and electronic equipment
CN109238264B (en) Livestock position and posture normalization method and device
Lynn et al. Automatic assessing body condition score from digital images by active shape model and multiple regression technique
US20230036897A1 (en) A method and system for improved ultrasound plane acquisition
CN100573594C Automatic optimal view determination for cardiac image acquisition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant