CN117373661A - Animal body condition scoring method
- Publication number: CN117373661A
- Application number: CN202311275528.5A
- Authority: CN (China)
- Prior art keywords: image, depth, data, scoring, image data
- Prior art date: 2023-09-28
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G16H50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for calculating health indices; for individual health risk assessment
- G06V10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
- G06V40/10: Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
- G06V2201/07: Indexing scheme relating to image or video recognition or understanding; target detection
Abstract
The invention discloses an animal body condition scoring method comprising the following scoring steps: S1, erecting the body condition scoring equipment and positioning a ToF camera at a suitable station; S2, triggering the equipment with an external signal to acquire image data, the acquired image data comprising color image data and depth image data, and aligning the pixel coordinates of the color image and the depth image; and S3, judging whether a target to be detected is present in the detection area through an animal target prediction algorithm. According to the invention, the equipment scores animal body condition fully automatically and scientifically, interference from environmental factors is avoided, a dedicated algorithm is applied at each scoring step, scoring efficiency and accuracy are improved, manual experience-based scoring is replaced, the scoring standard is unified, the influence of personnel turnover is avoided, and the scoring data are more reliable.
Description
Technical Field
The invention relates to the technical field of animal body condition assessment, and in particular to an animal body condition scoring method.
Background
Animal body condition scoring is a common indicator used to assess animal health and nutritional status. It generally evaluates an animal's body condition from a series of observable physiological indicators and performance traits. The purpose of scoring is to determine whether the animal is in good physical condition and to provide important reference information for proper management and care.
Different body condition scoring systems suit different animal species and needs. The following are examples of some common systems:
BCS (Body Condition Score): a body condition scoring system suitable for livestock (such as cattle, sheep and pigs), which evaluates an animal's fat deposition, weight change and the like through visual observation and palpation;
HCS (Hen Condition Score): a body condition scoring system suitable for poultry (e.g., chickens), focused on assessing the overall body condition and health status of hens (laying hens);
MCS (Milk Condition Score): a body condition scoring system suitable for dairy cows, which evaluates a cow's body fat deposition and milk production capacity by observing and palpating the cow's back morphology;
WCS (Wildlife Condition Score): a body condition scoring system suitable for wild animals, which usually evaluates the overall health and survival capability of a wild animal from its behavior, appearance, weight change and other factors.
These body condition scoring systems provide a relatively standardized way to assess an animal's body condition. The specific steps, indices and criteria of scoring may vary from system to system; in practice, scoring and judgment are usually performed by a professional according to an evaluation manual or guideline.
Currently, mainstream body condition scoring is performed empirically by humans. Some research institutions have tried purely vision-based schemes, but these are strongly affected by environmental factors: the scoring result suffers when light is insufficient or too bright, and also when the coat color of the scored individual cannot be clearly distinguished from the background. In particular, the black-and-white coat of Holstein cows interferes with visual recognition of the individual's outline. Purely vision-based techniques are also inefficient.
Disclosure of Invention
In view of the above problems in the prior art, the invention aims to provide an animal body condition scoring method that solves the problems described in the background.
To achieve the above purpose, the present invention adopts the following technical scheme.
A method of scoring animal body condition, comprising the following steps:
S1, erecting the body condition scoring equipment and positioning a ToF camera at a suitable station;
S2, triggering the equipment with an external signal to acquire image data, the acquired image data comprising color image data and depth image data, and aligning the pixel coordinates of the color image and the depth image;
S3, judging whether a target to be detected is present in the detection area through an animal target prediction algorithm;
S4, converting the acquired image data into pseudo-color image data through an image conversion and background filtering algorithm (the Depth2color algorithm) while filtering out the complex background environment, thereby separating the detection target from the image background;
S5, comparing specific areas of the pseudo-color image data with limit thresholds derived from a large number of tests in similar environments through a posture and position evaluation algorithm, so as to judge whether the detection target and the detection environment meet the scoring requirements;
S6, inputting the qualifying pseudo-color image data into YOLOv7 (You Only Look Once version 7), extracting image feature information through its backbone network, and obtaining a body condition score from the feature information;
S7, screening the calculated results through a body condition score screening algorithm (the bcs_filter algorithm) and selecting the result that best matches the target.
As a further description of the above technical solution:
The alignment operation in step S2 comprises: obtaining a new frame from the camera; performing color alignment on the obtained frame; extracting a depth frame from the aligned frame, the depth frame containing the depth of each point in the scene; converting the depth frame into a NumPy array for subsequent numerical calculation and image processing; aligning the frame data again to obtain a depth frame and a color frame; and scaling and normalizing the depth frame.
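As a concrete illustration, the following is a minimal Python sketch of the acquisition, alignment and normalization flow described above. The camera accessors (`camera.get_frame`, `frame.align_to_color`, `depth_frame`, `color_frame`) are hypothetical placeholders, since the patent does not name a specific ToF SDK; only the NumPy post-processing mirrors the steps stated here.

```python
import cv2
import numpy as np

def acquire_aligned_frames(camera, size=(640, 480)):
    frame = camera.get_frame()                  # new frame from the camera (hypothetical SDK call)
    aligned = frame.align_to_color()            # color-alignment processing (hypothetical SDK call)
    depth = np.asanyarray(aligned.depth_frame)  # depth of each scene point -> NumPy array
    color = np.asanyarray(aligned.color_frame)

    # Scaling and normalization of the depth frame (step S2).
    depth = cv2.resize(depth, size, interpolation=cv2.INTER_NEAREST)
    norm = depth.astype(np.float32)
    valid = norm > 0                            # 0 marks invalid depth pixels
    if valid.any():
        lo, hi = norm[valid].min(), norm[valid].max()
        norm[valid] = (norm[valid] - lo) / max(hi - lo, 1.0)
    return depth, norm, color
```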
As a further description of the above technical solution:
The judgment in step S3 is performed as follows: target prediction parameters (x1, y1) and (x2, y2), a minimum-distance judgment parameter depth_distance and a maximum-distance judgment parameter min_value are obtained through a large number of statistical tests; the depth data are input into the animal target prediction algorithm; a local detection array is taken from the depth data according to the parameters (x1, y1) and (x2, y2); the non-zero minimum value in the array is calculated; and the obtained minimum value is compared with the minimum-distance and maximum-distance judgment parameters respectively.
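A minimal NumPy sketch of this judgment follows. The parameter names are taken from the text; the exact pass condition (the nearest non-zero depth lying between the two limits) is an assumption about how the two comparisons combine.

```python
import numpy as np

def target_present(depth, x1, y1, x2, y2, depth_distance, min_value):
    roi = depth[y1:y2, x1:x2]        # local detection array per (x1, y1), (x2, y2)
    nonzero = roi[roi > 0]           # drop invalid (zero) depth readings
    if nonzero.size == 0:
        return False
    nearest = nonzero.min()          # non-zero minimum value in the array
    # Compare against the minimum- and maximum-distance judgment parameters.
    return depth_distance <= nearest <= min_value
```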
As a further description of the above technical solution:
The image data are converted into pseudo-color image data in step S4 as follows: an all-ones array of the same shape and an integer data type is created from the image pixels; the depth image is limited to the range between a minimum value and the minimum value + 255; the minimum value multiplied by the all-ones array is subtracted and the result is normalized to produce a depth image; JET colors are mapped onto the depth image, converting the single-channel depth image into a color image; and three identical depth images are stacked together to form a three-channel (3D) depth image.
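A sketch of this Depth2color conversion follows, assuming OpenCV's JET colormap is the "JET colors" mapping the text refers to; where min_val comes from (for example the non-zero minimum found in step S3) is likewise an assumption.

```python
import cv2
import numpy as np

def depth2color(depth, min_val):
    ones = np.ones_like(depth, dtype=np.int32)        # all-ones array of the same shape
    clipped = np.clip(depth, min_val, min_val + 255)  # limit to [min_val, min_val + 255]
    shifted = clipped - min_val * ones                # subtract min_val * all-ones array
    gray = shifted.astype(np.uint8)                   # normalized single-channel image, 0..255
    colored = cv2.applyColorMap(gray, cv2.COLORMAP_JET)  # map JET colors -> color image
    stacked = np.dstack([gray, gray, gray])           # three identical planes -> 3D depth image
    return colored, stacked
```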
As a further description of the above technical solution:
The separation in step S4 is performed by setting every depth value greater than the minimum value + 255 or less than the minimum value to black in the 3D depth image, treating it as background, thereby completing the filtering of the environmental background.
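The background removal can then be a mask over the same depth array. Setting out-of-range pixels to black follows the text; the masking implementation itself is a sketch.

```python
import numpy as np

def filter_background(depth, pseudo_color, min_val):
    # Pixels outside [min_val, min_val + 255] are treated as background.
    background = (depth > min_val + 255) | (depth < min_val)
    out = pseudo_color.copy()
    out[background] = 0          # return background pixels to black
    return out
```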
As a further description of the above technical solution:
The comparison items in step S5 are as follows:
Point location parameters are obtained, namely point 1, point 2, point 3, point 4, a pixel-count-1 threshold and a pixel-count-2 threshold, and the total number of pixels in designated area 1 [point 1, point 2] of the depth data and the total number of pixels in designated area 2 [point 3, point 4] are calculated with the numpy.sum method.
Based on these two pixel totals, it is judged whether designated area 1 exceeds the pixel-count-1 threshold and designated area 2 is below the pixel-count-2 threshold.
The numpy.sum method signature is as follows:
numpy.sum(a, axis=None, dtype=None, keepdims=False);
where a is the array or matrix to be summed; axis selects the axis to sum over, with None (the default) summing all elements; dtype specifies the data type of the result, which by default matches the data type of the input array; and keepdims controls whether the result keeps the input's dimensions, defaulting to False, in which case the result shape is compressed; if set to True, the result retains the original array dimensions.
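Reading the two designated areas as rectangles spanned by [point 1, point 2] and [point 3, point 4], a sketch of the comparison is shown below; counting foreground (non-zero) depth pixels is an assumption, since the text does not say exactly what is summed.

```python
import numpy as np

def pose_ok(depth, p1, p2, p3, p4, pix1_sum, pix2_sum):
    # p1..p4 are (x, y) corners; pix1_sum / pix2_sum are the limit thresholds
    # derived from a large number of tests in similar environments.
    area1 = depth[p1[1]:p2[1], p1[0]:p2[0]] > 0   # foreground mask, designated area 1
    area2 = depth[p3[1]:p4[1], p3[0]:p4[0]] > 0   # foreground mask, designated area 2
    total1 = np.sum(area1)                        # total pixels in area 1
    total2 = np.sum(area2)                        # total pixels in area 2
    return total1 > pix1_sum and total2 < pix2_sum
```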
As a further description of the above technical solution:
Image features are extracted in step S6 as follows: the images are annotated with labelme, the animal body condition scoring parts are selected with bounding boxes, and annotation files in json format containing labels and image point location information are saved; the annotation file corresponding to each image is converted into a txt text file; and the data are arranged into a training set and a dataset, which are input into YOLOv7 for model training.
As a further description of the above technical solution:
The conversion is applied at prediction time as follows: the pseudo-color image from the ToF camera is input; the NumPy array is converted into a PyTorch tensor and the data are transferred to the GPU device; the image pixel values are scaled from [0, 255] to the [0.0, 1.0] interval; result prediction is performed on the data using the YOLOv7 model trained on the training set and dataset; and the result is converted from [xmin, ymin, xmax, ymax] format into [x, y, w, h] format, then packaged into [score, confidence, prediction-box position, prediction-box size] format and returned.
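A sketch of this inference-side pipeline follows. The model is abstracted into a callable returning rows of [xmin, ymin, xmax, ymax, confidence, score], which is an assumed interface; the tensor preparation and the corner-to-center box conversion mirror the text.

```python
import numpy as np
import torch

def predict_bcs(pseudo_color: np.ndarray, model, device="cuda"):
    # NumPy array -> PyTorch tensor, HWC -> CHW, moved to the GPU device.
    img = torch.from_numpy(pseudo_color).permute(2, 0, 1).float()
    img = (img / 255.0).unsqueeze(0).to(device)   # scale [0, 255] -> [0.0, 1.0]

    results = []
    for xmin, ymin, xmax, ymax, conf, score in model(img):  # assumed output rows
        # [xmin, ymin, xmax, ymax] -> [x, y, w, h] (box center and size)
        x, y = (xmin + xmax) / 2, (ymin + ymax) / 2
        w, h = xmax - xmin, ymax - ymin
        # Package as [score, confidence, box position, box size].
        results.append((score, conf, (x, y), (w, h)))
    return results
```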
As a further description of the above technical solution:
The image data acquired in step S2 comprise multiple groups, that is, several shot images, with a shooting interval of 0.1 to 1 s between shots; after the animal target prediction algorithm of step S3 judges that a target to be detected is present, image processing proceeds in one of the following two cases:
when illumination is sufficient, one clear image is selected from the several shots, and steps S4 to S7 are then performed;
when illumination is insufficient, the depth image is relied on: one clear image is selected from the several shots, and the other images of the same batch are registered by image fusion and then fused into the selected image (see the sketch after the formula below).
The image fusion formula is as follows:
for two input images I1 and I2,
fused image = α × I1 + (1 - α) × I2;
where α is the weight of the first image I1 and (1 - α) is the weight of the second image I2. By adjusting the weight α, the contribution of the two images to the fusion result can be controlled.
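The weighted fusion above maps directly onto OpenCV's addWeighted; this one-liner is a sketch of the formula, with the choice of α left to the caller.

```python
import cv2

def fuse(i1, i2, alpha=0.5):
    # fused = alpha * I1 + (1 - alpha) * I2
    return cv2.addWeighted(i1, alpha, i2, 1.0 - alpha, 0.0)
```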
Compared with the prior art, the invention has the following advantages:
(1) The scheme uses equipment to score animal body condition fully automatically and scientifically, avoids interference from environmental factors, applies a dedicated algorithm at each scoring step, improves scoring efficiency and accuracy, replaces manual experience-based scoring, unifies the scoring standard, avoids the influence of personnel turnover, and makes the scoring data more reliable.
(2) The scheme requires no manual contact with the animals, which improves the safety of the scoring process, reduces disturbance to the animals and their agitation during scoring, and makes data acquisition more convenient and accurate while the animals remain calm.
(3) When the shooting light is sufficient, the scheme can rapidly analyze part of the image data and extract the animal feature parts for scoring, improving data-processing efficiency; when the light is insufficient, depth and color images are combined through an image fusion method and pseudo-color images are produced with the Depth2color algorithm, improving clarity and further improving the accuracy of the scoring data.
Drawings
FIG. 1 is a schematic diagram of the scoring steps of the present invention;
FIG. 2 is a schematic diagram of the scoring principle of the present invention;
FIG. 3 is a schematic diagram of a scoring process according to the present invention;
fig. 4 is a schematic diagram of an image fusion process according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings.
referring to fig. 1-3, the present invention provides example 1:
A method of scoring animal body condition, comprising the following steps:
S1, erecting the body condition scoring equipment and positioning a ToF camera at a suitable station.
S2, triggering the equipment with an external signal to acquire image data, the acquired image data comprising color image data and depth image data, and aligning the pixel coordinates of the color image and the depth image.
S3, judging whether a target to be detected is present in the detection area through an animal target prediction algorithm.
S4, converting the acquired image data into pseudo-color image data through an image conversion and background filtering algorithm (the Depth2color algorithm) while filtering out the complex background environment, thereby separating the detection target from the image background.
S5, comparing specific areas of the pseudo-color image data with limit thresholds derived from a large number of tests in similar environments through a posture and position evaluation algorithm, so as to judge whether the detection target and the detection environment meet the scoring requirements.
S6, inputting the qualifying pseudo-color image data into YOLOv7 (You Only Look Once version 7), extracting image feature information through its backbone network, and obtaining a body condition score from the feature information.
S7, screening the calculated results through a body condition score screening algorithm (the bcs_filter algorithm) and selecting the result that best matches the target.
According to the invention, an intelligent model analyzes the captured animal images and directly produces the body condition score, replacing the traditional manual experience-based scoring mode and improving scoring efficiency. Because a ToF camera is used for shooting, the step of manually approaching the animal is eliminated, which avoids panicking the animal and removes the need to pacify it, making the scoring process safer and more convenient. Since the equipment alone performs the scoring, the influence of rotating or replacing scoring personnel is avoided and the scoring standard remains consistent.
Wherein, the comparison items in step S5 are as follows:
Point location parameters are obtained, namely point 1, point 2, point 3, point 4, a pixel-count-1 threshold and a pixel-count-2 threshold, and the total number of pixels in designated area 1 [point 1, point 2] of the depth data and the total number of pixels in designated area 2 [point 3, point 4] are calculated with the numpy.sum method.
Based on these two pixel totals, it is judged whether designated area 1 exceeds the pixel-count-1 threshold and designated area 2 is below the pixel-count-2 threshold.
The numpy.sum method signature is as follows:
numpy.sum(a, axis=None, dtype=None, keepdims=False);
where a is the array or matrix to be summed; axis selects the axis to sum over, with None (the default) summing all elements; dtype specifies the data type of the result, which by default matches the data type of the input array; and keepdims controls whether the result keeps the input's dimensions, defaulting to False, in which case the result shape is compressed; if set to True, the result retains the original array dimensions.
Image features are extracted in step S6 as follows: the images are annotated with labelme, the animal body condition scoring parts are selected with bounding boxes, and annotation files in json format containing labels and image point location information are saved; the annotation file corresponding to each image is converted into a txt text file; and the data are arranged into a training set and a dataset, which are input into YOLOv7 for model training.
The conversion is applied at prediction time as follows: the pseudo-color image from the ToF camera is input; the NumPy array is converted into a PyTorch tensor and the data are transferred to the GPU device; the image pixel values are scaled from [0, 255] to the [0.0, 1.0] interval; result prediction is performed on the data using the YOLOv7 model trained on the training set and dataset; and the result is converted from [xmin, ymin, xmax, ymax] format into [x, y, w, h] format, then packaged into [score, confidence, prediction-box position, prediction-box size] format and returned.
Wherein the animal body condition scoring parts, taking dairy cows as an example, use feature items such as the cow's lumbar transverse processes, sacral ligaments, tail-head ligaments, lumbar angles, ischial tuberosities, hip joints, height and body length.
Referring to fig. 1-3, the present invention further provides embodiment 2 on the basis of embodiment 1:
The alignment operation in step S2 comprises: obtaining a new frame from the camera; performing color alignment on the obtained frame; extracting a depth frame from the aligned frame, the depth frame containing the depth of each point in the scene; converting the depth frame into a NumPy array for subsequent numerical calculation and image processing; aligning the frame data again to obtain a depth frame and a color frame; and scaling and normalizing the depth frame.
The judgment in step S3 is performed as follows: target prediction parameters (x1, y1) and (x2, y2), a minimum-distance judgment parameter depth_distance and a maximum-distance judgment parameter min_value are obtained through a large number of statistical tests; the depth data are input into the animal target prediction algorithm; a local detection array is taken from the depth data according to the parameters (x1, y1) and (x2, y2); the non-zero minimum value in the array is calculated; and the obtained minimum value is compared with the minimum-distance and maximum-distance judgment parameters respectively.
The image data are converted into pseudo-color image data in step S4 as follows: an all-ones array of the same shape and an integer data type is created from the image pixels; the depth image is limited to the range between a minimum value and the minimum value + 255; the minimum value multiplied by the all-ones array is subtracted and the result is normalized to produce a depth image; JET colors are mapped onto the depth image, converting the single-channel depth image into a color image; and three identical depth images are stacked together to form a three-channel (3D) depth image.
The separation in step S4 is performed by setting every depth value greater than the minimum value + 255 or less than the minimum value to black in the 3D depth image, treating it as background, thereby completing the filtering of the environmental background.
By combining the color image data and the depth image data and obtaining a processed depth frame and color frame, the preliminary processing of the captured image data is completed; combining the two images reduces the influence of the patterns on the animal's body on the feature parts, so that the feature parts required for scoring are extracted more accurately, improving the accuracy and reliability of the score.
Background image data other than the detected animal are separated and filtered out of the image, reducing the influence of the environment on the animal feature parts in the image, reducing data errors and improving the accuracy of the body condition score.
Referring to fig. 4, the present invention further provides embodiment 3 on the basis of embodiment 1 and embodiment 2:
The image data acquired in step S2 comprise multiple groups, that is, several shot images, with a shooting interval of 0.1 to 1 s between shots; after the animal target prediction algorithm of step S3 judges that a target to be detected is present, image processing proceeds in one of the following two cases:
when illumination is sufficient, one clear image is selected from the several shots, and steps S4 to S7 are then performed;
when illumination is insufficient, the depth image is relied on: one clear image is selected from the several shots, and the other images of the same batch are registered by image fusion and then fused into the selected image;
the image fusion formula is as follows:
for two input images I1 and I2,
fused image = α × I1 + (1 - α) × I2,
where α is the weight of the first image I1 and (1 - α) is the weight of the second image I2. By adjusting the weight α, the contribution of the two images to the fusion result can be controlled.
By capturing multiple groups of image data in the same time window, and either extracting feature information directly when the shooting environment is well lit or fusing the same-batch image data when it is not, the clarity of the image data is improved and the animal feature parts remain distinct, which suits the requirement of scheduled shooting on overcast and rainy days.
The foregoing describes preferred embodiments of the present invention; the scope of the invention is not limited thereto. Any change or improvement that a person skilled in the art can make within the technical scope of the present disclosure shall be covered by the protection scope of the present invention.
Claims (9)
1. A method for scoring animal body condition, characterized by comprising the following steps:
S1, erecting the body condition scoring equipment and positioning a ToF camera at a suitable station;
S2, triggering the equipment with an external signal to acquire image data, the acquired image data comprising color image data and depth image data, and aligning the pixel coordinates of the color image and the depth image;
S3, judging whether a target to be detected is present in the detection area through an animal target prediction algorithm;
S4, converting the acquired image data into pseudo-color image data through an image conversion and background filtering algorithm (the Depth2color algorithm) while filtering out the complex background environment, thereby separating the detection target from the image background;
S5, comparing specific areas of the pseudo-color image data with limit thresholds derived from a large number of tests in similar environments through a posture and position evaluation algorithm, so as to judge whether the detection target and the detection environment meet the scoring requirements;
S6, inputting the qualifying pseudo-color image data into YOLOv7 (You Only Look Once version 7), extracting image feature information through its backbone network, and obtaining a body condition score from the feature information;
S7, screening the calculated results through a body condition score screening algorithm (the bcs_filter algorithm) and selecting the result that best matches the target.
2. The method for scoring animal body condition of claim 1, wherein the alignment operation in step S2 comprises: obtaining a new frame from the camera; performing color alignment on the obtained frame; extracting a depth frame from the aligned frame, the depth frame containing the depth of each point in the scene; converting the depth frame into a NumPy array for subsequent numerical calculation and image processing; aligning the frame data again to obtain a depth frame and a color frame; and scaling and normalizing the depth frame.
3. The method for scoring animal body condition of claim 1, wherein the judgment in step S3 is performed as follows: target prediction parameters (x1, y1) and (x2, y2), a minimum-distance judgment parameter depth_distance and a maximum-distance judgment parameter min_value are obtained through a large number of statistical tests; the depth data are input into the animal target prediction algorithm; a local detection array is taken from the depth data according to the parameters (x1, y1) and (x2, y2); the non-zero minimum value in the array is calculated; and the obtained minimum value is compared with the minimum-distance and maximum-distance judgment parameters respectively.
4. The method for scoring animal body condition of claim 1, wherein the image data are converted into pseudo-color image data in step S4 as follows: an all-ones array of the same shape and an integer data type is created from the image pixels; the depth image is limited to the range between a minimum value and the minimum value + 255; the minimum value multiplied by the all-ones array is subtracted and the result is normalized to produce a depth image; JET colors are mapped onto the depth image, converting the single-channel depth image into a color image; and three identical depth images are stacked together to form a three-channel (3D) depth image.
5. The method for scoring animal body condition of claim 4, wherein the separation in step S4 is performed by setting every depth value greater than the minimum value + 255 or less than the minimum value to black in the 3D depth image, treating it as background, thereby completing the filtering of the environmental background.
6. The method for scoring animal body condition of claim 1, wherein the comparison items in step S5 are as follows:
point location parameters are obtained, namely point 1, point 2, point 3, point 4, a pixel-count-1 threshold and a pixel-count-2 threshold, and the total number of pixels in designated area 1 [point 1, point 2] of the depth data and the total number of pixels in designated area 2 [point 3, point 4] are calculated with the numpy.sum method;
based on these two pixel totals, it is judged whether designated area 1 exceeds the pixel-count-1 threshold and designated area 2 is below the pixel-count-2 threshold;
the numpy.sum method signature is as follows:
numpy.sum(a, axis=None, dtype=None, keepdims=False);
where a is the array or matrix to be summed; axis selects the axis to sum over, with None (the default) summing all elements; dtype specifies the data type of the result, which by default matches the data type of the input array; and keepdims controls whether the result keeps the input's dimensions, defaulting to False, in which case the result shape is compressed; if set to True, the result retains the original array dimensions.
7. The method for scoring animal body condition of claim 1, wherein image features are extracted in step S6 as follows: the images are annotated with labelme, the animal body condition scoring parts are selected with bounding boxes, and annotation files in json format containing labels and image point location information are saved; the annotation file corresponding to each image is converted into a txt text file; and the data are arranged into a training set and a dataset, which are input into YOLOv7 for model training.
8. The method for scoring animal body condition of claim 7, wherein the conversion is applied at prediction time as follows: the pseudo-color image from the ToF camera is input; the NumPy array is converted into a PyTorch tensor and the data are transferred to the GPU device; the image pixel values are scaled from [0, 255] to the [0.0, 1.0] interval; result prediction is performed on the data using the YOLOv7 model trained on the training set and dataset; and the result is converted from [xmin, ymin, xmax, ymax] format into [x, y, w, h] format, then packaged into [score, confidence, prediction-box position, prediction-box size] format and returned.
9. The method for scoring animal body condition of claim 1, wherein the image data acquired in step S2 comprise multiple groups, that is, several shot images, with a shooting interval of 0.1 to 1 s between shots; after the animal target prediction algorithm of step S3 judges that a target to be detected is present, image processing proceeds in one of two cases:
when illumination is sufficient, one clear image is selected from the several shots, and steps S4 to S7 are then performed;
when illumination is insufficient, one clear image is selected from the several shots, and the other images of the same batch are registered by image fusion and then fused into the selected image, the image fusion formula being as follows:
for two input images I1 and I2,
fused image = α × I1 + (1 - α) × I2;
where α is the weight of the first image I1 and (1 - α) is the weight of the second image I2. By adjusting the weight α, the contribution of the two images to the fusion result can be controlled.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311275528.5A | 2023-09-28 | 2023-09-28 | Animal body condition scoring method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117373661A (en) | 2024-01-09 |
Family
ID=89399450
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311275528.5A (Pending) | Animal body condition scoring method | 2023-09-28 | 2023-09-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117373661A (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |