CN115918571A - Fence passageway type cattle body health data extraction device and intelligent extraction method thereof - Google Patents


Info

Publication number
CN115918571A
CN115918571A
Authority
CN
China
Prior art keywords
camera
image
cattle
cow
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310001539.8A
Other languages
Chinese (zh)
Inventor
张淦
张东彦
李威风
严海峰
周云飞
陈汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Kuiniu Electronic Technology Co ltd
Original Assignee
Hefei Kuiniu Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Kuiniu Electronic Technology Co ltd filed Critical Hefei Kuiniu Electronic Technology Co ltd
Priority to CN202310001539.8A priority Critical patent/CN115918571A/en
Publication of CN115918571A publication Critical patent/CN115918571A/en
Pending legal-status Critical Current

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a fence passageway type cattle health data extraction device and an intelligent extraction method thereof, and overcomes the defect in the prior art that cattle health identification cannot meet actual use requirements. An electronic weighing scale is installed at the bottom of the fence passageway, and cameras A, B, C, D, E and F, all connected to a server, are mounted on the passageway; cameras A, B and C are visible-light image acquisition cameras, while cameras D, E and F acquire both visible-light and infrared images. The method generates a frontal cow-face image through an image synthesis strategy, solving the problem that the frontal image acquisition equipment of passageway-type cattle health monitoring devices is easily damaged by the cattle, and the synthesized face image enables accurate non-contact measurement of the temperature at the cow's forehead.

Description

Fence passageway type cattle body health data extraction device and intelligent extraction method thereof
Technical Field
The invention relates to the technical field of livestock raising informatization, in particular to a fence passageway type cattle health data extraction device and an intelligent extraction method thereof.
Background
The livestock breeding industry is an important component of agriculture in China, and informatization is an important way to improve its efficiency. In a large-scale cattle farm, automated and informatized daily management of individual animals, including tracking the health of each cow and tracing milk and meat products back to their source, requires establishing and perfecting a quality traceability system, and the key to such a system is identifying individual cattle. Traditional cattle identification relies mainly on manual methods such as ear tags, branding, neck chains and tattoo marks; these are time-consuming and labor-intensive, easily trigger stress reactions, and can injure both cattle and handlers, so a non-contact monitoring model is needed.
Well-known researchers and biometric technology companies at home and abroad have also begun research on non-contact tracking and identification of individual animals. Allen et al. abandoned traditional RFID identification in favor of the eye of the cow: for their experiments they collected 1,738 retinal images (taken from both eyes of 869 cows) and judged identity from the uniqueness of each eye's pattern, reaching a maximum recognition rate of 98.30%. Xia et al. attempted to describe the facial features of the cow face by combining sparse-coding classification with principal component analysis, chi-square distance detection and similar techniques, but the method focuses only on the face and the early data-collection workload is large, making it difficult to apply in practice.
Kim et al. collected face datasets of 12 Japanese Black cattle without obvious coat-pattern features, fed them into an associative-memory neural network for training and learning, computed characteristic parameters, and finally performed cattle-face recognition. Kumar et al. combined and matched traditional feature-extraction, feature-reduction and classifier models, and analyzed and compared the performance of the combinations on face recognition.
CN106778902A also discloses a method for identifying individual cows based on a convolutional neural network, which extracts trunk images of cows by an optical flow method or an interframe difference method, and uses the convolutional neural network to extract features, and combines the textural features of cows to realize effective identification of individual cows.
Another study, using a self-built cow-face dataset, first extracts facial features with an optimized histogram of oriented gradients (HOG) and then classifies them with spatial pyramid matching (SPM), reaching a recognition accuracy of 95.3%. Zhao Kaixuan et al. introduced deep learning into individual cattle identification: trunk images of the cows are extracted and fed into a convolutional neural network, achieving accurate identification of individuals; in a test on 30 cows, video-segment recognition accuracy reached 93.33%. Zhu Yinling et al. proposed a cattle-face recognition and detection model built around a CNN with ResNet and an SVM, with experimental accuracy above 95.1%. Xu et al. proposed CattleFaceNet, a novel cattle-face recognition framework integrating the lightweight RetinaFace-MobileNet with the additive angular margin loss (ArcFace), reaching a face-recognition accuracy of 91.3%. Li et al. designed a lightweight neural network with six convolutional layers for cattle-face detection; experiments on 103 cows showed a model accuracy of 98.37%. Deep-learning algorithms therefore have high application value in the detection of individual cattle.
In addition, the invention patent CN202110952783.3, titled an intelligent passageway device for acquiring beef cattle body type parameters and identifying exercise health, uses an infrared camera to capture three-dimensional infrared images of the cattle. But positioning and identifying the cattle body from three-dimensional infrared images alone is difficult, and only general health monitoring can be achieved. In the invention patent CN202110952774.4, titled an automatic beef cattle health monitoring system, the cattle face is located by a camera, and a driving unit moves the camera and an infrared temperature probe up and down in front of the animal to photograph the face and identify the individual. In practical use, however, once a cow has entered the passageway it is easily startled when the acquisition equipment moves down in front of it and rushes forward, knocking the camera and infrared temperature probe on the driving unit upward and damaging them; in particular, the infrared probe cannot stay positioned on the forehead and instead measures other parts of the face, so the design cannot satisfy practical application.
Therefore, how to design a method that satisfies practical application and identifies individual cattle from side-face images has become a technical problem urgently needing a solution.
Disclosure of Invention
The invention aims to overcome the defect in the prior art that cattle body health identification cannot meet actual use requirements, and provides a fence passageway type cattle body health data extraction device and an intelligent extraction method thereof to solve this problem.
In order to achieve the purpose, the technical scheme of the invention is as follows:
the utility model provides a fence passageway formula cattle body health data extraction element, includes the fence passageway, electronic weighing scale is installed to the bottom in fence passageway, the fence passageway on install camera A, camera B, camera C, camera D, camera E and the camera F that all is connected with the server, camera A, camera B and camera C be visible light image acquisition camera, camera D, camera E and camera F be visible light and infrared image acquisition camera.
Camera A is arranged at the top of the middle section of the fence passageway with its field of view pointing vertically downward; camera B is arranged on the left side with its field of view facing camera C; camera C is arranged on the right side with its field of view facing camera B. Camera D is arranged at the top of the front of the passageway, its field of view angled 45 degrees toward the lower rear; camera E is arranged below the left front, its viewing direction at 45 degrees to both the horizontal plane and the frontal vertical plane; camera F is arranged below the right front at the same angles.
An intelligent extraction method of a fence passageway type cattle health data extraction device comprises the following steps:
acquiring the state of cattle entering and leaving the fence: monitoring the data of the electronic weighing scale in real time; when the reading exceeds a threshold, recording the time the cow enters the passageway as t1; when the reading falls below the threshold, recording the time the cow leaves as t2; storing the cow's weight between t1 and t2 together with the image data collected by camera A, camera B, camera C, camera D, camera E and camera F as the health data record of the current animal;
synthesizing the frontal cow-face image: synthesizing a frontal, non-tilted visible-light cow-face image from the three-angle visible-light face images collected by camera D, camera E and camera F;
identifying the identity of the cow: from the synthesized frontal face image, locating the cow face with a yolov5 target detection model and the eyes, nose, ears and mouth with another yolov5 network, then calculating the following eight indexes: the number of eye pixels, the average gray value of the eye region, the distance between the eye centers, the number of nose pixels, the average gray value of the nose region, the number of mouth pixels, the average gray value of the mouth region, and the distance between the ears, and matching them against the cow face images in the database to judge the cow's identity;
calculating the bovine body type parameters: obtaining the images collected by camera A, camera B and camera C as the cow passes through the fence, separating the cow from the background by an image segmentation method, calculating the body length, body width and body height in each image, and taking the maximum values as the cow's body length, body width and body height:
BL = max(bl_1, bl_2, ..., bl_n),
BW = max(bw_1, bw_2, ..., bw_n),
BH = max(bh_1, bh_2, ..., bh_n),
where BL, BW and BH represent the measured body length, body width and body height of the cow respectively, bl_n, bw_n and bh_n represent the body length, body width and body height obtained from each analyzed image, and n is the number of images acquired;
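The max-over-frames rule above can be sketched as a small helper (illustrative only; the patent supplies no code, and the function name is hypothetical):

```python
def body_type_parameters(per_frame):
    """Combine per-frame (body length, body width, body height) estimates
    as BL = max(bl_1..bl_n), BW = max(bw_1..bw_n), BH = max(bh_1..bh_n)."""
    if not per_frame:
        raise ValueError("no frames were analyzed")
    bls, bws, bhs = zip(*per_frame)
    return max(bls), max(bws), max(bhs)
```

Taking the per-dimension maximum compensates for frames in which the cow is partly occluded or not fully extended, where each dimension would be underestimated.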
measurement of cattle body temperature data:
synthesizing the infrared images collected by camera D, camera E and camera F at the same moment into a frontal infrared image of the cow face using the ITG network model, segmenting the face from the background with a u-net semantic segmentation model, and calculating the length and width of the face;
aligning the synthesized infrared face image with the synthesized visible-light face image, so that the positions of the eyes and the midpoint of the nostrils located in the visible-light image can be used directly;
taking the point at 0.2 times the face length H from the midpoint of the line connecting the eyes as the circle center and 0.4 times the face width W as the radius, treating the resulting circle as the forehead region, and taking the mean infrared value within it as the cow's body temperature for that frame;
analyzing all forehead infrared images of the cow, calculating the body temperature of each frame as above, and taking the maximum value as the body-temperature measurement result:
BT = max(bt_1, bt_2, ..., bt_n),
where BT is the final measurement of the bovine body temperature, bt_n is the body temperature calculated from each frame of image, and n is the number of images acquired;
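The forehead-ROI averaging and the max-over-frames rule might look as follows (a hypothetical sketch: the exact placement of the circle center relative to the eye midpoint is an assumption, as is the numpy implementation):

```python
import numpy as np

def forehead_mean(ir_frame, eye_mid_rc, face_len, face_w):
    """Mean value inside a circular forehead ROI of a 2-D infrared frame.
    The center is placed 0.2*face_len above the midpoint between the eyes
    and the radius is 0.4*face_w; the proportions follow the description,
    the geometric construction itself is an assumption."""
    h, w = ir_frame.shape
    cy, cx = eye_mid_rc[0] - 0.2 * face_len, eye_mid_rc[1]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (0.4 * face_w) ** 2
    return float(ir_frame[mask].mean())

def body_temperature(per_frame_temps):
    """BT = max(bt_1, ..., bt_n)."""
    return max(per_frame_temps)
```

Taking the maximum over frames favors readings where the forehead faces the camera squarely, since oblique infrared readings underestimate surface temperature.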
measurement of bovine body weight: removing abnormal values from the measured cattle weight data, fitting the abnormal values into a normal distribution curve, and taking the mean value of the normal distribution curve as the cattle weight data;
storing the cattle body health data: storing the cow's identity, body type parameters, body temperature data and body weight data to form the cattle body health data.
The method for synthesizing the cattle frontal image comprises the following steps:
constructing a two-path full convolution network model;
training a two-way full convolution network model;
constructing an ITG network model for synthesizing the front face image of the cow face:
an image synthesis algorithm is designed on the basis of a deep learning full convolution network model; the synthesized frontal, non-tilted face image is:
ITG(image_4, image_5, image_6) = image_front,
where image_4, image_5 and image_6 are the images collected by camera D, camera E and camera F respectively, ITG is the deep learning full convolution network model, and image_front is the synthesized frontal, non-tilted cow-face image; both visible-light and infrared frontal images are synthesized;
the ITG network model comprises three identical two-way full convolution network models, which respectively process the cow-face images shot from above, from the left and from the right, producing three feature maps, i.e. frontal cow-face feature maps from three different angles;
the three frontal feature maps are combined into a 9-channel feature map to be processed;
the 9-channel feature map passes through 3 convolutional layers and 2 pooling layers that extract and fuse the features of the three views, and then through 1 deconvolution layer that restores the resolution, yielding the synthesized frontal face image;
the images collected by camera D, camera E and camera F are acquired, input into the ITG network model, and the synthesized real-time frontal cow-face image is output.
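The channel-stacking step can be illustrated at shape level (numpy stands in for the network's concatenation layer; the channel-last layout is an assumption):

```python
import numpy as np

def stack_view_features(feat_top, feat_left, feat_right):
    """Combine the three 3-channel frontal feature maps (one per camera
    view) into the 9-channel map consumed by the fusion convolutions."""
    assert feat_top.shape == feat_left.shape == feat_right.shape
    assert feat_top.shape[-1] == 3
    return np.concatenate([feat_top, feat_left, feat_right], axis=-1)
```

Stacking along the channel axis keeps the three views spatially registered, so the subsequent convolutions can weigh corresponding pixels from all three angles at once.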
The identification of the cow's identity comprises the following steps:
detecting the position of the cow face in the synthesized frontal visible-light image with a yolov5 network model;
locating the positions of the eyes, nose, ears and mouth with another yolov5 network model, then matching against the cow face images in the database, the matching degree being calculated from the following eight index features: the number of eye pixels, the average gray value of the eye region, the distance between the eye centers, the number of nose pixels, the average gray value of the nose region, the number of mouth pixels, the average gray value of the mouth region, and the distance between the ears;
taking one face image for analysis, comparing its eight index features with the images of all cows in the database, and keeping a result only when all eight exceed a threshold; analyzing all face images captured between t1 and t2 in this way, taking the best result as the cow's face image, and identifying the animal from the database.
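The threshold-and-best-match logic can be sketched as follows (the relative-difference similarity metric, the 0.9 threshold, and all names are illustrative assumptions; the patent does not define the matching formula):

```python
FEATURES = ("eye_pixels", "eye_gray", "eye_center_dist", "nose_pixels",
            "nose_gray", "mouth_pixels", "mouth_gray", "ear_dist")

def similarity(a, b):
    """Per-index similarity in [0, 1] based on relative difference."""
    return 1.0 - abs(a - b) / max(abs(a), abs(b), 1e-9)

def match_identity(query, database, threshold=0.9):
    """Keep only candidates whose eight index similarities all exceed the
    threshold, then return the best-scoring identity (or None)."""
    best_id, best_score = None, -1.0
    for cow_id, ref in database.items():
        sims = [similarity(query[f], ref[f]) for f in FEATURES]
        if min(sims) >= threshold:
            score = sum(sims) / len(sims)
            if score > best_score:
                best_id, best_score = cow_id, score
    return best_id
```

Requiring every index to pass the threshold before ranking reduces false matches from a single coincidentally similar feature.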
The construction of the two-way full convolution network model comprises the following steps:
the two-way full convolution network model comprises three parts: a local feature extraction part and a global feature extraction part arranged in parallel, and a fusion part; the input of the local feature extraction part is a marked image, and the input of the global feature extraction part is an unmarked image;
the local feature extraction part comprises, in order, an input layer, convolutional layer 1, convolutional layer 2, convolutional layer 3, convolutional layer 4, deconvolution layer 1 and deconvolution layer 2;
the global feature extraction part comprises, in order, an input layer, convolutional layer 5, convolutional layer 6, convolutional layer 7, convolutional layer 8 and deconvolution layer 3;
the fusion part comprises, in order, a stacking layer, convolutional layer 9 and convolutional layer 10; the stacking layer superimposes the local and global feature extraction results to obtain the frontal cow-face feature map for the current angle.
The training method of the two-path full convolution network model comprises the following steps:
the training dataset contains not less than 3000 groups of cow face images, each group comprising a frontal, an upper-side, a left-side and a right-side image; the upper, left and right images are fed into the network, the trained result is compared with the frontal image, the error is calculated with a Pixel-wise strategy, and training uses the SGD gradient descent method with the learning rate initially set to 0.1, the decay rate to 0.01, the inertia coefficient to 0.1, the maximum number of training iterations to 5000, and the probability to 15;
the model obtained after training is verified with a prediction set containing not less than 1000 groups of cow face images; when the error between the synthesized image and the ground-truth image is less than a threshold, the model has finished training.
Advantageous effects
Compared with the prior art, the fence passageway type cattle health data extraction device and its intelligent extraction method generate a frontal cattle image through an image synthesis strategy, solving the problem that the frontal image acquisition equipment of passageway-type cattle health monitoring devices is easily damaged by cattle, and the synthesized face image allows accurate non-contact measurement of the temperature at the cow's forehead; stress reactions are reduced, injury to the cattle is avoided, and the whole system is simple to operate and highly practical.
The invention also has the following advantages:
1. Based on deep learning technology, frontal cattle images are synthesized from three cameras, solving the problem that the frontal image acquisition equipment of passageway-type devices is easily damaged by cattle;
2. Beef cattle weight and body temperature data are acquired rapidly and without contact, improving the accuracy and efficiency of health data collection.
Drawings
FIG. 1 is a top view of the structure of a cattle health data extraction device according to the present invention;
FIG. 2 is a method sequence framework diagram of an intelligent extraction method according to the present invention;
FIG. 3 is a diagram of a two-way full convolution network model architecture in accordance with the present invention;
FIG. 4 is a synthetic diagram of a bovine face according to the present invention;
the system comprises a camera A, a camera B, a camera C, a camera D, a camera E, a camera F, a fence passageway and an electronic weighing scale, wherein the camera A is 1-a camera A, the camera B is 2-a camera B, the camera C is 3-a camera D, the camera E is 5-a camera F, the camera F is 6-a camera F, and the electronic weighing scale is 8-an electronic weighing scale.
Detailed Description
So that the above-recited features of the present invention can be clearly understood, the invention is described in further detail below with reference to specific embodiments, some of which are illustrated in the appended drawings:
the invention discloses a fence passageway type cattle body health data extraction device and an intelligent extraction method thereof in order to realize automatic acquisition of health data of beef cattle, and the background technologies of the device comprise an animal husbandry breeding technology, a machine vision technology, a deep learning technology and a sensor technology. The livestock breeding technology comprises the following steps: comprises a beef cattle slaughtering channel design technology; machine vision technology: the method comprises a visible light and infrared camera image acquisition technology and an image synthesis technology; deep learning technology: target detection techniques including convolutional neural network techniques and the like; sensor technology: including beef cattle weight measurement sensor technology.
As shown in FIG. 1, the fence passageway type cattle health data extraction device comprises a fence passageway 7, and an electronic weighing scale 8 is installed at the bottom of the fence passageway 7.
Camera A1, camera B2, camera C3, camera D4, camera E5 and camera F6, all connected to a server, are mounted on the fence passageway 7; camera A1, camera B2 and camera C3 are visible-light image acquisition cameras, and camera D4, camera E5 and camera F6 are visible-light and infrared image acquisition cameras.
Because a camera placed directly in front of the cow's face is easily struck by the animal, while the frontal face image is essential data for acquiring cattle health information, the movement of the cattle is restricted inside the fence passageway so that the animal can only move forward; cameras are arranged at the sides and above the animal, and the side and top images are processed by an algorithm to obtain the frontal cow-face image.
Camera A1 is arranged at the top of the middle section of the fence passageway 7 with its shooting range pointing vertically downward, obtaining a top-view image of the middle of the cow's body. Camera B2 is arranged on the left side of the fence passageway 7 with its shooting range facing camera C3, and camera C3 is arranged on the right side facing camera B2; they obtain images of the cow's left and right flanks respectively.
Camera D4 is arranged at the top of the front of the fence passageway 7, its shooting range angled 45 degrees toward the lower rear, obtaining a top image of the cow's head. Camera E5 is arranged below the left front of the fence passageway 7, its shooting direction at 45 degrees to both the ground (horizontal plane) and the vertical plane through the front of the cow's face, obtaining an image of the left half of the face with the nose as the approximate boundary. Camera F6 is arranged below the right front at the same angles and likewise obtains an image of the right half of the face.
As shown in FIG. 2, an intelligent extraction method of the fence passageway type cattle health data extraction device is also provided, comprising the following steps:
step one, acquiring the state of the cattle entering and exiting the fence: monitoring data of the electronic weight scale 8 in real time, and recording the time when the cattle enter the passageway as t1 when the data of the weight scale exceeds a threshold value; when the data of the weighing scale is lower than the threshold value, the time when the cow leaves the passageway is recorded as t2, the weight of the cow at the time between t1 and t2 and the image data collected by the camera A1, the camera B2, the camera C3, the camera D4, the camera E5 and the camera F6 are stored and recorded as the health data label of the current cow body.
Step two, synthesizing the frontal cow-face image: synthesize frontal, non-tilted face images from the three-angle face images acquired by camera D4, camera E5 and camera F6.
The frontal-image synthesis technology used in the invention is feature-level image fusion: feature information is extracted from the side-face images, then analyzed, processed and integrated to obtain a fused frontal cow-face image, whose target-recognition accuracy is clearly higher than that of the original images. Feature-level fusion compresses the image information before it is analyzed by the computer, reducing memory and time consumption compared with pixel-level fusion and improving the real-time performance of the processing.
The method for synthesizing the cattle frontal image comprises the following steps:
(1) As shown in fig. 3 and table 1, a two-way full convolution network model was constructed.
A1) The two-way full convolution network model comprises three parts: a local feature extraction part and a global feature extraction part arranged in parallel, and a fusion part; the input of the local feature extraction part is a marked image, and the input of the global feature extraction part is an unmarked image;
A2) The local feature extraction part comprises, in order, an input layer, convolutional layer 1, convolutional layer 2, convolutional layer 3, convolutional layer 4, deconvolution layer 1 and deconvolution layer 2;
A3) The global feature extraction part comprises, in order, an input layer, convolutional layer 5, convolutional layer 6, convolutional layer 7, convolutional layer 8 and deconvolution layer 3;
A4) The fusion part comprises, in order, a stacking layer, convolutional layer 9 and convolutional layer 10; the stacking layer superimposes the local and global feature extraction results to obtain the frontal cow-face feature map for the current angle.
TABLE 1 two-way full convolution network model structure table
The corresponding code is as follows:
a. Local feature extraction section
# Keras functional API; an explicit Input layer replaces the undefined
# model.output reference of the original snippet
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose, Dropout, Concatenate
inp_local = Input(shape=(256, 256, 3))  # marked image
o1 = Conv2D(filters=1024, kernel_size=(5, 5), padding="same", activation="relu")(inp_local)
o1 = Conv2D(filters=512, kernel_size=(3, 3), padding="same", activation="relu")(o1)
o1 = Conv2D(filters=512, kernel_size=(3, 3), padding="same", activation="relu")(o1)
o1 = Dropout(rate=0.5)(o1)
o1 = Conv2D(filters=512, kernel_size=(3, 3), padding="same", activation="relu")(o1)
o1 = Conv2DTranspose(filters=512, kernel_size=(32, 32), strides=(4, 4), padding="valid", activation=None, name="score1")(o1)
o1 = Conv2DTranspose(filters=512, kernel_size=(32, 32), strides=(4, 4), padding="valid", activation=None, name="score2")(o1)
b. Global feature extraction section
inp_global = Input(shape=(256, 256, 3))  # unmarked image
o2 = Conv2D(filters=128, kernel_size=(5, 5), padding="same", activation="relu")(inp_global)
o2 = Conv2D(filters=128, kernel_size=(3, 3), padding="same", activation="relu")(o2)
o2 = Conv2D(filters=128, kernel_size=(3, 3), padding="same", activation="relu")(o2)
o2 = Dropout(rate=0.5)(o2)
o2 = Conv2D(filters=64, kernel_size=(3, 3), padding="same", activation="relu")(o2)
o2 = Conv2DTranspose(filters=2, kernel_size=(32, 32), strides=(4, 4), padding="valid", activation=None, name="score3")(o2)
c. Fusion layer
# the two branches are stacked channel-wise (Concatenate replaces the
# original torch.stack call, which does not exist in Keras); the spatial
# sizes of the two branches must match at this point
o3 = Concatenate(axis=-1)([o1, o2])
o3 = Conv2D(filters=8, kernel_size=(3, 3), padding="same", activation="relu")(o3)
o3 = Conv2D(filters=3, kernel_size=(3, 3), padding="same", activation="relu")(o3)
The above code is explained as follows:
conv2D: a convolution layer; conv2DTranspose: a deconvolution layer; a Filter: number of feature maps; kernel _ size: convolution kernel size; padding: an edge processing mode; activation is the activation of a function;
dropout, randomly freezing a certain proportion of nodes; stack: the channels (feature maps) are stacked.
(2) Training of the two-way full convolution network model.
B1) The training data set contains not less than 3000 groups of front, upper-side, left-side and right-side cow face images. The upper-side, left-side and right-side images are fed into the network, the output is compared with the front image, the error is computed with a pixel-wise strategy, and the model is trained with SGD gradient descent; the learning rate is initially set to 0.1, the decay rate is set to 0.01, the momentum (inertia) coefficient is set to 0.1, the maximum number of training iterations is set to 5000, and the probability is set to 15;
B2) The trained model is verified on a prediction set of not less than 1000 groups of cow face images; the model completes training when the error between the synthesized image and the ground-truth image falls below the threshold.
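The patent gives only hyperparameters for this training procedure, not code. The sketch below is a minimal NumPy illustration of the pixel-wise error and one SGD step with momentum using the stated values; the inverse-time decay schedule and all variable names are assumptions, not the patent's implementation.

```python
import numpy as np

LEARNING_RATE = 0.1   # initial learning rate, per the description
DECAY_RATE = 0.01     # decay rate, per the description
MOMENTUM = 0.1        # momentum ("inertia") coefficient, per the description

def pixel_wise_error(pred, target):
    """Mean squared error computed over every pixel independently."""
    return float(np.mean((pred - target) ** 2))

def sgd_step(weights, grad, velocity, epoch):
    """One SGD update with momentum and an assumed inverse-time decay."""
    lr = LEARNING_RATE / (1.0 + DECAY_RATE * epoch)
    velocity = MOMENTUM * velocity - lr * grad
    return weights + velocity, velocity

# A zero prediction against an all-ones target gives unit pixel-wise error.
print(pixel_wise_error(np.zeros((4, 4)), np.ones((4, 4))))  # 1.0
```

A full training loop would repeat `sgd_step` over mini-batches until the B2) error threshold is met or 5000 iterations elapse.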
(3) Construction of the ITG network model for synthesis of the frontal cow face image:
An image synthesis algorithm is designed based on the deep learning full convolution network model; the synthesized frontal, non-tilted cow face image is:
ITG(image_4, image_5, image_6) = image_front,
wherein image_4, image_5 and image_6 are the images collected by camera D4, camera E5 and camera F6 respectively, ITG is the deep learning full convolution network model, and image_front is the synthesized frontal non-tilted cow face image; both visible-light and infrared frontal cow face images are synthesized.
C1) The ITG network model is set to comprise three identical two-way full convolution network models, which respectively process the cow face images captured from the upper, left and right sides to obtain three feature maps, i.e. frontal cow face feature maps at three different angles;
C2) The frontal cow face feature maps at the three different angles are combined into a 9-channel feature map to be processed;
Its corresponding PyTorch code is as follows:
p123=torch.cat((p1, p2, p3), dim=1)
wherein p1, p2 and p3 respectively represent the frontal feature maps generated from the three viewing angles, p123 represents the combined 9-channel result, and cat concatenates the feature maps along the channel dimension.
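The channel-wise merge in C2) can be checked with a small self-contained sketch. Note that in PyTorch, merging three 3-channel maps into one 9-channel map corresponds to `torch.cat` along the channel dimension (`torch.stack` would instead insert a new axis). NumPy stands in here so the example runs without a deep-learning framework; all shapes are illustrative.

```python
import numpy as np

# Three frontal feature maps in (C, H, W) layout, one per viewing angle.
p1 = np.zeros((3, 64, 64))
p2 = np.ones((3, 64, 64))
p3 = np.full((3, 64, 64), 2.0)

# Channel-wise concatenation: 3 + 3 + 3 = 9 channels.
p123 = np.concatenate((p1, p2, p3), axis=0)
print(p123.shape)  # (9, 64, 64)
```

With batched (N, C, H, W) tensors the channel axis would be `dim=1` rather than `axis=0`.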
C3) The 9-channel feature map to be processed is passed through 3 convolutional layers (the parameters of the multi-view image feature fusion convolutional layers are shown in Table 2) and 2 pooling layers to extract and fuse the features of the three view images, and the resolution is then recovered through 1 deconvolution layer (its parameters are shown in Table 3), yielding the synthesized frontal cow face image;
TABLE 2 Multi-view image feature fusion convolutional layer parameter table
Layer                  Kernel size    Number of feature maps
Convolutional layer 1  3*3            128
Convolutional layer 2  3*3            64
Convolutional layer 3  3*3            2
The corresponding code is as follows:
s1=Conv2D(filters=128,kernel_size=(3,3),padding="same",activation="relu")(model.output)
s1=Conv2D(filters=64,kernel_size=(3,3),padding="same",activation="relu")(s1)
s1=Conv2D(filters=8,kernel_size=(3,3),padding="same",activation="relu")(s1)
TABLE 3 Deconvolution layer parameter table
Layer                  Kernel size    Number of feature maps
Deconvolution layer 1  32*32          2
The corresponding code is as follows:
T1=Conv2DTranspose(filters=2,kernel_size=(32,32),strides=(4,4),padding="valid",activation=None,name="score2")(model.output)。
(4) The images collected by camera D4, camera E5 and camera F6 are acquired and input into the ITG network model, which outputs the synthesized real-time frontal cow face image.
Third, recognition of the cattle identity.
From the synthesized frontal, non-tilted cow face image, a conventional yolov5 target detection model is used to locate the cow face, and another yolov5 network is used to locate the eyes, nose, ears and mouth. The following eight indices are calculated: the number of eye pixels, the average gray value of the eye region, the distance between the eye centers, the number of nose pixels, the average gray value of the nose region, the number of mouth pixels, the average gray value of the mouth region, and the distance between the ears. These indices are matched against the cow face images in the database to determine the cow's identity.
(1) According to the synthesized frontal visible-light cow face image, the position of the cow face in the synthesized image is detected with a yoloV5 network model.
(2) The positions of the eyes, nose, ears and mouth in the image are located with another yoloV5 network model, and the image is then matched against the cow face images in the database; the matching degree is computed from the following eight index features: the number of eye pixels, the average gray value of the eye region, the distance between the eye centers, the number of nose pixels, the average gray value of the nose region, the number of mouth pixels, the average gray value of the mouth region, and the distance between the ears;
each cow face image is analyzed by comparing these eight index features with the images of all cattle in the database, and a result is retained only when all eight features exceed the threshold; all cow face images from time t1 to time t2 are analyzed in this way, the best result is taken as the cow's face image, and the cow's identity is retrieved from the database.
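As a sketch of how the eight index features could be computed, assuming the yolov5 detections have already been converted into boolean region masks and center coordinates (these input names and formats are assumptions, not the patent's API):

```python
import numpy as np

def face_indices(gray, eye_mask, nose_mask, mouth_mask, eye_centers, ear_centers):
    """Compute the eight matching indices over an aligned frontal face image.
    gray: 2-D grayscale array; *_mask: boolean arrays of the same shape;
    eye_centers / ear_centers: pairs of (row, col) coordinates."""
    (ex1, ey1), (ex2, ey2) = eye_centers
    (ax1, ay1), (ax2, ay2) = ear_centers
    return {
        "eye_pixels": int(eye_mask.sum()),
        "eye_mean_gray": float(gray[eye_mask].mean()),
        "eye_distance": float(np.hypot(ex2 - ex1, ey2 - ey1)),
        "nose_pixels": int(nose_mask.sum()),
        "nose_mean_gray": float(gray[nose_mask].mean()),
        "mouth_pixels": int(mouth_mask.sum()),
        "mouth_mean_gray": float(gray[mouth_mask].mean()),
        "ear_distance": float(np.hypot(ax2 - ax1, ay2 - ay1)),
    }
```

Matching would then compare each index against the database entry for a candidate cow and keep the result only when all eight similarities exceed the threshold.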
Fourth, calculation of the cattle body-type parameters.
Images collected by camera A1, camera B2 and camera C3 while the cow passes through the fence are acquired, the cow is separated from the background by an image segmentation method, and the body length, body width and body height are calculated in each image; the maxima are taken as the cow's body length, body width and body height, and the body-type parameters are computed as follows:
BL=max(bl1, bl2, ……, bln),
BW=max(bw1, bw2, ……, bwn),
BH=max(bh1, bh2, ……, bhn),
wherein BL, BW and BH respectively represent the measured body length, body width and body height of the cow, bli, bwi and bhi respectively represent the body length, body width and body height obtained from the i-th image, and n is the number of images acquired.
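The per-image maximum rule above reduces to a few lines; a minimal sketch, with the measurement values purely illustrative:

```python
def body_type_parameters(measurements):
    """measurements: list of (bl, bw, bh) tuples, one per captured image.
    Returns (BL, BW, BH): the per-dimension maxima over all images."""
    bls, bws, bhs = zip(*measurements)
    return max(bls), max(bws), max(bhs)

# Two hypothetical per-image measurements in metres.
print(body_type_parameters([(1.8, 0.5, 1.3), (1.9, 0.55, 1.25)]))  # (1.9, 0.55, 1.3)
```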
Fifth, measurement of the cattle body temperature data.
In practical application the cow's head moves freely in the fence passageway, so a contact body-temperature sensor cannot reliably read the forehead temperature and the body temperature cannot be measured accurately. This method instead exploits the synthesized cow face image: based on the correspondence between the visible-light and infrared images, the body temperature data are obtained from the infrared image, achieving an accurate measurement.
(1) The infrared images collected by camera D4, camera E5 and camera F6 at the same moment are synthesized into a frontal infrared image of the cow face with the ITG network model, the cow face is segmented from the background with a u-net semantic segmentation model, and the length and width of the cow face are calculated.
This exploits the fact that camera D4, camera E5 and camera F6 collect both visible-light and infrared images, so the same ITG network model performs both kinds of synthesis without increasing the complexity of the industrial data processing. In the frontal-image synthesis step, the ITG network model takes the visible-light images as input to obtain the synthesized visible-light frontal cow face image, after which the yoloV5 network model locates the facial feature points. In the body-temperature measurement step, the ITG network model takes the infrared images as input to obtain the frontal infrared cow face image; the visible-light and infrared frontal images are then aligned, so the key positions in the cow face (the eyes, the midpoint between the nostrils, the forehead, etc.) can be located directly. This solves the prior-art problem that temperature-measuring equipment cannot obtain the temperature at a precise position on the cow face.
(2) The synthesized infrared cow face image is aligned with the visible-light cow face image, and the positions of the eyes and of the midpoint between the nostrils are located directly in the cow face image;
a circle is taken with its center at a distance of 0.2 times the face length H from the midpoint of the line connecting the eyes and with 0.4 times the face width W as its radius; this circular region gives the forehead position, and the average infrared value within it is taken as the cow's body temperature for that frame.
(3) All forehead infrared images of a cow are analyzed, the body temperature of each frame is calculated as above, and the maximum is taken as the body-temperature measurement result:
BT=max(bt1, bt2, ……, btn),
wherein BT is the final body-temperature measurement, bti is the body temperature calculated from the i-th frame, and n is the number of frames acquired;
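A minimal sketch of the forehead-region temperature and the per-frame maximum, assuming the circle center lies 0.2·H from the eye midpoint along the vertical image axis (one possible reading of the geometry described above) and that the infrared image stores temperature values directly:

```python
import numpy as np

def forehead_temperature(ir_image, eye_mid, face_len, face_wid):
    """Mean IR value inside a circle of radius 0.4*face_wid whose center is
    0.2*face_len above the eye midpoint (row, col). The center offset
    direction is an assumed interpretation of the patent's geometry."""
    cy, cx = eye_mid[0] - 0.2 * face_len, eye_mid[1]
    r = 0.4 * face_wid
    ys, xs = np.ogrid[:ir_image.shape[0], :ir_image.shape[1]]
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2
    return float(ir_image[mask].mean())

def body_temperature(frame_temps):
    """BT = max(bt1, ..., btn) over all analyzed frames."""
    return max(frame_temps)

print(body_temperature([38.2, 38.6, 38.4]))  # 38.6
```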
Sixth, measurement of the cattle body weight: outliers are removed from the measured weight data, the remaining data are fitted to a normal distribution curve, and the mean of the curve is taken as the cow's weight.
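The outlier-removal procedure is not specified in detail; the sketch below uses a single-pass z-score filter with an assumed 1.5σ cutoff, then takes the mean of the surviving samples, which equals the mean of the normal distribution fitted to them:

```python
import statistics

def cattle_weight(samples, z_thresh=1.5):
    """Drop samples beyond z_thresh standard deviations of the raw mean,
    then return the mean of the rest (the fitted normal's mean).
    The z_thresh cutoff is an assumed parameter, not from the patent."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    kept = [s for s in samples if sigma == 0 or abs(s - mu) <= z_thresh * sigma]
    return statistics.mean(kept)

# A transient 900 kg spike (e.g. two animals on the scale) is discarded.
print(cattle_weight([412.0, 413.5, 412.8, 900.0, 411.9]))
```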
Seventh, storage of the cattle health data: the cattle identity, body-type parameters, body temperature data and body weight data are stored together to form the cattle health data.
The device uses industrial cameras to collect beef-cattle image information and an infrared camera to collect body-temperature information, while the electronic weighing scale 8 measures the weight; all data are transmitted to a server over a gigabit network, and the cattle's body-type parameters, weight and temperature data are obtained by background algorithm analysis. The intelligent passageway is 2.6 m long, 1.4 m wide and 2.0 m high, with a partition board installed 0.2 m from each side in the width direction to protect the cameras. A steel cavity is welded at the top of the intelligent passageway for convenient hoisting and transport of the device. As shown in FIG. 4, by using the method of the present invention, the cow face images are synthesized and located, and the health data are then matched to realize monitoring of the cattle's state.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (7)

1. A fence passageway type cattle body health data extraction device, comprising a fence passageway (7) with an electronic weighing scale (8) installed at its bottom, characterized in that: a camera A (1), a camera B (2), a camera C (3), a camera D (4), a camera E (5) and a camera F (6) are installed on the fence passageway (7) and are all connected with a server; camera A (1), camera B (2) and camera C (3) are visible-light image acquisition cameras, and camera D (4), camera E (5) and camera F (6) are visible-light and infrared image acquisition cameras.
2. The fence passageway type cattle body health data extraction device as claimed in claim 1, characterized in that: camera A (1) is installed at the top of the middle of the fence passageway (7) with its shooting range directed vertically downward; camera B (2) is installed on the left side of the fence passageway (7) with its shooting range facing camera C (3); camera C (3) is installed on the right side of the fence passageway (7) with its shooting range facing camera B (2); camera D (4) is installed at the top of the front of the fence passageway (7) with its shooting range at 45 degrees toward the rear and downward; camera E (5) is installed at the lower left front of the fence passageway (7) with its shooting range at 45 degrees to both the horizontal plane and the front-view section; and camera F (6) is installed at the lower right front of the fence passageway (7) with its shooting range at 45 degrees to both the horizontal plane and the front-view section.
3. The intelligent extraction method of the fence aisle-type cattle health data extraction device as claimed in claim 1, comprising the following steps:
31 Acquisition of cattle entry and exit fence status: monitoring data of the electronic weight scale (8) in real time, and recording the time t1 when the cow enters the passageway when the data of the weight scale exceeds a threshold value; when the data of the weighing scale is lower than a threshold value, recording the time when the cow leaves the passageway as t2, storing the weight of the cow at the time between t1 and t2 and image data collected by a camera A (1), a camera B (2), a camera C (3), a camera D (4), a camera E (5) and a camera F (6), and recording the data as a health data label of the current cow body;
32 ) synthetic processing of the bovine frontal image: synthesizing a visible light image of the cow face without the inclination on the front side by using the visible light images of the cow face at three angles acquired by the camera D (4), the camera E (5) and the camera F (6);
33) Identification of the cattle identity: according to the synthesized frontal, non-tilted cow face image, a yolov5 target detection model is used to locate the cow face, and another yolov5 network is used to locate the eyes, nose, ears and mouth; the following eight indices are calculated: the number of eye pixels, the average gray value of the eye region, the distance between the eye centers, the number of nose pixels, the average gray value of the nose region, the number of mouth pixels, the average gray value of the mouth region, and the distance between the ears; these are matched against the cow face images in the database to determine the cow's identity;
34 Calculation of bovine body type parameters: the method comprises the steps of obtaining images collected by a camera A (1), a camera B (2) and a camera C (3) when a cow passes through a fence, separating the cow from a background by an image segmentation method, calculating the body length, the body width and the body height of the cow in each image, taking the maximum value as the body length, the body width and the body height of the cow, and calculating body type parameters of the cow, wherein the expression is as follows:
BL=max(bl1, bl2, ……, bln),
BW=max(bw1, bw2, ……, bwn),
BH=max(bh1, bh2, ……, bhn),
wherein BL, BW and BH respectively represent the measured body length, body width and body height of the cow, bli, bwi and bhi respectively represent the body length, body width and body height obtained from the i-th image, and n is the number of images acquired;
35 Measurement of bovine body temperature data:
351 Using an ITG network model, synthesizing infrared images collected by a camera D (4), a camera E (5) and a camera F (6) at the same time into a front infrared image of the cattle face, segmenting the cattle face and a background through a u-net semantic segmentation model, and calculating to obtain the length and the width of the cattle face;
352) The synthesized infrared cow face image is aligned with the visible-light cow face image, and the positions of the eyes and of the midpoint between the nostrils are located directly in the cow face image;
a circle is taken with its center at a distance of 0.2 times the face length H from the midpoint of the line connecting the eyes and with 0.4 times the face width W as its radius; this circular region gives the forehead position, and the average infrared value within it is taken as the cow's body temperature for that frame;
353) All forehead infrared images of a cow are analyzed, the body temperature of each frame is calculated as above, and the maximum is taken as the body-temperature measurement result:
BT=max(bt1, bt2, ……, btn),
wherein BT is the final body-temperature measurement, bti is the body temperature calculated from the i-th frame, and n is the number of frames acquired;
36 Measurement of bovine body weight: removing abnormal values from the measured cattle weight data, fitting the abnormal values into a normal distribution curve, and taking the mean value of the normal distribution curve as the cattle weight data;
37 Storage of bovine health data: and storing the cattle body identity, the cattle body type parameters, the cattle body temperature data and the cattle body weight data to form cattle body health data.
4. The intelligent extraction method of the fence aisle type cattle body health data extraction device according to claim 3, wherein the synthesis processing of the cattle front image comprises the following steps:
41 Constructing a two-way full convolution network model;
42 Training of a two-way full convolution network model;
43 Constructing an ITG network model for the synthesis of the face front image of the cow:
an image synthesis algorithm is designed based on a deep learning full convolution network model, and the front non-inclined plane part image of a synthetic cow is as follows:
ITG(image_4,image_5,image_6)=image_front,
wherein image_4, image_5 and image_6 are the images collected by camera D (4), camera E (5) and camera F (6) respectively, ITG is the deep learning full convolution network model, and image_front is the synthesized frontal non-tilted cow face image; both visible-light and infrared frontal cow face images are synthesized;
431 Setting an ITG network model to comprise three same two-path full convolution network models, and respectively processing the cow face images shot on the upper side, the left side and the right side to obtain three characteristic maps, namely generating the cow face front characteristic maps with three different angles;
432 Setting to combine the feature maps of the front face of the cow face at three different angles into a 9-channel feature map to be processed;
433 Setting a 9-channel feature map to be processed to pass through 3 layers of convolution layers and 2 pooling layers, extracting and fusing features of three visual-angle images, and recovering resolution through 1 layer of deconvolution layer to obtain a synthesized face front image;
44 Images collected by the camera D (4), the camera E (5) and the camera F (6) are obtained and input into the ITG network model, and a synthesized real-time cattle face front image is output.
5. The intelligent extraction method of the fence aisle type cattle health data extraction device as claimed in claim 3, wherein the identification of the cattle identity comprises the following steps:
51 Detecting the position of the cow face in the synthesized image by using a yoloV5 network model according to the synthesized cow face front visible light image;
52) The positions of the eyes, nose, ears and mouth in the image are located with another yoloV5 network model, and the image is then matched against the cow face images in the database; the matching degree is computed from the following eight index features: the number of eye pixels, the average gray value of the eye region, the distance between the eye centers, the number of nose pixels, the average gray value of the nose region, the number of mouth pixels, the average gray value of the mouth region, and the distance between the ears;
each cow face image is analyzed by comparing these eight index features with the images of all cattle in the database, and a result is retained only when all eight features exceed the threshold; all cow face images from time t1 to time t2 are analyzed in this way, the best result is taken as the cow's face image, and the cow's identity is retrieved from the database.
6. The intelligent extraction method of the fence aisle-type cattle body health data extraction device according to claim 4, wherein the construction of the two-way full convolution network model comprises the following steps:
61 The two-path full convolution network model is set to comprise two parts, namely a local feature extraction part, a global feature extraction part and a fusion part which are arranged in parallel, wherein the input of the local feature extraction part is a marked image, and the input of the global feature extraction part is an unmarked image;
62 Set the local feature extraction part to include an input layer, a convolutional layer 1, a convolutional layer 2, a convolutional layer 3, a convolutional layer 4, an anti-convolutional layer 1, and an anti-convolutional layer 2 in this order;
63 Set the global feature extraction section to include an input layer, a convolutional layer 5, a convolutional layer 6, a convolutional layer 7, a convolutional layer 8, and a deconvolution layer 3 in this order;
64 Set the fusion portion to include the superimposed layer, the convolutional layer 9, and the convolutional layer 10 in this order; and overlapping the local feature extraction result and the global feature extraction result by an overlapping layer of the fusion part to obtain the cattle face front feature map of the current angle.
7. The intelligent extraction method of the fence aisle type cattle body health data extraction device according to claim 4, wherein the two-way full convolution network model is trained by the following method:
71 Not less than 3000 groups of training data sets comprise front, upper side, left side and right side bovine face images, wherein the upper side, left side and right side images are used for inputting into a network, a trained result is compared with the front image, an error is calculated by using a Pixel-wise strategy, then an SGD gradient descent method is used for training, the learning rate is initially set to 0.1, the attenuation rate is set to 0.01, the inertia coefficient is set to 0.1, the maximum training frequency is set to 5000 times, and the probability is set to 15;
72 The model obtained after training is verified by using a prediction set, wherein the prediction set comprises not less than 1000 groups of cattle face images, and when the error between the synthesized image and a true value image is less than a threshold value, the model completes training.
CN202310001539.8A 2023-01-03 2023-01-03 Fence passageway type cattle body health data extraction device and intelligent extraction method thereof Pending CN115918571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310001539.8A CN115918571A (en) 2023-01-03 2023-01-03 Fence passageway type cattle body health data extraction device and intelligent extraction method thereof


Publications (1)

Publication Number Publication Date
CN115918571A true CN115918571A (en) 2023-04-07

Family

ID=86555961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310001539.8A Pending CN115918571A (en) 2023-01-03 2023-01-03 Fence passageway type cattle body health data extraction device and intelligent extraction method thereof

Country Status (1)

Country Link
CN (1) CN115918571A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117053875A (en) * 2023-10-10 2023-11-14 华南农业大学 Intelligent poultry phenotype measuring device and method
CN117053875B (en) * 2023-10-10 2023-12-19 华南农业大学 Intelligent poultry phenotype measuring device and method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination