CN111612645A - Livestock animal pen-entry detection method based on gun-ball linkage and BIM - Google Patents

Livestock animal pen-entry detection method based on gun-ball linkage and BIM

Info

Publication number
CN111612645A
Authority
CN
China
Prior art keywords
animal
fence
detection
livestock
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010458721.2A
Other languages
Chinese (zh)
Inventor
各珍珍
邹昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010458721.2A priority Critical patent/CN111612645A/en
Publication of CN111612645A publication Critical patent/CN111612645A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/606Protecting data by securing the transmission between two devices or processes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Bioethics (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Agronomy & Crop Science (AREA)
  • Animal Husbandry (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a livestock animal pen-entry detection method based on gun-ball linkage and BIM. The method comprises the following steps: performing animal detection on images of the large free-range area captured by a bullet camera; performing coordinate regression to obtain the coordinates of the animal key points; calling a dome camera when the distance between an animal and the fence falls below a set threshold; capturing images of the fence entrance with the dome camera and processing them into an entrance animal heat map; superimposing the entrance heat maps over a time window and adjusting the dome camera pose once the motion track disappears; capturing images of the area inside the fence and processing them into an in-pen animal heat map; counting the peak points of the hot spots in the in-pen heat map; and visualizing the building information model of the livestock breeding area with WebGIS technology. The method realizes automatic pen-entry detection, improves the efficiency and precision of pen-entry detection, and improves the security of data processing and transmission.

Description

Livestock animal pen-entry detection method based on gun-ball linkage and BIM
Technical Field
The invention relates to the technical fields of artificial intelligence, BIM, blockchain and intelligent animal husbandry, and in particular to a livestock animal pen-entry detection method based on gun-ball linkage (bullet-camera and dome-camera cooperation) and BIM.
Background
In animal husbandry, penned (captive) breeding and free-range breeding are commonly combined to obtain better meat and dairy products. The number of animals kept in pens is generally large, and the traditional approach of having a breeder count the animals by eye is no longer suited to counting livestock as they return from grazing and enter the pen.
Among current approaches to counting raised animals, some achieve the count by installing a position sensor on each animal. In practice, however, the herd size changes constantly with slaughtering, breeding and other events, so the sensor-based approach requires the implementer to fit sensors to animals promptly as conditions change, which raises the implementation cost and is inconvenient. Moreover, sensors that fall off introduce statistical bias and reduce accuracy.
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art by providing a livestock animal pen-entry detection method based on gun-ball linkage and BIM (building information modeling), which realizes automatic pen-entry detection, improves the efficiency and precision of pen-entry detection, and improves the security of data processing and transmission.
A livestock animal pen-entry detection method based on gun-ball linkage and BIM comprises the following steps:
Step one, performing animal detection on images of the large free-range area captured by the bullet camera using the livestock animal detection deep neural network to obtain an animal key-point heat map;
Step two, performing coordinate regression on the hot spots in the animal key-point heat map to obtain the animal key-point coordinates;
Step three, projecting the animal key-point coordinates into the coordinate system of the building information model of the livestock breeding area and, using the fence-entrance coordinates stored in that model, calling the dome camera when the distance between an animal and the fence is smaller than a set threshold;
Step four, capturing fence-entrance images with the dome camera, which is mounted at a fixed position but can rotate its shooting direction, and feeding them into the pen-entry detection deep neural network to obtain an entrance animal heat map;
Step five, superimposing the entrance animal heat maps of multiple frames within a time window using a superposition unit to obtain the animals' motion track, and adjusting the dome camera pose to shoot the inside of the fence once the track disappears;
Step six, capturing animal images of the area inside the fence and processing them with the pen-entry detection deep neural network to obtain an in-pen animal heat map;
Step seven, counting the peak points of the hot spots in the in-pen animal heat map to obtain the livestock pen-entry detection result;
Step eight, visualizing the building information model of the livestock breeding area with WebGIS technology and displaying the pen-entry detection result.
The following operations are performed prior to step one:
configuring the livestock animal detection deep neural network and a first post-processing unit, wherein the livestock animal detection deep neural network comprises an animal key-point detection encoder and an animal key-point detection decoder;
splitting the deep neural network inference task into an animal key-point detection encoder inference task and an animal key-point detection decoder inference task;
assigning the animal key-point detection encoder inference task to the bullet-camera end that captures the images, and assigning the animal key-point detection decoder inference task and the first post-processing unit task to different randomly selected cluster computing nodes;
taking the parameters required by the tasks in the bullet camera and the computing nodes as block body data, and connecting the blocks according to the operation order of the deep neural network and the first post-processing unit to generate a first private blockchain.
The livestock animal detection deep neural network takes livestock animal images of the large free-range area as its training sample set. The samples are feature-labeled by taking the center of each animal body as a key point and placing a Gaussian-blurred hot spot at each key-point position, and the network is trained with a mean square error loss function and stochastic gradient descent.
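As a concrete illustration of this labelling scheme (not part of the patent text), the NumPy sketch below renders one training label: every annotated body centre becomes a Gaussian hot spot on an otherwise empty heat map, which the network is then trained to reproduce under the mean square error loss. The resolution, the Gaussian sigma and the per-pixel maximum for overlapping animals are illustrative assumptions.

```python
import numpy as np

def gaussian_heatmap(height, width, centers, sigma=4.0):
    """Render one key-point label: a Gaussian hot spot at every body centre."""
    ys, xs = np.mgrid[0:height, 0:width]
    label = np.zeros((height, width), dtype=np.float32)
    for cx, cy in centers:
        blob = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        label = np.maximum(label, blob)  # overlapping animals keep the stronger response
    return label

# Example: two animals annotated at body centres (30, 40) and (90, 65).
label = gaussian_heatmap(128, 160, [(30, 40), (90, 65)])
```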
Step one specifically comprises:
executing the animal key-point detection encoder inference task on the bullet camera for the images it captures, and outputting a feature map;
executing the animal key-point detection decoder inference task on the feature map at the corresponding cluster computing node, and outputting an animal key-point heat map.
The first post-processing unit on the corresponding computing node executes the operations of step two and step three.
Before step four, the method further comprises:
the pen-entry detection deep neural network comprises a pen-entry detection encoder and a pen-entry detection decoder; the pen-entry detection encoder inference task is assigned to the dome-camera end that captures the images, and the pen-entry detection decoder inference task and the superposition unit task are assigned to different randomly selected cluster computing nodes;
the parameters required by each task in the dome camera and the computing nodes are taken as block body data, and the blocks are connected according to the operation order of the pen-entry detection deep neural network and the superposition unit to generate a second private blockchain.
The pen-entry detection deep neural network takes animal images of the small area near the fence and images inside the fence as its training sample set. The samples are feature-labeled by taking the center of each animal body as a key point and placing a Gaussian-blurred hot spot at each key-point position, and the network is trained with a mean square error loss function and stochastic gradient descent.
The dome camera pose adjustment in step five comprises the following steps:
The image-plane center corresponding to the camera optical center before rotation is the center point of the fence gate, and the image-plane center corresponding to the camera optical center after rotation is the center point of the area inside the fence. The directed distance from the camera optical center to the center of its image plane before rotation is taken as the vector $\vec{a}$, and the directed distance from the camera optical center to the center of its image plane after rotation is taken as the vector $\vec{b}$. The dome camera rotation angle $\theta$ is calculated as

$$\theta = \arccos\frac{\vec{a}\cdot\vec{b}}{|\vec{a}|\,|\vec{b}|}.$$

Let $\vec{c} = \vec{a}\times\vec{b}$; the direction of the dome camera rotation vector is the rotation axis $\vec{c}/|\vec{c}|$ and its modulus is $\theta$, giving the rotation vector $\vec{v} = \theta\,\vec{c}/|\vec{c}|$.

Calculating the rotation matrix: let the unit vector of the dome camera rotation vector be $r = [r_x\; r_y\; r_z]^{T}$ and the angle be $\theta$; the corresponding dome camera rotation matrix is

$$R = \cos\theta\, I + (1-\cos\theta)\, r r^{T} + \sin\theta \begin{bmatrix} 0 & -r_z & r_y \\ r_z & 0 & -r_x \\ -r_y & r_x & 0 \end{bmatrix},$$

wherein $I$ is the third-order identity matrix;
selecting a point in the world coordinate system, obtaining the corresponding point in the camera coordinate system through the extrinsic matrix, and calculating the focal length from the similarity of triangles using the corresponding point coordinates in the image-plane coordinate system;
the dome camera adjusts its pose according to the rotation matrix and focuses according to the focal length.
The randomly selected different cluster computing nodes are chosen as follows:
a random number sequence is generated from a random-number seed using an inverse congruential generator; the indexes of the generated random numbers are sorted by value to obtain a numerical index sequence, and the cloud host instances at the corresponding indexes in that sequence are taken in turn as the nodes that execute the tasks.
Data transmitted between the private blockchains is encrypted and decrypted with the RC5 encryption algorithm.
Compared with the prior art, the invention has the following beneficial effects:
1. The method monitors the pen-entry process through gun-ball cooperative image acquisition and counts the animals by adjusting the dome camera pose after all animals have entered the pen. The whole process, including deciding whether the animals have entered and adjusting the dome camera pose, is completed automatically without manual intervention, realizing intelligent pen-entry detection and improving detection efficiency.
2. The method adopts deep neural network technology trained on a large number of livestock animal image samples, giving better generalization and improving the stability of the method and the accuracy of the detection result.
3. According to the characteristics of the bullet camera and the dome camera, image acquisition is triggered through gun-ball linkage, which is more flexible and helps improve detection precision.
4. Because the images shot by the bullet camera and the dome camera differ, the invention trains the livestock animal detection deep neural network and the pen-entry detection deep neural network on different training samples, improving network detection precision.
5. The invention performs decentralized inference of the networks and the other tasks of the method on a cluster, improving parallelism.
6. The method configures a first and a second private blockchain, improving the security of each task module's parameters during network inference; because the private chains are generated dynamically according to the node assignment order, they are hard to crack, improving attack resistance; furthermore, the data transmitted between the blocks of the private chains is encrypted, preventing leakage.
Drawings
FIG. 1 is a diagram of a neural network architecture of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a livestock animal pen-entry detection method based on gun-ball linkage and BIM. FIG. 1 is a diagram of the neural network architecture of the present invention. The method is described below by way of a specific embodiment.
Example 1:
To realize the method, a building information model (BIM) of the livestock breeding area and its information exchange module must first be constructed; the model covers both the free-range and the penned breeding areas.
The BIM of the livestock breeding area and its information exchange module form an information processing and data exchange platform based on BIM. The BIM contains all the information required for three-dimensional modeling of the free-range and penned areas, such as their geographical extent, fence information, camera perception information, camera geographical positions and poses, and the livestock pen-entry detection results; for example, the fence-entrance coordinates, the intrinsic and extrinsic matrices of the fixed-pose bullet camera, the position and intrinsic matrix of the dome camera, and the coordinates, in the BIM three-dimensional model coordinate system, of a reference location shot by the camera at its fixed pose.
The main purpose of the invention is pen-entry detection in the free-range and penned breeding areas, realized with neural networks; the final output is the pen-entry detection result. The method uses a livestock animal detection deep neural network and a pen-entry detection deep neural network, which are described separately below.
The livestock animal detection deep neural network comprises an animal key-point detection encoder and an animal key-point detection decoder. It works as follows: the bullet camera, a camera with a fixed pose, captures images of the livestock in the large free-range area; these images are fed to the animal key-point detection encoder and decoder to obtain the animal key-point heat map Heatmap1. Specifically, the encoder encodes the images captured by the bullet camera and extracts features, and the decoder decodes the feature map output by the encoder to obtain Heatmap1.
A first post-processing unit is provided. After the livestock animal detection deep neural network, the first post-processing unit regresses the key-point coordinates from the hot spots in Heatmap1 with a soft-argmax function, projects them into the coordinate system of the BIM three-dimensional model, and, combined with the fence-entrance coordinates in the BIM information, calls the dome camera when the distance between an animal and the fence is smaller than a set threshold. Preferably, the threshold is set to 3 meters.
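The following NumPy sketch shows one way this post-processing could be realised; it is an illustration, not the patent's implementation. Soft-argmax turns a single hot spot into sub-pixel coordinates (with several animals, each hot-spot region would be cropped and processed separately), and the helper then applies the 3-metre rule after the coordinates have been projected into the BIM coordinate system (the projection itself is not shown). The temperature beta is an assumed value.

```python
import numpy as np

def soft_argmax(heatmap, beta=100.0):
    """Differentiable arg-max: expected (x, y) under a softmax over the heat map."""
    h, w = heatmap.shape
    weights = np.exp(beta * (heatmap - heatmap.max()))   # numerically stable softmax
    weights /= weights.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((weights * xs).sum()), float((weights * ys).sum())

def dome_camera_needed(animal_xy_bim, entrance_xy_bim, threshold_m=3.0):
    """Call the dome camera when an animal is within threshold_m of the fence entrance."""
    dx = animal_xy_bim[0] - entrance_xy_bim[0]
    dy = animal_xy_bim[1] - entrance_xy_bim[1]
    return bool(np.hypot(dx, dy) < threshold_m)
```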
The pen-entry detection deep neural network comprises a pen-entry detection encoder and a pen-entry detection decoder. It works as follows: the dome camera is mounted at a fixed position but can rotate its shooting direction; it first captures images at the fence entrance, and after processing by the pen-entry detection encoder and decoder, the entrance animal heat map Heatmap2 is obtained. Specifically, the encoder encodes the images captured by the dome camera and extracts features, and the decoder decodes the feature map output by the encoder to obtain Heatmap2.
A superposition unit is provided. The superposition unit superimposes the heat maps of multiple frames within a time window to obtain the animals' motion track, and adjusts the dome camera pose once the track disappears so that the dome camera rotates to shoot the inside of the fence. The implementer can set the time window size according to the deployment. The track is considered to have disappeared when no elongated, track-like trace remains in the superposition result.
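A minimal sketch of the superposition unit follows, under the assumption that remaining activity can be read directly off the accumulated heat map; the per-pixel maximum, the 0.5 activity threshold and the window handling are illustrative choices, not specified by the patent.

```python
import numpy as np

def track_visible(window_heatmaps, activity_threshold=0.5):
    """Superimpose the entrance heat maps of one time window and test for remaining activity."""
    stacked = np.stack(window_heatmaps, axis=0)   # (frames, H, W)
    superimposed = stacked.max(axis=0)            # keep the strongest response per pixel
    return bool((superimposed > activity_threshold).any())

# Once track_visible(...) returns False for the current window, the dome camera
# pose is adjusted so that it shoots the inside of the fence.
```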
After the superposition unit judges that the track has disappeared, the dome camera pose must be adjusted. The pose adjustment proceeds as follows. First, the directed distance from the camera optical center to the center of its image plane before rotation is taken as the vector $\vec{a}$, and the directed distance from the camera optical center to the center of its image plane after rotation is taken as the vector $\vec{b}$. It should be noted that the optical center may shift slightly during rotation, but this is negligible. The rotation angle $\theta$ is obtained as

$$\theta = \arccos\frac{\vec{a}\cdot\vec{b}}{|\vec{a}|\,|\vec{b}|}.$$

In the invention, the image-plane center corresponding to the optical center before rotation is the center point of the fence gate, and the image-plane center corresponding to the optical center after rotation is the center point of the area inside the fence. Then $\vec{c} = \vec{a}\times\vec{b}$ gives the rotation axis: the direction of the rotation vector is $\vec{c}/|\vec{c}|$, its modulus is $\theta$, and the rotation vector is $\vec{v} = \theta\,\vec{c}/|\vec{c}|$.

The rotation matrix $R$ is then obtained by the Rodrigues transformation. Let the unit vector of the rotation vector be $r = [r_x\; r_y\; r_z]^{T}$ and the angle be $\theta$; the corresponding rotation matrix is

$$R = \cos\theta\, I + (1-\cos\theta)\, r r^{T} + \sin\theta \begin{bmatrix} 0 & -r_z & r_y \\ r_z & 0 & -r_x \\ -r_y & r_x & 0 \end{bmatrix},$$

wherein $I$ is the third-order identity matrix.

For the gun-ball linked cameras of the invention, the translation matrix is a known quantity, so the camera extrinsic matrix is obtained from the rotation matrix and the translation matrix. In the intrinsic matrix

$$K = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

only the focal length $f$ is unknown. A point $P_w(X_w, Y_w, Z_w)$ is selected in the world coordinate system, its corresponding point $M(X_c, Y_c, Z_c)$ in the camera coordinate system is obtained through the extrinsic matrix, and with the corresponding image-plane point $M'(x, y)$ the focal length is calculated from the similarity of triangles:

$$\frac{f}{Z_c} = \frac{x}{X_c} = \frac{y}{Y_c}, \qquad f = \frac{x\, Z_c}{X_c} = \frac{y\, Z_c}{Y_c}.$$

A suitable focal length is set so that the shooting area covers the whole inner area of the fence. After the focal length is determined, the camera projection matrix $P = K[R\,|\,t]$ is obtained. The camera pose is adjusted according to the rotation matrix, and the dome camera focal length is adjusted according to $f$.
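The NumPy sketch below reproduces the two calculations just described: the Rodrigues rotation that takes the pre-rotation viewing direction onto the post-rotation one, and the focal length obtained from similar triangles under a pinhole model. The example vectors at the bottom are purely illustrative.

```python
import numpy as np

def dome_rotation_matrix(a, b):
    """Rotation matrix turning viewing direction a (before) onto b (after) via Rodrigues."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    axis = np.cross(a, b)
    if np.linalg.norm(axis) < 1e-12:              # directions already aligned
        return np.eye(3)
    r = axis / np.linalg.norm(axis)               # unit rotation axis
    K = np.array([[0, -r[2], r[1]],
                  [r[2], 0, -r[0]],
                  [-r[1], r[0], 0]])              # cross-product (skew-symmetric) matrix
    return np.cos(theta) * np.eye(3) + (1 - np.cos(theta)) * np.outer(r, r) + np.sin(theta) * K

def focal_length_from_point(camera_point, image_point):
    """Similar triangles under the pinhole model: f / Zc = x / Xc = y / Yc."""
    Xc, Yc, Zc = camera_point                     # reference point in camera coordinates (off-axis)
    x, y = image_point                            # the same point on the image plane
    return Zc * x / Xc if abs(Xc) > abs(Yc) else Zc * y / Yc   # use the better-conditioned axis

# Illustrative check: the matrix really maps the first direction onto the second.
a = np.array([0.0, 0.0, 1.0])                     # toward the fence gate centre
b = np.array([0.0, -0.5, 1.0])                    # toward the centre of the pen interior
R = dome_rotation_matrix(a, b)
assert np.allclose(R @ (a / np.linalg.norm(a)), b / np.linalg.norm(b))
```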
The dome camera then captures images of the area inside the fence, which are fed to the pen-entry detection encoder and decoder to obtain the in-pen animal heat map Heatmap3. A second post-processing unit counts the number of peak points among the hot spots of Heatmap3 to obtain the pen-entry result.
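A sketch of the second post-processing unit follows, assuming each animal contributes one Gaussian hot spot to Heatmap3, so that counting local maxima above a confidence threshold gives the number of animals. The 0.3 threshold and the 5-pixel neighbourhood are assumed values.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def count_heatmap_peaks(heatmap, threshold=0.3, neighborhood=5):
    """Count pixels that are both a local maximum and above the confidence threshold."""
    local_max = maximum_filter(heatmap, size=neighborhood) == heatmap
    peaks = local_max & (heatmap > threshold)
    return int(peaks.sum())
```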
It should be noted that the same encoder and decoder are used for the fence-entrance images and for the images inside the fence: the entrance images are processed into heat maps, and once the track obtained by superposition disappears, the dome camera adjusts its pose to shoot the inside of the fence.
The training of the two neural networks is explained next. The two networks have similar structures but different training sample sets. The livestock animal detection deep neural network uses livestock images of the large free-range area, in which the animals are small targets; the pen-entry detection deep neural network uses animal images of the small area near the fence and images inside the fence, in which the animals are large targets. Through training, the networks learn to detect the animals. The specific training method is: prepare the respective training data set for each network (free-range area images, or near-fence and in-pen images); feature-label the data by taking the center of each animal body as a key point and placing a Gaussian-blurred hot spot at each key-point position; and train with a mean square error loss function and stochastic gradient descent.
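The PyTorch sketch below illustrates that shared recipe at toy scale: a small encoder-decoder regresses the Gaussian key-point heat map under a mean square error loss, optimised with stochastic gradient descent. The architecture, learning rate and tensor shapes are illustrative assumptions; the patent fixes only the loss and the optimiser.

```python
import torch
import torch.nn as nn

class KeypointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                         # downsample and extract features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(                         # upsample back to a heat map
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = KeypointNet()
criterion = nn.MSELoss()                                      # mean square error loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)      # stochastic gradient descent

images = torch.rand(4, 3, 128, 160)                           # stand-in camera frames
labels = torch.rand(4, 1, 128, 160)                           # stand-in Gaussian heat-map labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```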
Because the hardware performance of the device end, i.e. the bullet camera and the dome camera, is limited, performing all of the method's computation on the devices would be very inefficient and could not meet real-time requirements, so a local server cluster executes part of the computation in this embodiment. When cluster computing is used, data can easily leak and security cannot be guaranteed; therefore a private blockchain is used, each module of the deep neural network serving as a block, and inference is performed in a decentralized way, gaining the benefits of distributed nodes, encryption and high disaster tolerance.
First, the livestock animal detection deep neural network and the pen-entry detection deep neural network are split into modules by function: an animal key-point detection encoder, an animal key-point detection decoder, a pen-entry detection encoder and a pen-entry detection decoder.
The animal key-point detection encoder inference task is assigned to the bullet-camera end; server nodes are then randomly selected from the server cluster, and the animal key-point detection decoder inference task and the first post-processing unit task are assigned to them. The parameters required by the encoder form the block body data of a first block located in the bullet camera; the parameters required by the decoder form the block body data of a second block on the corresponding server node, which is linked to the first block; the parameters required by the first post-processing unit form a third block on its server node, which is linked to the second block. This forms the first private blockchain. For each frame captured by the bullet camera, a corresponding first private blockchain is generated and exists for the duration of the inference, which improves the security of the livestock animal detection deep neural network. Moreover, because the server nodes are randomly selected, the first private blockchain changes dynamically with each inference request, further improving security.
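The sketch below shows one way such a private chain could be assembled: each task's parameters become one block's body, and the blocks are linked in inference order by hashing their predecessor. The SHA-256 hash linking and the field names are assumptions made for illustration; the patent fixes only the block contents and their order.

```python
import hashlib
import json

def make_block(body, previous_hash):
    """One block: task parameters as body, linked to its predecessor by hash."""
    block = {"body": body, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def build_private_chain(task_bodies):
    """task_bodies are given in inference order: encoder, decoder, post-processing."""
    chain, previous_hash = [], "0" * 64
    for body in task_bodies:
        block = make_block(body, previous_hash)
        chain.append(block)
        previous_hash = block["hash"]
    return chain

first_chain = build_private_chain([
    {"task": "animal key-point detection encoder", "node": "bullet-camera"},
    {"task": "animal key-point detection decoder", "node": "server-7"},   # randomly selected
    {"task": "first post-processing unit", "node": "server-2"},           # randomly selected
])
```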
For the pen-entry detection deep neural network, the pen-entry detection encoder inference task is assigned to the dome-camera end; server nodes are then randomly selected from the server cluster, and the pen-entry detection decoder task and the superposition unit task are assigned to them. The second private blockchain is generated in the same way as the first: the data required by each task serve as block body data, and the blocks are linked according to the neural network inference order and the order between the network and the superposition unit. Unlike the other functional units, which can switch nodes, generate their private-chain blocks and compute for every frame, the superposition unit processes data within a time window, so the window length can be used as the node update interval.
The selection of server nodes is randomized. One way is to enumerate the available nodes, shuffle the task numbering with a random seed, and dispatch each task to the node with the corresponding number. Specifically: first generate a new random-number seed, which can be chosen by many known methods such as from the IP address or a global time, not detailed here. After the seed is chosen, random numbers are generated with an inverse congruential generator: let the seed be $X_0$ and iterate $X_{n+1} = (a\,X_n^{-1} + b) \bmod c$, where $X_n^{-1}$ is the modular inverse of $X_n$ modulo $c$, repeating to generate multiple random numbers; at least as many random numbers as available nodes should be generated. Then the random numbers are matched to tasks (for example task 1 to $X_0$, task 2 to $X_1$), a numerical index is generated for each task with a sorting algorithm, and each task is assigned to the node with the corresponding index.
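A self-contained sketch of that selection step follows: the inverse congruential generator produces one random number per available node, and arg-sorting those numbers yields the order in which nodes are assigned to tasks. The prime modulus (so that modular inverses exist) and the a and b constants are illustrative; the seed would come from e.g. the IP address or a global time as described above.

```python
def inverse_congruential_sequence(seed, count, a=141, b=28, c=1021):
    """X(n+1) = (a * X(n)^-1 + b) mod c, with 0^-1 taken as 0; c is prime."""
    xs, x = [], seed % c
    for _ in range(count):
        inverse = pow(x, -1, c) if x != 0 else 0   # modular inverse (Python 3.8+); exists since c is prime
        x = (a * inverse + b) % c
        xs.append(x)
    return xs

def assign_tasks_to_nodes(num_nodes, seed):
    """Position i of the returned list is the node index assigned to task i."""
    randoms = inverse_congruential_sequence(seed, num_nodes)
    return sorted(range(num_nodes), key=lambda i: randoms[i])

print(assign_tasks_to_nodes(num_nodes=5, seed=12345))   # a permutation of 0..4, fixed by the seed
```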
After the private blockchains are configured, inference proceeds along the chain logic of the networks and the other units, referred to as chained inference for short. The whole chained inference is completed locally through joint inference of the device end and the server cluster. After the first private blockchain is configured, animal key-point detection encoding is completed at the bullet-camera end, i.e. the camera captures images and the encoder extracts features. Animal key-point detection decoding is completed on the corresponding local server, i.e. the decoding operation generates the animal key-point heat map. In the first post-processing unit, the coordinates of the hot spots in the key-point heat map are computed by soft-argmax and projected into the BIM model; combined with the BIM information, the dome camera is called and its pose adjusted. Once the tasks on the nodes of the first private blockchain are completed, the chain can be released. The second private blockchain is then configured; after configuration, pen-entry detection encoding is completed at the dome-camera end, i.e. the camera captures images and the encoder extracts features. Pen-entry detection decoding is completed on the corresponding local server, i.e. the decoding operation generates the entrance animal heat map. The superposition unit runs on the local server, superimposes the heat maps to obtain the track, and adjusts the dome camera pose once the track disappears. At this point the tasks on the nodes of the second private blockchain are completed and the chain can be released. Finally, the livestock animals inside the fence are counted. Encoding, decoding and post-processing of the in-pen images can be completed on the dome camera or on the client: the captured images are encoded by the pen-entry detection encoder to extract features, and the decoding operation generates the in-pen animal heat map. The client is the node that receives and displays the result. The second post-processing unit performs peak-point counting on the in-pen animal heat map. Preferably, the client is configured with the pen-entry detection decoder and the second post-processing unit: it receives the features extracted by the dome camera after encoding the in-pen images, then decodes and post-processes them to obtain the pen-entry detection result. Because the encoded features are hard to interpret, the security of the data transmitted between the dome camera and the client is ensured.
To further improve the security of the method, the data transmitted between nodes, i.e. between the blocks of the first and second private blockchains, is encrypted with the RC5 encryption algorithm. The specific method is as follows. Creating the key group: the RC5 algorithm uses $2r+2$ key-dependent 32-bit words for encryption, where $r$ denotes the number of encryption rounds. The key group is created by first copying the key bytes into an array L of 32-bit words (noting whether the processor is little-endian or big-endian), padding the last word with zeros if necessary; the array S is then initialized with a linear congruential generator, and finally L and S are mixed. Encryption: after the key group is created, the plaintext block is split into two 32-bit words A and B (for example, with a little-endian processor and $w = 32$, the first plaintext byte goes into the lowest byte of A, the fourth into the highest byte of A, the fifth into the lowest byte of B, and so on); the words are then combined by cyclic left shifts and additions with the key, and the output ciphertext is the content of registers A and B. Decryption: the ciphertext block is split into two words A and B (stored the same way as for encryption), and the corresponding cyclic right shifts and subtractions of the key are applied. All data between nodes is transmitted under this encryption, and each block decrypts the data received from the previous block by reversing the encryption steps.
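For concreteness, here is a hedged, self-contained RC5-32 sketch covering the three stages just described: key-group creation, encryption of one 64-bit block by cyclic left shifts and key additions, and the inverse decryption. The 12-round setting and the example key are illustrative only; a real deployment would rely on a vetted cryptographic library rather than hand-rolled code.

```python
M32 = 0xFFFFFFFF
P32, Q32 = 0xB7E15163, 0x9E3779B9            # RC5 magic constants for 32-bit words

def rol(x, s):
    s %= 32
    return x if s == 0 else ((x << s) | (x >> (32 - s))) & M32

def ror(x, s):
    s %= 32
    return x if s == 0 else ((x >> s) | (x << (32 - s))) & M32

def rc5_key_schedule(key: bytes, rounds: int):
    """Create the 2r + 2 key-dependent words: copy key into L, init S, then mix L and S."""
    c = max(1, (len(key) + 3) // 4)
    L = [int.from_bytes(key[i:i + 4].ljust(4, b"\0"), "little") for i in range(0, 4 * c, 4)]
    t = 2 * (rounds + 1)
    S = [(P32 + i * Q32) & M32 for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):
        A = S[i] = rol((S[i] + A + B) & M32, 3)
        B = L[j] = rol((L[j] + A + B) & M32, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def rc5_encrypt_block(A, B, S, rounds):
    A, B = (A + S[0]) & M32, (B + S[1]) & M32
    for i in range(1, rounds + 1):
        A = (rol(A ^ B, B) + S[2 * i]) & M32
        B = (rol(B ^ A, A) + S[2 * i + 1]) & M32
    return A, B

def rc5_decrypt_block(A, B, S, rounds):
    for i in range(rounds, 0, -1):
        B = ror((B - S[2 * i + 1]) & M32, A) ^ A
        A = ror((A - S[2 * i]) & M32, B) ^ B
    return (A - S[0]) & M32, (B - S[1]) & M32

S = rc5_key_schedule(b"example-session-key!", rounds=12)
ciphertext = rc5_encrypt_block(0x01234567, 0x89ABCDEF, S, rounds=12)
assert rc5_decrypt_block(*ciphertext, S, rounds=12) == (0x01234567, 0x89ABCDEF)
```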
The implementer should decide which key-distribution method to use and when and how to update the key according to the actual situation; many methods are known, such as timed updates and manual updates.
To present the information output by the networks intuitively, the BIM of the free-range and penned cattle and sheep breeding areas is visualized through WebGIS. The target coordinate points obtained from two-dimensional image processing are projected into the BIM three-dimensional building model and, combined with the camera coordinate information, the information can be examined at both the overall and the local scale. Meanwhile, through the WebGIS visualization, a supervisor can search, query and analyze on the client Web page, so that breeders can easily learn the pen-entry situation of the cattle and sheep in the fenced area and take corresponding measures.
The above embodiments are merely preferred embodiments of the present invention, which should not be construed as limiting the present invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A livestock animal pen-entry detection method based on gun-ball linkage and BIM, characterized by comprising the following steps:
step one, performing animal detection on images of the large free-range area captured by a bullet camera using a livestock animal detection deep neural network to obtain an animal key-point heat map;
step two, performing coordinate regression on the hot spots in the animal key-point heat map to obtain the animal key-point coordinates;
step three, projecting the animal key-point coordinates into the coordinate system of the building information model of the livestock breeding area and, using the fence-entrance coordinates in that model, calling a dome camera when the distance between an animal and the fence is smaller than a set threshold;
step four, capturing fence-entrance images with the dome camera, which is mounted at a fixed position but can rotate its shooting direction, and feeding them into a pen-entry detection deep neural network to obtain an entrance animal heat map;
step five, superimposing the entrance animal heat maps of multiple frames within a time window using a superposition unit to obtain the animals' motion track, and adjusting the dome camera pose to shoot the inside of the fence once the track disappears;
step six, capturing animal images of the area inside the fence and processing them with the pen-entry detection deep neural network to obtain an in-pen animal heat map;
step seven, counting the peak points of the hot spots in the in-pen animal heat map to obtain the livestock pen-entry detection result;
step eight, visualizing the building information model of the livestock breeding area with WebGIS technology and displaying the pen-entry detection result.
2. The method of claim 1, further comprising performing the following operations prior to step one:
configuring the livestock animal detection deep neural network and a first post-processing unit, wherein the livestock animal detection deep neural network comprises an animal key-point detection encoder and an animal key-point detection decoder;
splitting the deep neural network inference task into an animal key-point detection encoder inference task and an animal key-point detection decoder inference task;
assigning the animal key-point detection encoder inference task to the bullet-camera end that captures the images, and assigning the animal key-point detection decoder inference task and the first post-processing unit task to different randomly selected cluster computing nodes;
taking the parameters required by the tasks in the bullet camera and the computing nodes as block body data, and connecting the blocks according to the operation order of the deep neural network and the first post-processing unit to generate a first private blockchain.
3. The method of claim 2, wherein the livestock animal detection deep neural network takes livestock animal images of the large free-range area as its training sample set, the samples are feature-labeled by taking the center of each animal body as a key point and placing a Gaussian-blurred hot spot at each key-point position, and training is performed with a mean square error loss function and stochastic gradient descent.
4. The method of claim 2, wherein step one specifically comprises:
executing the animal key-point detection encoder inference task on the bullet camera for the images it captures, and outputting a feature map;
executing the animal key-point detection decoder inference task on the feature map at the corresponding cluster computing node, and outputting an animal key-point heat map.
5. The method of claim 2, wherein the first post-processing unit on the respective compute node performs the operations of step two and step three.
6. The method of claim 1, further comprising, prior to step four:
the pen-entry detection deep neural network comprises a pen-entry detection encoder and a pen-entry detection decoder; the pen-entry detection encoder inference task is assigned to the dome-camera end that captures the images, and the pen-entry detection decoder inference task and the superposition unit task are assigned to different randomly selected cluster computing nodes;
the parameters required by each task in the dome camera and the computing nodes are taken as block body data, and the blocks are connected according to the operation order of the pen-entry detection deep neural network and the superposition unit to generate a second private blockchain.
7. The method of claim 6, wherein the pen-entry detection deep neural network takes animal images of the small area near the fence and images inside the fence as its training sample set, the samples are feature-labeled by taking the center of each animal body as a key point and placing a Gaussian-blurred hot spot at each key-point position, and training is performed with a mean square error loss function and stochastic gradient descent.
8. The method of claim 1, wherein adjusting the dome camera pose in step five comprises:
the image-plane center corresponding to the camera optical center before rotation is the center point of the fence gate, and the image-plane center corresponding to the camera optical center after rotation is the center point of the area inside the fence; the directed distance from the camera optical center to the center of its image plane before rotation is taken as the vector $\vec{a}$, and the directed distance from the camera optical center to the center of its image plane after rotation is taken as the vector $\vec{b}$; the dome camera rotation angle $\theta$ is calculated as

$$\theta = \arccos\frac{\vec{a}\cdot\vec{b}}{|\vec{a}|\,|\vec{b}|};$$

let $\vec{c} = \vec{a}\times\vec{b}$; the direction of the dome camera rotation vector is the rotation axis $\vec{c}/|\vec{c}|$ and its modulus is $\theta$, giving the rotation vector $\vec{v} = \theta\,\vec{c}/|\vec{c}|$;
calculating the rotation matrix: let the unit vector of the dome camera rotation vector be $r = [r_x\; r_y\; r_z]^{T}$ and the angle be $\theta$; the corresponding dome camera rotation matrix is

$$R = \cos\theta\, I + (1-\cos\theta)\, r r^{T} + \sin\theta \begin{bmatrix} 0 & -r_z & r_y \\ r_z & 0 & -r_x \\ -r_y & r_x & 0 \end{bmatrix},$$

wherein $I$ is the third-order identity matrix;
selecting a point in the world coordinate system, obtaining the corresponding point in the camera coordinate system through the extrinsic matrix, and calculating the focal length from the similarity of triangles using the corresponding point coordinates in the image-plane coordinate system;
the dome camera adjusts its pose according to the rotation matrix and focuses according to the focal length.
9. The method of claim 2 or 6, wherein the randomly selected different cluster computing nodes are selected by:
generating a random number sequence from a random-number seed using an inverse congruential generator, sorting the indexes of the generated random numbers by value to obtain a numerical index sequence, and taking the cloud host instances at the corresponding indexes in that sequence in turn as the nodes that execute the tasks.
10. The method of claim 2 or 6, wherein the data transmitted between the private blockchains is encrypted and decrypted using the RC5 encryption algorithm.
CN202010458721.2A 2020-05-27 2020-05-27 Livestock animal pen-entry detection method based on gun-ball linkage and BIM Withdrawn CN111612645A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010458721.2A CN111612645A (en) 2020-05-27 2020-05-27 Livestock animal pen-entry detection method based on gun-ball linkage and BIM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010458721.2A CN111612645A (en) 2020-05-27 2020-05-27 Livestock animal pen-entry detection method based on gun-ball linkage and BIM

Publications (1)

Publication Number Publication Date
CN111612645A true CN111612645A (en) 2020-09-01

Family

ID=72198189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010458721.2A CN111612645A (en) Withdrawn Livestock animal pen-entry detection method based on gun-ball linkage and BIM

Country Status (1)

Country Link
CN (1) CN111612645A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170568A (en) * 2021-12-03 2022-03-11 成都鼎安华智慧物联网股份有限公司 Personnel density detection method and system based on deep learning
CN114170568B (en) * 2021-12-03 2024-05-31 成都鼎安华智慧物联网股份有限公司 Personnel density detection method and detection system based on deep learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200901)