CN110296660B - Method and device for detecting livestock body size - Google Patents

Method and device for detecting livestock body size

Info

Publication number
CN110296660B
CN110296660B (application CN201910560495.6A)
Authority
CN
China
Prior art keywords
livestock
network
size
animal
data
Prior art date
Legal status
Active
Application number
CN201910560495.6A
Other languages
Chinese (zh)
Other versions
CN110296660A (en)
Inventor
陈奕名
Current Assignee
Jingdong Shuke Haiyi Information Technology Co Ltd
Jingdong Technology Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN201910560495.6A
Publication of CN110296660A
Application granted
Publication of CN110296660B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01G - WEIGHING
    • G01G17/00 - Apparatus for or methods of weighing material of special form or property
    • G01G17/08 - Apparatus for or methods of weighing material of special form or property for weighing livestock

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method and a device for detecting the body size of livestock. The livestock body size detection method comprises the following steps: acquiring an overhead image of the livestock; inputting the overhead image into a trained body size detection model, wherein the body size detection model comprises a plurality of serially connected neural networks whose input pictures have the same size; and outputting the body size data of the livestock. The livestock body size detection method can improve the accuracy of livestock body size detection.

Description

Method and device for detecting livestock body size
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to a livestock body size detection method and device.
Background
With the development of intelligent breeding technology, body size detection has become an indispensable part of the production process. In practical production, the body dimensions of live animals, such as body length, body width, body height and leg length, can be measured in a non-contact manner through livestock body size detection technology; from the measured data, quantities such as meat yield, protein content and lean meat content can be estimated, the growth of the livestock can be monitored, and the production plan can be adjusted accordingly.
In the related art, livestock body size data are obtained mainly by extracting the livestock contour from an image and selecting body size measurement points with the help of relative parameters. However, body size detection based on contour analysis in images assumes an ideal camera angle and constant illumination; changes in camera angle, illumination or animal posture increase the measurement error, and the foreground cannot be reliably separated from the background when the animal's color is close to the background color. In addition, such methods are usually deployed as a chain of traditional algorithms connected in series, so the recognition process is complex, the fault tolerance is low and the accuracy is limited.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a pig body size detection algorithm based on an improved MTCNN for vision-based weight estimation on farms. It effectively improves the accuracy of vision-based body size detection and solves problems of the related art such as the inability to adapt to changing lighting conditions and to accurately separate foreground from background when their colors are close.
According to a first aspect of the present disclosure, there is provided a livestock body size detection method, comprising:
acquiring an overhead image of the livestock;
inputting the overhead image into a trained body size detection model, wherein the body size detection model comprises a plurality of serially connected neural networks and the input pictures of the neural networks have the same size;
and outputting the body size data of the livestock.
In an exemplary embodiment of the present disclosure, further comprising:
outputting the body weight of the livestock according to the body size data.
In an exemplary embodiment of the present disclosure, the body size data comprise the shoulder width, hip width and body length of the animal.
In an exemplary embodiment of the present disclosure, the plurality of serially connected neural networks includes a first network, a second network, and a third network connected in series, and the number of regression key points of the first network, the second network, and the third network is 6.
In an exemplary embodiment of the present disclosure, the input picture sizes of the first, second and third networks are all 160 × 160; the convolution kernel sizes of the first and second networks include 5 × 5, the pooling parameters include 4 × 4, and the pooling step size includes 4; the convolution kernel size and pooling parameter of the third network are both 3 × 3, and the pooling step size is 2.
In an exemplary embodiment of the disclosure, the livestock is one or more animals located in the center of the overhead image, and the body size data comprise body size data for each of the animals.
In an exemplary embodiment of the present disclosure, the livestock includes pigs, cattle and sheep.
According to a second aspect of the present disclosure, there is provided a livestock body size detection device, comprising:
a camera for acquiring an overhead image of the livestock;
a processor coupled to the camera and configured to obtain the overhead image; input the overhead image into a trained body size detection model, wherein the body size detection model comprises a plurality of serially connected neural networks and the input pictures of the neural networks have the same size; and output the body size data of the livestock.
According to a third aspect of the present disclosure, there is provided a livestock body scale detecting apparatus comprising: a memory; and a processor coupled to the memory, the processor configured to perform the method of any of the above based on instructions stored in the memory.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having a program stored thereon, the program, when executed by a processor, implementing the livestock body size detection method according to any one of the above.
According to the body size detection method provided by embodiments of the present disclosure, the overhead image of the livestock is recognized by a body size detection model built from multiple serially connected neural networks with the same input size, so that the body size of the livestock can be detected accurately under various illumination conditions and animal postures, and the foreground and background can be distinguished more accurately when the animal's color is close to the background color.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 is a schematic flow chart of a livestock body size detection method in an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of a body size detection model in an exemplary embodiment of the present disclosure.
Fig. 3A to 3C are schematic structural diagrams of the first to third networks, respectively, in an exemplary embodiment of the present disclosure.
Fig. 4A is a schematic diagram of a detection box in an embodiment of the present disclosure.
Fig. 4B is a schematic diagram of key points of the body size detection model in an embodiment of the present disclosure.
Fig. 5 is a block diagram of a livestock body size detection device in an exemplary embodiment of the present disclosure.
FIG. 6 is a block diagram of an electronic device in an exemplary embodiment of the present disclosure.
FIG. 7 is a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Further, the drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The following detailed description of exemplary embodiments of the disclosure refers to the accompanying drawings.
Fig. 1 schematically shows a flow chart of a livestock body size detection method in an exemplary embodiment of the present disclosure. Referring to fig. 1, the livestock body size detection method 100 may include:
step S1, acquiring an overhead view image of the livestock;
step S2, inputting the overhead view image into a trained body size detection model, wherein the body size detection model comprises a plurality of neural networks connected in series, and the sizes of input pictures of the neural networks are the same;
and step S3, outputting the body size data of the livestock.
In the disclosed embodiment, the livestock can be pigs, cattle and sheep, and the body size data can be shoulder width, hip width and body length of the livestock. It is understood that the method 100 for detecting body size of livestock provided by the present disclosure can be used for detecting body sizes of various types of livestock or animals, and the detected body size data can also include more types, and those skilled in the art can set the method according to actual needs.
Fig. 2 is a schematic structural diagram of the body size detection model.
Referring to fig. 2, in an embodiment of the present disclosure, the body size detection model 11 is based on an MTCNN and may include a first network P-Net, a second network R-Net and a third network O-Net connected in series. P-Net analyzes the input overhead image 12 to obtain candidate boxes, R-Net removes candidate boxes that do not contain livestock, O-Net outputs the position of the livestock box and the key points, and finally the body size data 13 of the livestock are output.
Each of the first network P-Net to the third network O-Net outputs three groups of feature values: classification data, box regression data and key point regression data. The second network R-Net and the third network O-Net take as input the feature values output by the first network P-Net and the second network R-Net, respectively, and adjust the input image according to these feature values and the preset input picture size.
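For readers who prefer code, the data flow of this cascade can be sketched as follows in Python. This is an illustrative sketch only, not the implementation from the disclosure: pnet, rnet, onet and nms are hypothetical callables standing in for the three trained networks and the non-maximum suppression step described later, and boxes and scores are assumed to be NumPy arrays.

```python
def detect_body_size(image, pnet, rnet, onet, nms):
    """Sketch of the three-stage cascade data flow described above.

    pnet proposes candidate boxes from the image pyramid, rnet re-scores
    candidate crops so boxes without an animal can be dropped, onet outputs
    the refined boxes plus the six body-size key points, and nms returns the
    indices of the boxes kept after non-maximum suppression.
    """
    # Stage 1: candidate boxes from the image pyramid.
    boxes, scores = pnet(image)
    keep = nms(boxes, scores)
    boxes, scores = boxes[keep], scores[keep]

    # Stage 2: re-score candidates and drop boxes that contain no animal.
    boxes, scores = rnet(image, boxes)
    confident = scores > 0.8          # confidence threshold (assumption)
    boxes, scores = boxes[confident], scores[confident]
    keep = nms(boxes, scores)
    boxes, scores = boxes[keep], scores[keep]

    # Stage 3: final box positions and the six key points per animal.
    final_boxes, keypoints = onet(image, boxes)
    return final_boxes, keypoints
```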
Fig. 3A to 3C are schematic structural diagrams of the first network to the third network, respectively.
Referring to fig. 3A to 3C, the targets handled by the livestock body size detection method provided by the present disclosure do not vary greatly in size. Therefore, in the embodiment of the present disclosure, the input picture sizes of the first to third networks are all set to the same value, which reduces the amount of computation, increases the computation speed, and improves search efficiency and accuracy.
In addition, unlike the traditional MTCNN, which is mainly applied to face recognition, animal body size detection with the small input picture size of the traditional MTCNN would produce more false-alarm boxes and waste computing capacity. Taking pig body size detection as an example, the input picture sizes of the first to third networks can therefore be set to 160 × 160 to ensure that the position of the pig can be located quickly.
Referring to fig. 3A, the input data source of the MTCNN is an image pyramid. The image pyramid is formed by scaling the picture to different sizes and stacking the resized copies like a pyramid, so that one picture is in effect turned into several pictures. After the overhead image is processed into an image pyramid, each picture in the pyramid is converted into a 160 × 160 × 3 input matrix according to the three color channels (R, G, B) and fed into the first network P-Net.
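A minimal sketch of building such an image pyramid with OpenCV is shown below. The scale factor of 0.709 and the input normalization are common MTCNN conventions assumed here for illustration; the patent only states that the picture is scaled to different sizes and that each pyramid picture is converted into a 160 × 160 × 3 matrix.

```python
import cv2
import numpy as np

def image_pyramid(img, min_size=160, scale_factor=0.709):
    """Build an image pyramid from a top-view image (sketch).

    scale_factor=0.709 is a common MTCNN choice and an assumption here.
    """
    levels = []
    h, w = img.shape[:2]
    scale = 1.0
    while min(h, w) * scale >= min_size:
        resized = cv2.resize(img, (int(w * scale), int(h * scale)))
        levels.append((scale, resized))
        scale *= scale_factor
    return levels

def to_pnet_input(level_img):
    """Convert one pyramid level into a 160 x 160 x 3 float matrix, as described above."""
    patch = cv2.resize(level_img, (160, 160)).astype(np.float32)
    return (patch - 127.5) / 128.0   # typical MTCNN normalization (assumption)

# usage:
# img = cv2.imread("pig_topview.jpg")
# inputs = [to_pnet_input(lv) for _, lv in image_pyramid(img)]
```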
Following the MTCNN design, P-Net outputs three groups of feature values for the overhead image through five groups of operations: pig classification, box regression and key point regression. To improve computational efficiency for livestock body size detection, the five groups of operation parameters of P-Net are adjusted in the embodiment of the present disclosure: the size of one or more convolution kernels is changed from the conventional 3 × 3 to 5 × 5, the pooling parameter is changed from the conventional 3 × 3 to 4 × 4, and the pooling step size is set to 4, so as to increase the detection speed as much as possible while preserving detection accuracy. The adjusted parameters can be any one or more of the five groups of operation parameters, and those skilled in the art can set them according to the type of livestock being detected.
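A PyTorch sketch of a P-Net-style stage with these adjusted parameters might look as follows. The channel counts and exact layer layout are illustrative assumptions; only the 160 × 160 input, the 5 × 5 kernels, the 4 × 4 pooling with stride 4 and the three output heads (2, 4 and 12 channels, matching the feature values described below) come from the description.

```python
import torch
import torch.nn as nn

class PNetLike(nn.Module):
    """Sketch of a P-Net-style stage with the adjusted parameters described above.

    Channel counts and the layer layout are illustrative assumptions; the patent
    fixes only the 160 x 160 input, 5 x 5 kernels, 4 x 4 pooling with stride 4,
    and the three output heads (2 / 4 / 12 channels).
    """
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.PReLU(),     # 160 -> 156
            nn.MaxPool2d(kernel_size=4, stride=4),           # 156 -> 39
            nn.Conv2d(16, 32, kernel_size=5), nn.PReLU(),    # 39 -> 35
            nn.MaxPool2d(kernel_size=4, stride=4),           # 35 -> 8
            nn.Conv2d(32, 64, kernel_size=5), nn.PReLU(),    # 8 -> 4
            nn.Conv2d(64, 128, kernel_size=4), nn.PReLU(),   # 4 -> 1
        )
        self.cls_head = nn.Conv2d(128, 2, kernel_size=1)   # background / foreground
        self.box_head = nn.Conv2d(128, 4, kernel_size=1)   # x1, y1, x2, y2
        self.kpt_head = nn.Conv2d(128, 12, kernel_size=1)  # 6 key points x (x, y)

    def forward(self, x):
        feat = self.backbone(x)
        return self.cls_head(feat), self.box_head(feat), self.kpt_head(feat)

# quick shape check on a 160 x 160 input:
# cls, box, kpt = PNetLike()(torch.zeros(1, 3, 160, 160))
# cls.shape, box.shape, kpt.shape -> (1, 2, 1, 1), (1, 4, 1, 1), (1, 12, 1, 1)
```

Running the commented shape check confirms that a 160 × 160 input yields 1 × 1 spatial outputs, consistent with the matrix groups described below.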
Of the three feature values output by P-Net, the pig classification is a 1 × 2 matrix group, where the 1 × 1 spatial size corresponds to the 160 × 160 input image size; with a 320 × 320 input, the pig classification matrix group adaptively switches to 4 × 2. The value 2 corresponds to the recognition categories, for example "background" and "foreground" in a binarized representation. The box regression feature value is a 1 × 4 matrix group, where 1 × 1 again corresponds to the 160 × 160 input image size, and 4 represents the abscissa and ordinate of the top-left corner and the abscissa and ordinate of the bottom-right corner of the detection box.
The key point regression feature value is a 1 × 12 matrix group. Unlike the conventional MTCNN used for face recognition, which detects five key points, in one embodiment of the present disclosure the body size data are the shoulder width, hip width and body length of the livestock, so the number of regression key points is set to 6: left shoulder, right shoulder, left hip, right hip, head and tail root. The "12" in the key point regression feature value therefore covers the abscissa and ordinate of these six points, 12 values in total.
Since one picture is divided into many small pyramid pictures, Non-Maximum Suppression (NMS) must be applied once to the whole set of processed results to produce the corresponding final result. Specifically, the feature values output by P-Net are processed by bounding box regression and non-maximum suppression to form the input parameters of R-Net.
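A compact NumPy sketch of the non-maximum suppression step is given below; the box format (x1, y1, x2, y2 rows with a separate score array) and the IoU threshold of 0.7 are assumptions for illustration.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.7):
    """Greedy NMS over boxes given as (x1, y1, x2, y2) rows (sketch).

    Keeps the highest-scoring box, removes boxes that overlap it by more than
    iou_threshold, then repeats on the remainder; returns the kept indices.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]
    return keep
```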
In fig. 3B, the second network R-Net performs four groups of operations on the input image and outputs three groups of feature values. The input image size of R-Net is also 160 × 160; the input image is obtained by a resize (size transformation) of the candidate box region determined from the input parameters and the original input image. Each group of operation parameters in R-Net can be adjusted to a convolution kernel size of 5 × 5, a pooling parameter of 4 × 4 and a pooling step size of 4. As with P-Net, the number of adjusted operation parameters may differ depending on the animal.
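The resize step that prepares the R-Net (and likewise the O-Net) input can be sketched as follows; clipping the candidate boxes to the image border and the float conversion are assumptions, not details from the disclosure.

```python
import cv2
import numpy as np

def crop_and_resize(img, boxes, size=160):
    """Crop each candidate box from the original image and resize it to
    size x size, as described above (sketch)."""
    h, w = img.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes.astype(int):
        # clip the box to the image border (assumption)
        x1, y1 = max(0, x1), max(0, y1)
        x2, y2 = min(w, x2), min(h, y2)
        if x2 <= x1 or y2 <= y1:
            continue
        patch = cv2.resize(img[y1:y2, x1:x2], (size, size))
        crops.append(patch.astype(np.float32))
    return np.stack(crops) if crops else np.empty((0, size, size, 3), np.float32)
```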
Likewise, the feature values output by R-Net are processed by bounding box regression and non-maximum suppression to form the input parameters of O-Net.
In fig. 3C, the third network O-Net performs five groups of operations on the resized input image and outputs three groups of feature values. The input image is formed in the same way as for R-Net. The final body size data are extracted after the feature values output by O-Net are processed by bounding box regression and non-maximum suppression.
The body size detection model 11 may be trained step by step: P-Net is trained first, then R-Net is trained with the output data of the trained P-Net, and finally O-Net is trained with the output data of the trained R-Net, which improves the ability of R-Net and O-Net to correct the input data. When generating the R-Net and O-Net training data, the search box can be initialized as a rectangle (with an aspect ratio depending on the type of livestock being detected) to match the body proportions of the livestock.
Fig. 4A and 4B are a schematic diagram of a search box and a schematic diagram of a keypoint, respectively, in an embodiment of the present disclosure.
Referring to fig. 4A, when the detected livestock is a pig, the search box is set to a rectangle with an aspect ratio of 2:1.
Referring to fig. 4B, six points are set as key points and used as the basis of the body size data: the tail root A, the head B, the left shoulder C, the right shoulder D, the left hip E and the right hip F.
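The three body size values can then be taken as distances between the corresponding key-point pairs. The sketch below works in pixel coordinates and assumes a separately calibrated pixels-per-centimeter factor, which the patent does not specify.

```python
import numpy as np

def body_size_from_keypoints(kpts, px_per_cm=1.0):
    """Derive shoulder width, hip width and body length from the six key points.

    kpts: dict of (x, y) pixel coordinates for 'tail_root' (A), 'head' (B),
          'left_shoulder' (C), 'right_shoulder' (D), 'left_hip' (E) and
          'right_hip' (F), as output by the third network.
    px_per_cm: camera calibration factor (assumption; not given in the patent).
    """
    def dist(p, q):
        return float(np.hypot(p[0] - q[0], p[1] - q[1])) / px_per_cm

    return {
        "shoulder_width": dist(kpts["left_shoulder"], kpts["right_shoulder"]),
        "hip_width": dist(kpts["left_hip"], kpts["right_hip"]),
        "body_length": dist(kpts["head"], kpts["tail_root"]),
    }
```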
After the body size data are obtained, an estimated body weight of the livestock can further be output according to the relation between the body size data and the weight per unit volume of the livestock.
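One simple way to turn the body size data into a weight estimate is a volume-proxy relation of the kind alluded to above. The sketch below uses the body length times the squared mean of shoulder and hip width as a rough volume proxy; neither the formula nor any coefficient value comes from the patent, and the coefficient k would have to be fitted from weighed animals.

```python
def estimate_weight(body_size, k):
    """Estimate body weight from body size data (illustrative sketch).

    Uses body_length * mean(shoulder_width, hip_width)**2 as a rough volume
    proxy and a single fitted coefficient k relating that proxy to weight;
    k is NOT a value from the patent.
    """
    width = 0.5 * (body_size["shoulder_width"] + body_size["hip_width"])
    return k * body_size["body_length"] * width * width
```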
It is worth mentioning that, because the camera has a fisheye effect, the method provided by the present disclosure is suited to accurately detecting the body size data of one or more animals located in the center of the overhead image. Unlike prior art that recognizes objects by detecting their contours in the image, the method provided by the present disclosure uses a body size detection model that is not restricted by color, lighting or posture, so accurate body size measurement is still guaranteed when the animals are crowded or in different postures; that is, the body sizes of multiple animals can be detected at once, which helps improve detection efficiency and reduces the time spent herding the animals.
In conclusion, the livestock body size detection method provided by the present disclosure can effectively detect the body size information of the livestock in the image under various illumination conditions and animal postures, and improves the accuracy of vision-based weight estimation.
Corresponding to the method embodiments, the present disclosure also provides a livestock body size detection device, which may be used to execute the method embodiments.
Fig. 5 schematically shows a block diagram of a livestock body size detection device in an exemplary embodiment of the disclosure.
Referring to fig. 5, the livestock body size detection device 500 may include:
a camera 51 for acquiring overhead images of the livestock;
a processor 52, coupled to the camera, configured to obtain the overhead image; input the overhead image into a trained body size detection model, wherein the body size detection model comprises a plurality of serially connected neural networks and the input pictures of the neural networks have the same size; and output the body size data of the livestock.
In an exemplary embodiment of the present disclosure, the body size data comprise the shoulder width, hip width and body length of the animal.
In an exemplary embodiment of the present disclosure, the plurality of serially connected neural networks includes a first network, a second network, and a third network connected in series, and the number of regression key points of the first network, the second network, and the third network is 6.
In an exemplary embodiment of the present disclosure, the processor 52 is further configured to output the body weight of the animal based on the body size data.
In an exemplary embodiment of the disclosure, the livestock is one or more animals located in the center of the overhead image, and the body size data comprise body size data for each of the animals.
In an exemplary embodiment of the present disclosure, the livestock includes pigs, cattle and sheep.
In an exemplary embodiment of the present disclosure, the input picture sizes of the first, second and third networks are all 160 × 160; the convolution kernel sizes of the first and second networks include 5 × 5, the pooling parameters include 4 × 4, and the pooling step size includes 4; the convolution kernel size and pooling parameter of the third network are both 3 × 3, and the pooling step size is 2.
Since the functions of the device 500 have been described in detail in the corresponding method embodiments, they are not repeated here.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, and a bus 630 that couples the various system components including the memory unit 620 and the processing unit 610.
Wherein the storage unit stores program code that is executable by the processing unit 610 to cause the processing unit 610 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 610 may perform a method as shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
Referring to fig. 7, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (8)

1. A livestock body size detection method, characterized by comprising the following steps:
acquiring an overhead view image of the livestock;
inputting the overhead image into a trained body size detection model, wherein the body size detection model is an MTCNN (multi-task cascaded convolutional network) model and comprises a plurality of serially connected neural networks, and the input pictures of the neural networks are the same in size;
outputting the body size data of the livestock;
the plurality of serially connected neural networks comprise a first network, a second network and a third network which are serially connected, the number of regression key points of the first network, the second network and the third network is 6, the sizes of input pictures of the first network, the second network and the third network are 160 x 160, the sizes of convolution kernels of the first network and the second network are 5 x 5, the pooling parameter is 4 x 4, and the pooling step size is 4; the convolution kernel size and pooling parameter of the third network are both 3 x 3, and the pooling step size is 2.
2. The livestock body size detection method of claim 1, further comprising:
and outputting the body weight of the livestock according to the body size data.
3. The livestock body size detection method of claim 1, wherein said body size data comprise the shoulder width, hip width and body length of said livestock.
4. The livestock body size detection method of claim 1, wherein said livestock is one or more animals located in the center of said overhead image, said body size data including body size data for each of said animals.
5. The livestock body size detection method of claim 1, wherein said livestock comprises pigs, cattle and sheep.
6. A livestock body size detection device, characterized by comprising:
the camera is used for acquiring an overhead view image of the livestock;
a processor, coupled to the camera, configured to perform the method of any of claims 1-5.
7. An electronic device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the livestock body size detection method of any of claims 1-5 based on instructions stored in the memory.
8. A computer-readable storage medium on which a program is stored which, when executed by a processor, implements the livestock body size detection method according to any of claims 1-5.
CN201910560495.6A 2019-06-26 2019-06-26 Method and device for detecting livestock body size Active CN110296660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910560495.6A CN110296660B (en) 2019-06-26 2019-06-26 Method and device for detecting livestock body size

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910560495.6A CN110296660B (en) 2019-06-26 2019-06-26 Method and device for detecting livestock body size

Publications (2)

Publication Number Publication Date
CN110296660A CN110296660A (en) 2019-10-01
CN110296660B true CN110296660B (en) 2021-03-02

Family

ID=68028929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910560495.6A Active CN110296660B (en) 2019-06-26 2019-06-26 Method and device for detecting livestock body size

Country Status (1)

Country Link
CN (1) CN110296660B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178381A (en) * 2019-11-21 2020-05-19 北京海益同展信息科技有限公司 Poultry egg weight estimation and image processing method and device
CN111862189B (en) * 2020-07-07 2023-12-05 京东科技信息技术有限公司 Body size information determining method, body size information determining device, electronic equipment and computer readable medium
CN113449638B (en) * 2021-06-29 2023-04-21 北京新希望六和生物科技产业集团有限公司 Pig image ideal frame screening method based on machine vision technology
CN113762745A (en) * 2021-08-24 2021-12-07 北京小龙潜行科技有限公司 Live pig body shape assessment method and device based on machine vision
CN114973332A (en) * 2022-06-21 2022-08-30 河北农业大学 Weight measuring method and device, electronic equipment and living livestock measuring system


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6974373B2 (en) * 2002-08-02 2005-12-13 Geissler Technologies, Llc Apparatus and methods for the volumetric and dimensional measurement of livestock
CN109141248B (en) * 2018-07-26 2020-09-08 深源恒际科技有限公司 Pig weight measuring and calculating method and system based on image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110029760A (en) * 2009-09-16 2011-03-23 (주)제주넷 Management system for livestock communicable disease
CN107180438A (en) * 2017-04-26 2017-09-19 清华大学 Method for estimating yak body size and body weight, and corresponding portable computer device
CN107358223A (en) * 2017-08-16 2017-11-17 上海荷福人工智能科技(集团)有限公司 A kind of Face datection and face alignment method based on yolo
CN107609512A (en) * 2017-09-12 2018-01-19 上海敏识网络科技有限公司 A video face capture method based on a neural network
CN109636779A (en) * 2018-11-22 2019-04-16 华南农业大学 Method, apparatus and storage medium for identifying poultry body size

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A video face detection and recognition method based on MTCNN; 常思远 et al.; Journal of Xuchang University; 2019-03-31; Vol. 38, No. 2; pp. 149-152 *
Body weight prediction of breeding pigs based on RBF neural network; 刘同海 et al.; Transactions of the Chinese Society for Agricultural Machinery; 2013-08-31; Vol. 44, No. 8; pp. 245-249 *

Also Published As

Publication number Publication date
CN110296660A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110296660B (en) Method and device for detecting livestock body size
Jiang et al. FLYOLOv3 deep learning for key parts of dairy cow body detection
Aquino et al. vitisBerry: An Android-smartphone application to early evaluate the number of grapevine berries by means of image analysis
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
Zhang et al. Real-time sow behavior detection based on deep learning
Li et al. Deep cascaded convolutional models for cattle pose estimation
Kalsotra et al. Background subtraction for moving object detection: explorations of recent developments and challenges
CN110163798B (en) Method and system for detecting damage of purse net in fishing ground
CN110991220B (en) Egg detection and image processing method and device, electronic equipment and storage medium
TWI776176B (en) Device and method for scoring hand work motion and storage medium
CN111488766A (en) Target detection method and device
Sachin et al. Vegetable classification using you only look once algorithm
Witte et al. Evaluation of deep learning instance segmentation models for pig precision livestock farming
Ye et al. PlantBiCNet: A new paradigm in plant science with bi-directional cascade neural network for detection and counting
Han et al. Mask_LaC R-CNN for measuring morphological features of fish
Zhang et al. An approach for goose egg recognition for robot picking based on deep learning
CN115995017A (en) Fruit identification and positioning method, device and medium
CN113947780B (en) Sika face recognition method based on improved convolutional neural network
CN112084874B (en) Object detection method and device and terminal equipment
Phan et al. Classification of Tomato Fruit Using Yolov5 and Convolutional Neural Network Models. Plants 2023, 12, 790
CN113221704A (en) Animal posture recognition method and system based on deep learning and storage medium
CN113436259A (en) Deep learning-based real-time positioning method and system for substation equipment
CN112766387A (en) Error correction method, device, equipment and storage medium for training data
Pistocchi et al. Kernelized Structural Classification for 3D dogs body parts detection
Mamat et al. Enhancing Image Annotation Technique of Fruit Classification Using a Deep Learning Approach. Sustainability 2023, 15, 901

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.
