WO2020163484A1 - Computer vision-based feeding monitoring and method therefor - Google Patents

Computer vision-based feeding monitoring and method therefor

Info

Publication number
WO2020163484A1
Authority
WO
WIPO (PCT)
Prior art keywords
feed
livestock
logic circuitry
amount
time
Prior art date
Application number
PCT/US2020/016804
Other languages
French (fr)
Inventor
Joao Reboucas DOREA
Sek Cheong
Guilherme Jordao De Magalhaes Rosa
Original Assignee
Wisconsin Alumni Research Foundation
Priority date
Filing date
Publication date
Application filed by Wisconsin Alumni Research Foundation
Priority to BR112021013111-6A (published as BR112021013111A2)
Priority to EP20751980.2A (published as EP3920691A4)
Priority to US17/428,230 (published as US11937580B2)
Publication of WO2020163484A1
Priority to US18/615,874 (published as US20240224947A1)

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K29/00: Other apparatus for animal husbandry
    • A01K29/005: Monitoring or measuring activity, e.g. detecting heat or mating
    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K5/00: Feeding devices for stock or game; Feeding wagons; Feeding stacks
    • A01K5/01: Feed troughs; Feed pails
    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K5/00: Feeding devices for stock or game; Feeding wagons; Feeding stacks
    • A01K5/02: Automatic devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02: Agriculture; Fishing; Forestry; Mining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30232: Surveillance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30242: Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Animal Husbandry (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Birds (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Agronomy & Crop Science (AREA)
  • Signal Processing (AREA)
  • Primary Health Care (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Biophysics (AREA)
  • Mining & Mineral Resources (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Educational Administration (AREA)
  • Image Analysis (AREA)

Abstract

Aspects of this disclosure are directed to methods and apparatuses involving the characterization of livestock feeding functions. As may be implemented as or with one or more embodiments herein, networked cameras are configured to capture images of a livestock feed area, and machine-vision logic circuitry characterizes, based on the captured images, an amount of available feed and the presence of livestock in the livestock feed area over time. Feed-control logic circuitry may assign time-based condition values to each respective feed area characterized by the cameras, based on the characterized amount of available feed and the characterized presence of livestock provided via the machine-vision logic circuitry. An instruction characterizing the presentation of feed in the feed area may be output based on the assigned time-based condition values and a current time. Such an output may be used to control the presentation amount, timing and/or other feeding characteristics.

Description

COMPUTER VISION-BASED FEEDING MONITORING
AND METHOD THEREFOR
OVERVIEW
Aspects of the present disclosure are directed to computer vision-based feed monitoring. Certain aspects are directed to monitoring feed delivery and related animal behavior, and to managing feeding for the same.
Various aspects are directed to addressing challenges relating to one or more of the design, manufacture and resulting structure/implementation of feed troughs, controlling animal feeding, and ascertaining characteristics of the same. Certain aspects are directed to addressing challenges presented by weather, animal behavior and staff fluctuation, which can affect automated monitoring and related implementations. For instance, improper feeding of cattle can hinder milk production and growth. In some instances, lack of available feed can cause stress to animals, which can be exacerbated over time. Further, it can be challenging to monitor large livestock operations in an accurate and efficient manner, particularly where such operations are in remote areas and/or otherwise do not have the resources to adequately monitor the livestock.
These and other matters have presented challenges to the implementation and management of feed troughs, for a variety of livestock and other applications.
Various example embodiments are directed to articles of manufacture, related apparatuses and methods, which may address various challenges including those noted above.
The above discussion/summary is not intended to describe each embodiment or every implementation of the present disclosure. The figures and detailed description that follow also exemplify various embodiments.
BRIEF DESCRIPTION OF FIGURES
Various example embodiments may be more completely understood in consideration of the following detailed description and in connection with the accompanying drawings, in which:
Figure 1 shows an apparatus/system involving feed monitoring and methods therefor, as may be implemented in accordance with various embodiments;
Figure 2 shows a data flow diagram, as may be implemented in accordance with various embodiments;
Figure 3 shows a data flow diagram involving imaging for characterization of feed bunk amount and animal presence, as may be implemented in accordance with various embodiments;
Figure 4 shows a data flow diagram involving data collection and learning, such as may be implemented with the learning algorithm block in Figure 3 and/or otherwise in accordance with various embodiments; and
Figure 5 shows a data flow diagram involving data integration and generation of feed delivery recommendation, such as may be implemented with the optimization block in Figure 3 and/or otherwise in accordance with various embodiments.
While various embodiments discussed herein are amenable to modifications and alternative forms, aspects thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure including aspects defined in the claims. In addition, the term "example" as may be used throughout this application is by way of illustration, and not limitation.
DETAILED DESCRIPTION
Aspects of the present disclosure are believed to be applicable to a variety of different types of apparatuses, systems and methods involving feed troughs and related monitoring and control, as well as overall livestock feeding applications. Various aspects of the present disclosure have been shown to be beneficial when used in the context of utilizing machine vision for monitoring feed troughs, and related approaches for correlating the detected presence of animals and amount of available feed for generating and outputting a feeding schedule. In accordance with a particular embodiment, a computer vision system includes a plurality of cameras configured and located to capture feed bunk and livestock images. The cameras may be located at different locations in feedlots or other agricultural environments. These images are processed through image analysis algorithms, and utilized in conjunction with one another to generate an output useful in controlling one or more aspects of livestock management, such as related machinery and componentry. In certain specific embodiments, an amount of feed in a feed bunk as well as a number of livestock at the bunks are assessed relative to time. Results of the assessment relative to amount and/or leftover feed and the number of livestock at respective times are utilized to characterize aspects of the feeding environment. For instance, the amount of feed and livestock present in an agricultural environment at respective times can be utilized to determine conditions under which more feed is needed and/or when too much feed is present. This data can also be automatically utilized to generate a call for additional feed, or to generate an output indicating a condition when too much feed has been presented. This data can further be utilized to generate a predictive feeding schedule, based on time of day and predicted feeding needs. This data may also be utilized to characterize animal starvation status, by combining the feeding behavior and bunk management predictions.
Various other environmental conditions can be monitored and used with feed amount and livestock presence, such as noted above. For instance, in some embodiments one or more of actual or forecast precipitation, temperature, wind, humidity, barometric pressure, and amount of sun (or clouds) can be ascertained and utilized in generating outputs indicative of a feeding need, or in generating a predictive output in regard to the same.
Outputs generated based on the monitoring and machine vision can be tailored to suit particular applications. In some embodiments, an output is generated to instruct an amount of feed to be delivered to one or more bunks to maximize feedstock gain or milk yield while minimizing feed waste. This output can be dynamically generated based on monitored characteristics as noted herein, based on livestock behavior and/or environmental conditions.
Certain embodiments are directed to tracking characteristics of feed delivery, such as quantity, time, and an amount of time an employee or machine performs a specific task. For instance, objects may be identified via machine vision, and characteristics of feed delivery as related to the presence of the objects can be tracked relative to time and utilized with data characterizing feed amount and livestock presence. A variety of power sources may be utilized for powering machine vision and/or processing techniques implemented herein, to suit particular embodiments. In some embodiments, electricity is provided through an electric wire, by solar panels, or by induction.
Results of machine vision and related processing can be processed or otherwise provided in a variety of manners. In some embodiments, images are processed at or near the location at which they are obtained, for ascertaining feed amount and livestock presence, or a variety of other imaged aspects as characterized herein. Such processing may further involve ascertaining aspects of delivering feed. In other embodiments, images are transmitted to a remote location (e.g., to cloud computing circuitry) where the images are processed (e.g., algorithms are run), and results are transmitted back for use in controlling the provision of feed to the monitored livestock and related feed trough. The resulting instructions/output can be provided in a number of manners, such as through a dashboard on computers, tablets or smartphones.
It has been recognized/discovered that, using aspects as noted herein with behavior of the livestock, an association, relationship or correlation between the behavior and feed presence and amount can be made and utilized to generate and output enhanced data that can be used in managing the delivery of feed. Various such associations, relationships and/or correlations may be made in this manner, with certain embodiments directed to such correlation being further discussed herein.
In some implementations, machine vision is utilized for ascertaining an amount of feed in a bunk as well as a number of animals within an area at the bunk. The ascertained amount of feed is categorized into predetermined ranges of feed amounts, and the ascertained number of animals is also categorized into predetermined ranges of numbers of animals. Such ranges may include, for example, amounts of feed corresponding to full, medium, low and empty states of the bunk. For the animals, such ranges may include numbers of animals corresponding to empty, low, half, and full states of the area around the bunk. The images can be processed in real time and utilized to generate outputs based on a combination of the respective states. For instance, an empty bunk and empty area (of animals), or a full bunk with a full area of animals, may correspond to a "green flag" condition. In the first scenario, although the feed bunk is empty, animals may not be hungry or anxiously waiting for feed to be delivered. In the second scenario, animals are feeding and there is plenty of feed available. However, a half-full area of animals around an empty feed bunk may correspond to a "red flag" condition indicative of a need for feed in the bunk, which may be communicated via an automatically-generated warning output.
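For illustration only, the Python sketch below encodes one possible version of this state-combination logic; the rule table, the intermediate "yellow" state, and the function name are assumptions made for this sketch rather than rules prescribed by the disclosure.

```python
# Hypothetical flag logic combining categorized bunk and animal states.
BUNK_LEVELS = ("empty", "low", "medium", "full")
ANIMAL_LEVELS = ("empty", "low", "half", "full")

def flag_condition(bunk_level: str, animal_level: str) -> str:
    """Combine categorized bunk and animal states into a feeding flag."""
    if bunk_level == "empty" and animal_level in ("half", "full"):
        # Many animals waiting at an empty bunk: feed is needed now.
        return "red"
    if bunk_level == "empty" and animal_level == "empty":
        # Bunk is empty but no animals are waiting; likely not hungry yet.
        return "green"
    if bunk_level == "full":
        # Plenty of feed available, regardless of how many animals feed.
        return "green"
    return "yellow"  # intermediate states may warrant closer monitoring

print(flag_condition("empty", "half"))  # -> "red"
```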
In various embodiments, animal feeding patterns are predicted based upon the respective states noted above, relative to time. Such patterns may also relate to when and how the feed is made available, or can be dictated based on desired feeding times. For instance, where a feeding pattern is indicative of a total amount of feed that may be consumed at a particular given time, the amount and timing of feeding can be tailored accordingly. Weather conditions and weather forecasts can also be utilized as noted above, for example to avoid having feed present while precipitation is occurring (or to delay delivery of feed if strong rain is expected), helping to limit the amount of feed exposed to precipitation and thereby mitigate spoilage.
In various embodiments, a feed bunk management system includes a network of interconnected data collection stations. A central station, such as may involve a computer server, is also connected to the data collection stations. This connection may, for example, involve an internet protocol connection, cellular connection or other data connection. An internet protocol connection may be made using a POE (Power Over Ethernet) connection that also provides power. Each data collection station includes a camera and logic circuitry, such as a NUC (Next Unit of Computing) computer. The camera captures images of a feed bunk and the NUC performs preliminary data processing and stores the data temporarily until the data is uploaded to the central station (server and/or cloud).
The cameras and NUCs may be powered either through electric wires, if available at the feedlot, or through solar panels. Such solar panels may be paired with batteries that store energy, enabling the system to continue working at night and when sunlight is not available due to weather conditions. Batteries can also be installed on feedlots with electric energy, as a backup system for when power is lost.
As may be implemented in accordance with one or more embodiments, an apparatus includes a plurality of networked cameras, machine-vision logic circuitry and feed-control logic circuitry. Each camera is configured and arranged to capture images of a livestock feed area. The machine-vision logic circuitry is configured and arranged to, for each of the captured images, characterize an amount of available feed in the livestock feed area depicted in the captured image over time, and characterize the presence of livestock in the livestock feed area depicted in the captured image over time. The feed-control logic circuitry is configured and arranged to, for each respective feed area characterized by the plurality of networked cameras, assign time-based condition values based on the characterized amount of available feed and the characterized presence of livestock provided via the machine-vision logic circuitry, and output an instruction characterizing the presentation of feed in the feed area based on the assigned time-based condition values and a current time.
The feed-control logic circuitry can be implemented in a variety of manners. In some embodiments, the feed-control logic circuitry is configured and arranged to assign the time-based condition values based on a number of livestock present over time during which the characterized amount of available feed is below a threshold level. Variables may be utilized, such as by assigning the time-based condition values using a score assigned to respective variables representing a level of feed available in the feed area and the number of livestock in the feed area at a common time. The scored variables are processed in an algorithm that utilizes the scored variables as inputs for providing a notification that feed is needed in the feed area, which is generated as the output. The feed-control logic circuitry may be used to predict future feeding needs of the livestock in each livestock feed area based on the assigned time-based condition values, and output an instruction directing the provision of feed in the feed area at a future time, based on the predicted future feeding needs. For instance, the feed-control logic circuitry may operate with the machine-vision logic circuitry to assign the time-based condition values based on the characterized presence of livestock under conditions when the characterized amount of available feed is below a threshold, and to predict future feeding needs of the livestock in each livestock feed area based on the assigned time-based condition values. The feed-control logic circuitry may further operate with the machine-vision logic circuitry to assign the time-based condition values based on the characterized presence of livestock relative to one or more threshold amounts of the characterized amount of available feed, and to predict future feeding needs of the livestock in each livestock feed area based on the assigned time-based condition values. Weather conditions may also be utilized by the feed-control logic circuitry to output an instruction characterizing the presentation of the feed for each respective feed area based on current or predicted weather-based variables applicable to the feed area.
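A minimal sketch of such score-based, time-indexed condition values might look like the following; the score tables, the three-observation window, and the threshold value are illustrative assumptions, not values taken from the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical scores for the two variables named above: feed level and
# livestock presence categorized at a common time.
FEED_SCORE = {"empty": 0, "low": 1, "medium": 2, "full": 3}
ANIMAL_SCORE = {"empty": 0, "low": 1, "half": 2, "full": 3}

@dataclass
class Observation:
    time: datetime
    feed_level: str
    animal_level: str

def condition_value(obs: Observation) -> int:
    """Assign a time-based condition value from the scored variables.

    Higher values indicate greater feeding need: many animals present
    while little feed is available.
    """
    return ANIMAL_SCORE[obs.animal_level] - FEED_SCORE[obs.feed_level]

def needs_feed(observations: list[Observation], threshold: int = 2) -> bool:
    """Notify when recent condition values satisfy the notification condition."""
    recent = sorted(observations, key=lambda o: o.time)[-3:]
    return all(condition_value(o) >= threshold for o in recent)

obs = [Observation(datetime(2020, 2, 5, h), "low", "full") for h in (6, 7, 8)]
print(needs_feed(obs))  # -> True: full pen, low feed, sustained over time
```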
In another embodiment, the feed-control logic circuitry triggers a prediction task for captured images in each feed area, and implements a deep learning algorithm to generate predicted classes based on a model trained using transfer-learning strategies. The predicted classes and respective probabilities are stored with timestamp and location data. Feeding needs for an upcoming feeding period for the livestock are predicted based on the assigned time-based condition values, the predicted classes, weather forecast data, and animal behavior characteristics linked to the feed area.
In certain implementations, the machine-vision logic circuitry determines the animal behavior characteristics in the feed area based on one or more of animal movement and animal presence. For instance, restless animals may move around more than calm animals, or animals may feed less when nervous. Animals may also move faster or slower than an expected rate of movement, based on a variety of characteristics ranging from nervous conditions to other health conditions.
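As a rough illustration of quantifying such movement, the sketch below computes an average movement rate from tracked animal centroid positions; the track format, units, and the idea of comparing against an expected rate are assumptions made for this sketch.

```python
import math

def movement_rate(track: list[tuple[float, float, float]]) -> float:
    """Average speed (m/s) over a centroid track of (t_seconds, x_m, y_m) samples."""
    dist = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(track, track[1:])
    )
    elapsed = track[-1][0] - track[0][0]
    return dist / elapsed if elapsed > 0 else 0.0

# A hypothetical 10-second track; a rate far from the expected range could
# flag restless (fast) or lethargic (slow) behavior for follow-up.
track = [(0.0, 0.0, 0.0), (5.0, 1.5, 0.5), (10.0, 3.0, 1.5)]
print(f"{movement_rate(track):.2f} m/s")
```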
The machine-vision logic circuitry may also be utilized in a variety of manners. In some implementations, the machine-vision logic circuitry characterizes an amount of available feed by detecting a level of feed available in a feed container accessible by the livestock for feeding, and characterizes the presence of livestock in the livestock feed area by characterizing a number of livestock present at the feed container. The feed-control logic circuitry then assigns the time-based condition values by assigning a score to respective variables representing the level of feed and the number of livestock for one or more points in time, and outputs the instruction by providing a notification in response to the assigned scores of the variables satisfying a condition.
In certain embodiments, different ones of the networked cameras are located at respective ones of the livestock feed areas. The machine-vision logic circuitry includes respective logic circuits located at each of the livestock feed areas, each logic circuit being configured to process images captured by the networked camera at its corresponding livestock feed area to provide the characterization of the amount of available feed and the presence of livestock. An output representing the respective characterizations is transmitted to the feed-control logic circuitry.
An amount of feed may be estimated in a variety of manners. In some embodiments, each of a plurality of networked cameras is configured to capture an image of a livestock feed area by capturing a portion of the livestock feed area that is less than all of the livestock feed area. The machine-vision logic circuitry is then configured to characterize the amount of available feed and the presence of the livestock by estimating a total amount of feed and a total number of livestock in the entire livestock feed area, based on the image of the portion of the livestock feed area.
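One simple way to realize such an estimate is proportional extrapolation, under the assumption that the imaged portion is a known and representative fraction of the feed area; the fraction, units, and measured values in the sketch below are illustrative.

```python
def estimate_totals(feed_in_view_kg: float, animals_in_view: int,
                    view_fraction: float) -> tuple[float, int]:
    """Scale quantities measured in the imaged portion up to the full area."""
    if not 0 < view_fraction <= 1:
        raise ValueError("view_fraction must be in (0, 1]")
    total_feed = feed_in_view_kg / view_fraction
    total_animals = round(animals_in_view / view_fraction)
    return total_feed, total_animals

# e.g., a camera whose field of view covers 40% of the bunk line:
print(estimate_totals(feed_in_view_kg=120.0, animals_in_view=18,
                      view_fraction=0.4))  # -> (300.0, 45)
```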
Weather data may be used to augment data concerning feed amount and livestock presence as ascertained via machine vision or otherwise, in accordance with the various embodiments herein. In a particular embodiment, the aforementioned feed-control logic circuitry is configured to operate with the machine-vision logic circuitry to utilize a data-mining algorithm with the characterized amount of available feed, the characterized presence of livestock, and weather data as inputs to the algorithm. This algorithm and these inputs are used to predict an amount of feed needed, and to output an instruction based on the predicted amount of feed.
A variety of algorithmic approaches can be implemented for assessing and predicting an amount of feed to be delivered, and can be adjusted based on learning algorithms and actual observed feed and livestock amounts. In some embodiments, feed-control logic circuitry operates to generate an algorithm model for predicting feed levels and cattle presence based on a plurality of the images of the livestock feed area, the characterized amount of available feed and the characterized presence of livestock. In response to a new captured image of the livestock feed area, the amount of available feed and the presence of livestock depicted in the captured image are characterized and used as inputs to the algorithm model to generate a predictive output. Such an output may indicate conditions under which an insufficient amount of feed will be present, and/or an excess amount of feed is or will be present, as defined for a threshold level of livestock.
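As one hedged sketch of such a learned predictive model (the regression formulation, hand-picked features, training rows, and choice of a random forest are all assumptions; the disclosure does not prescribe a specific learner), feed level, livestock count, and weather variables could be mapped to a recommended delivery amount:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative training rows; columns are
# feed_level_score, n_animals, temperature_c, precip_prob.
X = np.array([
    [0, 40, 22.0, 0.1],
    [1, 25, 18.0, 0.6],
    [3, 10, 25.0, 0.0],
    [0, 55, 15.0, 0.3],
])
y = np.array([900.0, 500.0, 0.0, 1200.0])  # kg of feed to deliver (illustrative)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
predicted_kg = model.predict([[1, 30, 20.0, 0.2]])[0]
print(f"recommended delivery: {predicted_kg:.0f} kg")
```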
Networked cameras as characterized herein may be implemented in one or more of a variety of manners. In some implementations, each camera is an autonomous unit that does not rely on an internet connection or network signal, and may otherwise communicate with logic/processing circuitry for assessing feed/livestock. Each camera or camera unit may have a computer, such as a small single-board computer with storage, a broadband cellular network or other communication link, a solar panel, and a battery. Each camera or camera unit may also have a Wi-Fi connection, which may be implemented, for example, in environments in which no cellular network is present and/or in which Wi-Fi is preferred. A local server may be installed in a local environment involving the networked cameras and may distribute internet through a radio signal. Such cameras/camera units may operate in parallel and autonomously. In the absence of an Internet or other network connection, images may be temporarily stored on a single-board computer as noted above and uploaded later, such as in batches, to cloud or other storage when a connection is re-established.
In certain embodiments, a deep-learning algorithm may be used to perform instance segmentation in parallel with the tasks associated with identification and localization. This approach may be divided into three phases. First, a backbone network (with a deep-learning algorithm) may extract feature maps from input images. Second, feature maps generated from the backbone deep-learning algorithm may be sent to a region proposal network (RPN) to produce regions of interest (ROIs). Third, the ROIs generated by the RPN are mapped to extract corresponding target features in the shared feature maps and subsequently output to fully connected layers (FC) and a fully convolutional network (FCN), which may be used to classify targets and segment instances, respectively. Such an approach may be carried out using a Mask R-CNN type algorithm as denoted in He, K., Gkioxari, G., Dollár, P., and Girshick, R., "Mask R-CNN," Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 2980-2988, 2017, which is fully incorporated herein by reference. In certain embodiments, such an approach may be carried out with the first, second and third phases respectively implemented in accordance with Figures 3, 4 and 5 as characterized below.
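For reference, the three phases map directly onto off-the-shelf Mask R-CNN implementations. The following minimal sketch uses the torchvision implementation (assuming a recent torchvision release) with COCO-pretrained weights as a stand-in; in practice the network would be fine-tuned on labeled feed-bunk images.

    # Sketch of the three-phase pipeline via torchvision's Mask R-CNN, which
    # bundles the backbone (phase 1), RPN (phase 2), and FC/FCN heads (phase 3).
    # Pretrained COCO weights are a stand-in for a model tuned on bunk images.
    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

    image = torch.rand(3, 480, 640)      # one RGB frame, values in [0, 1]
    with torch.no_grad():
        out = model([image])[0]          # backbone -> RPN/ROIs -> heads

    boxes = out["boxes"]                            # localization (FC head)
    labels, scores = out["labels"], out["scores"]   # classification (FC head)
    masks = out["masks"]                            # per-instance masks (FCN head)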
Turning now to the figures, Figure 1 shows an apparatus 100 (or system), as may be implemented in accordance with one or more embodiments. The apparatus 100 includes a plurality of cameras respectively located within feed areas 110, 111, and 112-N (and further feed areas). These cameras communicate with machine-vision logic circuitry 120, which operates to assess images captured via the cameras, to characterize an amount of feed and a number of animals in each respective one of the feed areas, and to provide an output indicative of the same. This approach may involve, for example, imaging a feed trough and a predefined area around the feed trough, and ascertaining an amount of feed in the trough as well as a number of animals around the feed trough. Feed-control logic circuitry 130 utilizes the feed/livestock characterization to generate a feed instruction, for example by indicating whether feed is needed at a current time, or by generating a feed schedule based upon monitoring. Generating a feed schedule may, for example, involve predicting a feed schedule or otherwise providing an output as noted herein.
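As a structural illustration of this flow (the names and types here are hypothetical, standing in for circuitry 120 and 130 rather than defining them):

    # Hypothetical sketch of the Figure 1 flow; names are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Characterization:          # output of machine-vision circuitry 120
        feed_amount: str             # e.g. "empty", "low", "medium", "full"
        head_count: int              # animals detected at the trough

    def feed_control(c: Characterization) -> str:   # feed-control circuitry 130
        """Turn a per-image characterization into a feed instruction."""
        if c.feed_amount in ("empty", "low") and c.head_count > 0:
            return "feed needed now"
        return "no action"

    print(feed_control(Characterization("empty", 12)))   # -> feed needed now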
The machine vision logic circuitry 120 and feed-control logic circuitry 130 may be implemented in a variety of manners. In some embodiments, the machine vision logic circuitry is implemented with the feed-control logic circuitry in a common circuit. In certain embodiments, the machine vision logic circuitry is implemented as separate circuits within and/or connected locally (e.g., directly) to each camera in the feed areas 110-N, facilitating the transmission of data characterizing the feed/livestock, which may be useful for limiting the amount of data transmitted over distance (e.g., without the need for transmitting images that may involve a large amount of data). In other embodiments, the machine vision logic circuitry is located remotely from the cameras/feed areas 110-N, and processes the data from each feed area to provide an output characterizing the feed and/or livestock.
Figure 2 is a data-flow type diagram characterizing an approach and apparatus/system 200 for assessing images via machine vision to determine an amount of feed and livestock present, in accordance with another example embodiment. Data transfer and storage are carried out on a local server (1), and data are transferred to a cloud platform (2), from which data analysis is performed at (3) and data visualization is provided at (4). In some implementations, the local server (1) is omitted and communications are made directly to the cloud platform (2). In other implementations, one or more aspects shown at 1, 2, 3 and 4 are combined. For instance, preliminary data analysis may take place locally at the feed trough location.
In a particular embodiment involving the apparatus 200, an image is acquired on an interval (e.g., every 15 minutes) by a Wi-Fi camera and is sent through a network to a local server (1), where the image is stored and sent to a cloud platform (2). Each image may have an average size of 700 KB, and image types may include RGB, depth, and infrared. If an Internet connection is available, data may be transferred automatically from the local server (1) to the cloud platform (2) in real time. If the connection is temporarily unavailable, data is stored locally at (1) and sent to the cloud at (2) when the connection is re-established. In the cloud at (2), images are stored, such as by using Blob storage (Binary Large Objects).
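The store-and-forward behavior described above might be sketched as follows; the endpoint URL, buffer directory, and batch size are assumptions made purely for illustration.

    # Sketch of local buffering with batched upload on reconnection; the URL,
    # paths, and batch size are hypothetical placeholders.
    import pathlib
    import requests

    BUFFER = pathlib.Path("/var/spool/feedcam")
    UPLOAD_URL = "https://example.invalid/upload"   # placeholder endpoint

    def online(timeout: float = 3.0) -> bool:
        try:
            requests.head(UPLOAD_URL, timeout=timeout)
            return True
        except requests.RequestException:
            return False

    def flush_buffer(batch_size: int = 20) -> None:
        """Upload buffered images in batches, deleting each one on success."""
        if not online():
            return
        for path in sorted(BUFFER.glob("*.jpg"))[:batch_size]:
            with open(path, "rb") as f:
                resp = requests.post(UPLOAD_URL, files={"image": f})
            if resp.ok:
                path.unlink()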
Processing of the image data can be carried out in a variety of manners, to suit particular applications. In some embodiments, each new image arriving in the Blob storage triggers a function that calls an algorithm to generate predictions on the respective image. Thousands of images may be labeled for bunk score classes to characterize a level of feed and livestock presence. For instance, feed levels corresponding to empty, low, medium, and full, and livestock-presence levels corresponding to empty, low, half, and full, may be utilized for labeling images.
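The disclosure does not fix the trigger mechanism; one minimal sketch, assuming the azure-storage-blob client and polling rather than an event-driven function (the container name, connection string, and label sets shown are likewise assumptions):

    # Polling sketch; an event-driven cloud function would serve equally well.
    from azure.storage.blob import ContainerClient

    FEED_CLASSES = ["empty", "low", "medium", "full"]       # bunk score labels
    PRESENCE_CLASSES = ["empty", "low", "half", "full"]     # livestock presence

    container = ContainerClient.from_connection_string(
        conn_str="<connection-string>", container_name="bunk-images")

    seen: set[str] = set()

    def poll_once(predict) -> None:
        """Run predict(image_bytes) on each blob not yet processed."""
        for blob in container.list_blobs():
            if blob.name in seen:
                continue
            image_bytes = container.download_blob(blob.name).readall()
            feed_class, presence_class = predict(image_bytes)
            seen.add(blob.name)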
After a desired number of images has been labeled, the images can be used for predicting feed levels and cattle presence. For instance, a Convolutional Neural Network (CNN) can be trained in order to generate accurate predictions. After model assessment (in terms of prediction quality), an algorithm corresponding to the trained model can be stored in the cloud. Thus, for every new image coming to the cloud, the algorithm can be called and a prediction made. The result of the prediction (e.g., the prediction and its associated probability, as a measure of uncertainty), along with date, time, and a unique identifier, can be saved (e.g., in another Blob storage). Results of the predictions can then be downloaded to a local server where they can be visualized in a dashboard.
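The disclosure names a CNN but no particular architecture; a minimal transfer-learning sketch, assuming a ResNet-18 backbone, a recent torchvision release, and the four bunk-score classes, might look like:

    # Transfer-learning sketch; the backbone and hyperparameters are assumptions.
    # predict() returns the class with its probability as the uncertainty measure
    # that is stored alongside date, time, and identifier.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    model = resnet18(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, 4)   # empty/low/medium/full

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One step on a batch: images (N,3,H,W) in [0,1], labels class indices."""
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    def predict(image: torch.Tensor) -> tuple[int, float]:
        """Return (class index, probability) for a single (3,H,W) image."""
        with torch.no_grad():
            probs = torch.softmax(model(image.unsqueeze(0)), dim=1)[0]
        return int(probs.argmax()), float(probs.max())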
Figure 3 shows a data flow diagram involving imaging for characterization of feed bunk amount and animal presence, as may be implemented in accordance with various embodiments. Images are collected at block 310, and internet (or other network) availability is checked at block 311. If the internet is unavailable, images are stored in a local computer (or other memory) at 312, and internet availability is checked again (e.g., iteratively) at 313. If the internet is available at 313 or at 311, batches of images are sent to cloud storage at 314.
At block 320, a process is initiated for each new image arriving, and proceeds by triggering a function that calls a deep learning algorithm at block 321. At block 322, a predictive model generates and outputs predictions for a feed bunk amount 323 and a number of animals at the bunk 324, for each respective image (or, e.g., for a few images taken closely in time). These predictions may include, for example, four feed bunk levels as shown (full, medium, low, empty) and three animal levels as shown (full, medium and empty).
Processing is initiated at block 330 for each prediction, with each prediction classified/named and assigned a probability at block 331, and the resulting information stored. At block 332, an optimization model is applied to the database and used to determine a feed amount.
Figure 4 shows a data flow diagram involving data collection and learning, such as may be implemented with the learning algorithm block 321 in Figure 3 and/or otherwise in accordance with various embodiments. Image collection is shown at block 410, and the images are used for implementing a deep learning algorithm. This may be carried out, for example, by collecting images as inputs at 420, with a backbone-learning network 430 processing the images to generate feature maps 432. The feature maps may be processed in a region proposal network 440, which produces regions of interest at block 442 that can be combined with the feature maps at 450. These regions of interest can thus be mapped to extract corresponding target features in the shared feature maps and used at 460 to generate coordinates and categories with fully connected layers (FC), and a mask with a fully convolutional network (FCN), which may be used to classify targets and segment instances, respectively. Such an approach may be carried out using a Mask R-CNN type algorithm as referenced above.
Figure 5 shows a data flow diagram involving data integration and generation of a feed delivery recommendation, such as may be implemented with the optimization block 332 in Figure 3 and/or otherwise in accordance with various embodiments. At block 510, images are collected and used at block 512 to populate a database. Weather data is collected at block 520, integrated with the collected images at block 530, and used to populate a database at 540. An optimization algorithm is initiated at block 550 and used to generate a recommended feed delivery at 560, based on the collected image and weather data. Results may be displayed at block 565. Success of the recommendation may be checked at block 570, such as by assessing an actual implementation of a recommended amount of feed that is delivered (e.g., whether it was enough, too little, or became too wet). The optimization may be adjusted at 575 based on the assessed recommendation(s) from block 570 and used in providing a subsequent feed delivery recommendation at 560.
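An illustrative sketch of this recommend/check/adjust loop follows; the starting rate, rain discount, and correction step are assumed values, the point being only that the recommendation adapts to checked outcomes as in blocks 550-575.

    # Feedback-loop sketch for blocks 550-575; all numeric values are assumed.
    state = {"kg_per_head": 11.0}   # hypothetical starting delivery rate

    def recommend(head_count: int, rain_expected: bool) -> float:
        """Recommended feed delivery (kg), trimmed ahead of expected rain."""
        kg = state["kg_per_head"] * head_count
        return kg * 0.9 if rain_expected else kg

    def adjust(outcome: str, step: float = 0.25) -> None:
        """Nudge the per-head rate based on the checked outcome (block 570)."""
        if outcome == "ran_out":
            state["kg_per_head"] += step
        elif outcome in ("left_over", "got_wet"):
            state["kg_per_head"] -= step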
Various terminology used herein (and in the claims) may be implemented by way of various circuits or circuitry, as may be illustrated by or referred to as a block, module, device, system, unit, controller, model, computer function and/or other circuit-type depictions (e.g., the various blocks/modules depicted in Figures 3-5). Such circuits or circuitry are used together with other elements to exemplify how certain embodiments may be carried out in the form of structures, steps, functions, operations, and activities. For example, in certain of the above-discussed embodiments, one or more modules are discrete logic circuits or programmable logic circuits configured for implementing operations/activities, as may be carried out in the approaches shown in the Figures and/or otherwise characterized herein. In certain embodiments, a programmable circuit as may be implemented for one or more blocks is one or more computer circuits, including memory circuitry for storing and accessing a program to be executed as a set (or sets) of instructions (and/or to be used as configuration data to define how the programmable circuit is to perform), and an algorithm or process as described in connection with the feed characterization/prediction approaches, or as described with the figures, is used by the programmable circuit to perform the related steps, functions, operations, activities, etc. Depending on the application, the instructions (and/or configuration data) can be configured for implementation in logic circuitry, with the instructions (whether characterized in the form of object code, firmware or software) stored in and accessible from a memory (circuit).
Based upon the above discussion and illustrations, those skilled in the art will readily recognize that various modifications and changes may be made to the various embodiments without strictly following the exemplary embodiments and applications illustrated and described herein. For example, a variety of different types of feed troughs and feeding approaches may be monitored, and a variety of different types of animals may be monitored. Other factors, such as time of year, number of animals, and environmental conditions, may also be utilized as part of a characterization of an overall feeding environment and to provide insight as to how to manage feeding. Various modelling approaches may be utilized to generate specific characterizations based on available data sets and use thereof. Such modifications do not depart from the true spirit and scope of various aspects of the invention, including aspects set forth in the claims.

Claims

What is Claimed is:
1. An apparatus comprising:
a plurality of networked cameras, each camera configured and arranged to capture images of a livestock feed area;
machine-vision logic circuitry configured and arranged to, for each of the captured images,
characterize an amount of available feed in the livestock feed area depicted in the captured image over time, and
characterize the presence of livestock in the livestock feed area depicted in the captured image over time; and
feed-control logic circuitry configured and arranged to, for each respective feed area characterized by the plurality of networked cameras,
assign time-based condition values to the feed area based on the characterized amount of available feed and the characterized presence of livestock provided via the machine-vision logic circuitry, and
output an instruction characterizing the presentation of feed in the feed area based on the assigned time-based condition values and a current time.
2. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged to assign the time-based condition values based on a number of livestock present over time during which the characterized amount of available feed is below a threshold level.
3. The apparatus of claim 1, wherein
the machine-vision logic circuitry is configured and arranged to
characterize the amount of available feed by detecting a level of feed available in a feed container accessible by the livestock for feeding, and
characterize the presence of livestock in the livestock feed area by characterizing a number of livestock present at the feed container; and
the feed-control logic circuitry is configured and arranged to
assign the time-based condition values by assigning a score to respective variables representing the level of feed and the number of livestock for one or more points in time, and output the instruction by providing a notification in response to the assigned scores of the variables satisfying a condition.
4. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged to
assign the time-based condition values by assigning a score to respective variables representing a level of feed available in the feed area and the number of livestock in the feed area at a common time, and
output the instruction by processing the scored variables in an algorithm that utilizes the scored variables as inputs for providing a notification that feed is needed in the feed area.
5. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged to:
predict future feeding needs of the livestock in each livestock feed area based on the assigned time-based condition values, and
output the instruction by outputting an instruction directing the provision of feed in the feed area at a future time, based on the predicted future feeding needs.
6. The apparatus of claim 5, wherein the feed-control logic circuitry is configured and arranged with the machine-vision logic circuitry to
assign the time-based condition values based on the characterized presence of livestock under conditions when the characterized amount of available feed is below a threshold, and
predict future feeding needs of the livestock in each livestock feed area based on the assigned time-based condition values.
7. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged with the machine-vision logic circuitry to
assign the time-based condition values based on the characterized presence of livestock relative to one or more threshold amounts of the characterized amount of available feed, and
predict future feeding needs of the livestock in each livestock feed area based on the assigned time-based condition values.
8. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged to output the instruction characterizing the presentation of the feed for each respective feed area based on current or predicted weather-based variables applicable to the feed area.
9. The apparatus of claim 1, wherein
different ones of the plurality of networked cameras are located at respective ones of the livestock feed areas,
the machine-vision logic circuitry includes respective logic circuits located at each of the livestock feed areas, each logic circuit being configured and arranged to process images captured by the networked camera at its corresponding livestock feed areas to provide the characterization of the amount of available feed and the presence of livestock, and to transmit an output representing the respective characterizations to the feed-control logic circuitry.
10. The apparatus of claim 1, wherein
each of the plurality of networked cameras is configured and arranged to capture an image of the livestock feed area by capturing a portion of the livestock feed area that is less than all of the livestock feed area; and
the machine-vision logic circuitry is configured and arranged to characterize the amount of available feed and the presence of the livestock by estimating a total amount of feed and a total number of livestock in the entire livestock feed area, based on the image of the portion of the livestock feed area.
11. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged to:
generate an algorithm model for predicting feed levels and cattle presence based on a plurality of the images of the livestock feed area, the characterized amount of available feed and the characterized presence of livestock;
in response to a new captured image of the livestock feed area, characterize the amount of available feed and the presence of livestock depicted in the captured image, execute the algorithm model with the amount of available feed and presence of livestock as inputs, and generate a predictive output indicating characteristics at which one or more of: an insufficient amount of feed will be present as defined for a threshold level of livestock; and
an excess amount of feed is present as defined for the threshold level of livestock.
12. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged with the machine-vision logic circuitry to utilize a data mining algorithm with the characterized amount of available feed, the characterized presence of livestock, and weather data as inputs to the algorithm, to predict an amount of feed needed, and to output an instruction based on the predicted amount of feed.
13. The apparatus of claim 1, wherein the feed-control logic circuitry is configured and arranged to, for each feed area:
trigger a prediction task for the captured images and implement a deep learning algorithm to generate predicted classes based on a model trained using transfer-learning strategies;
store the predicted classes and respective probabilities with timestamp and location data; and
predict feeding needs for an upcoming feeding period for the livestock based on the assigned time-based condition values, the predicted classes, weather forecast data, and animal behavior characteristics linked to the feed area.
14. The apparatus of claim 13, wherein the machine-vision logic circuitry is configured and arranged to determine the animal behavior characteristics in the feed area based on one or more of animal movement and animal presence.
15. An apparatus comprising:
machine-vision logic circuitry configured and arranged to, for images of a livestock feed area captured by a plurality of networked cameras:
characterize an amount of available feed in the feed area depicted in the images over time, and
characterize the presence of livestock in the feed area depicted in the images over time; and feed-control logic circuitry configured and arranged to:
assign time-based condition values to the feed area based on the characterized amount of available feed and the characterized presence of livestock provided via the machine-vision logic circuitry, and
output an instruction characterizing the feed in the feed area based on the assigned time-based condition values.
16. A method comprising:
for each captured image of a livestock feed area,
characterizing an amount of available feed in the livestock feed area depicted in the captured image over time, and
characterizing the presence of livestock in the livestock feed area depicted in the captured image over time; and
for each respective feed area characterized by each captured image,
assigning time-based condition values based on the characterized amount of available feed and the characterized presence of livestock, and
outputting an instruction characterizing the presentation of feed in the feed area based on the assigned time-based condition values and a current time.
17. The method of claim 16, further including assigning the time-based condition values based on a number of livestock present over time during which the characterized amount of available feed is below a threshold level.
18. The method of claim 16, wherein:
characterizing the amount of available feed includes detecting a level of feed available in a feed container accessible by the livestock for feeding;
characterizing the presence of livestock in the livestock feed area includes characterizing a number of livestock present at the feed container;
assigning the time-based condition values includes assigning a score to respective variables representing the level of feed and the number of livestock for one or more points in time; and
outputting the instruction includes providing a notification in response to the assigned scores of the variables satisfying a condition.
19. The method of claim 16, wherein:
assigning the time-based condition values includes assigning a score to respective variables representing a level of feed available in the feed area and the number of livestock in the feed area at a common time; and
outputting the instruction includes processing the scored variables in an algorithm that utilizes the scored variables as inputs for providing a notification that feed is needed in the feed area.
20. The method of claim 16, further including predicting feeding needs of the livestock in each livestock feed area based on the assigned time-based condition values, wherein outputting the instruction includes outputting an instruction directing the provision of feed in the feed area at a future time, based on the predicted future feeding needs.
PCT/US2020/016804 2019-02-05 2020-02-05 Computer vision-based feeding monitoring and method therefor WO2020163484A1 (en)


Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962801439P 2019-02-05 2019-02-05
US62/801,439 2019-02-05

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US17/428,230 A-371-Of-International US11937580B2 (en) 2019-02-05 2020-02-05 Computer vision-based feeding monitoring and method therefor
US18/615,874 Continuation US20240224947A1 (en) 2019-02-05 2024-03-25 Computer vision-based feeding monitoring and method therefor

Publications (1)

Publication Number Publication Date
WO2020163484A1 2020-08-13

Family

ID=71947212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/016804 WO2020163484A1 (en) 2019-02-05 2020-02-05 Computer vision-based feeding monitoring and method therefor

Country Status (4)

Country Link
US (2) US11937580B2 (en)
EP (1) EP3920691A4 (en)
BR (1) BR112021013111A2 (en)
WO (1) WO2020163484A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020163484A1 (en) * 2019-02-05 2020-08-13 Wisconsin Alumni Research Foundation Computer vision-based feeding monitoring and method therefor
CN117456472B (en) * 2023-12-25 2024-04-23 北京市农林科学院信息技术研究中心 Herbivore feed intake monitoring method and device, electronic equipment and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131973A (en) * 2020-09-07 2020-12-25 北京海益同展信息科技有限公司 Feed processing supervision method, system, equipment and storage medium
CN112131973B (en) * 2020-09-07 2023-11-07 京东科技信息技术有限公司 Feed processing supervision method, system, equipment and storage medium
WO2022174228A1 (en) * 2021-02-10 2022-08-18 Can Technologies, Inc. Feedbunk volume estimation via image segmentation

Also Published As

Publication number Publication date
EP3920691A4 (en) 2022-10-26
EP3920691A1 (en) 2021-12-15
US20220287276A1 (en) 2022-09-15
BR112021013111A2 (en) 2021-09-21
US20240224947A1 (en) 2024-07-11
US11937580B2 (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US11937580B2 (en) Computer vision-based feeding monitoring and method therefor
CN108875647B (en) Moving track monitoring method and system based on livestock identity
CN105100724B (en) A kind of smart home telesecurity monitoring method of view-based access control model analysis
CN111025969B (en) Wild animal monitoring system and method based on information fusion
KR20200057405A (en) Cow's Signs Detection System based on the Internet of Things
CN115457468A (en) Intelligent livestock monitoring method and system for large grassland
CN112068464A (en) Bird repelling device and method based on active detection and visual recognition
CN112183487A (en) Livestock health monitoring system and method based on 5G
KR102268040B1 (en) Apparatus and method for managing livestock using machine learning
CN112766587B (en) Logistics order processing method, device, computer equipment and storage medium
CN117250994B (en) Method and system for tracking insect migration track based on unmanned aerial vehicle
CN114326648A (en) Remote cooperative electrolysis control method and system
US20240099265A1 (en) Device and method for the automated identification of a pig that is ready for onward transfer
US20230081930A1 (en) Data collection device, data collection method, and data collection program
US11978247B2 (en) Adversarial masks for scene-customized false detection removal
US20220335725A1 (en) Monitoring presence or absence of an object using local region matching
JP7107423B1 (en) Power consumption prediction device, power consumption prediction method, and power consumption prediction program
CN113759210B (en) Power distribution room state monitoring system and power distribution room monitoring data transmission method
CN111291597B (en) Crowd situation analysis method, device, equipment and system based on image
CN115345305A (en) Inference system, method, device and related equipment
Park et al. Deep learning-based method for detecting anomalies of operating equipment dynamically in livestock farms
CN113613164A (en) Property positioning method, device and system based on Bluetooth and image
CN111627060A (en) Data processing method and system for animal motion information statistics
CN111311637A (en) Alarm event processing method and device, storage medium and electronic device
Gravemeier et al. Conceptualizing a holistic smart dairy farming system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20751980; Country of ref document: EP; Kind code of ref document: A1)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112021013111; Country of ref document: BR)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2020751980; Country of ref document: EP; Effective date: 20210906)
ENP Entry into the national phase (Ref document number: 112021013111; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20210701)