CN110796043B - Container detection and feeding detection method and device and feeding system - Google Patents

Container detection and feeding detection method and device and feeding system

Info

Publication number
CN110796043B
CN110796043B (application CN201910985342.6A)
Authority
CN
China
Prior art keywords
image
container
detected
trough
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910985342.6A
Other languages
Chinese (zh)
Other versions
CN110796043A (en)
Inventor
Zhang Weiming (张为明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Shuke Haiyi Information Technology Co Ltd
Jingdong Technology Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd filed Critical Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN201910985342.6A priority Critical patent/CN110796043B/en
Publication of CN110796043A publication Critical patent/CN110796043A/en
Application granted granted Critical
Publication of CN110796043B publication Critical patent/CN110796043B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K29/00 Other apparatus for animal husbandry
    • A01K29/005 Monitoring or measuring activity, e.g. detecting heat or mating
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K39/00 Feeding or drinking appliances for poultry or other birds
    • A01K39/01 Feeding devices, e.g. chainfeeders
    • A01K39/012 Feeding devices, e.g. chainfeeders filling automatically, e.g. by gravity from a reserve
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K45/00 Other aviculture appliances, e.g. devices for determining whether a bird is about to lay
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K5/00 Feeding devices for stock or game; Feeding wagons; Feeding stacks
    • A01K5/02 Automatic devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image

Abstract

The application relates to a container detection method, a feeding detection method, corresponding devices, and a feeding system. The container detection method comprises the following steps: acquiring a first image to be detected of a target container; identifying target object information in the first image to be detected according to a pre-trained container detection model; and generating a label corresponding to the target container according to the target object information. With this technical scheme, the target object in the container does not need to be monitored and replenished manually, which improves the efficiency and accuracy of monitoring and replenishing the target object in the container, avoids waste of the target object, and reduces labor cost.

Description

Container detection and feeding detection method and device and feeding system
Technical Field
The application relates to the field of image processing, and in particular to a container detection method, a feeding detection method, corresponding devices, and a feeding system.
Background
At present, animal feeding in China still largely follows traditional practice: feed on many farms is delivered manually, the amount of feed is judged by the breeder's experience, and there is no unified feeding standard, which frequently leads to high input and low output. Because manual feeding relies on experience and is not uniform, feed is often seriously wasted, making intelligent deployment particularly important.
Most existing intelligent feeding systems are hardware-integrated, which makes them costly and prone to low reliability, poor real-time performance, and high failure rates; they can also trigger stress behavior in animals and impair their healthy growth.
Disclosure of Invention
In order to solve, or at least partially solve, the above technical problems, the application provides a container detection method, a feeding detection method, corresponding devices, and a feeding system.
In a first aspect, the present application provides a container detection method, comprising:
acquiring a first image to be detected of a target container;
identifying target object information in the first image to be detected according to a pre-trained container detection model;
and generating a label corresponding to the target container according to the target object information.
Optionally, identifying the target object information in the first image to be detected according to the pre-trained container detection model includes:
identifying the remaining amount of the target object in the target container in the first image to be detected according to the container detection model;
and determining the target object information according to the remaining amount.
Optionally, identifying the target object information in the first image to be detected according to the pre-trained container detection model further includes:
identifying the occlusion rate of the target container in the first image to be detected according to the container detection model;
and determining the target object information according to the remaining amount and the occlusion rate.
Optionally, the method further includes:
generating a dispensing instruction according to the label, wherein the dispensing instruction is used for controlling a dispensing device to dispense the target object corresponding to the target object information into the target container;
and sending the dispensing instruction to the dispensing device.
Optionally, generating the dispensing instruction according to the label includes:
acquiring a second image to be detected at the target container;
identifying the number of preset objects in the second image to be detected according to a pre-trained object detection model;
and generating the dispensing instruction according to the label and the number of objects.
Optionally, the method further includes:
acquiring a target container sample image;
acquiring first marking information corresponding to the target container sample image, wherein the first marking information comprises a first marking frame framing the target container in the target container sample image and target object information in the target container;
and training the target container sample image and the first marking information by adopting a first neural network, and determining the target object information in the target container according to the image characteristics of the target container sample image to obtain the container detection model.
Optionally, the target object information includes: a remaining amount of the target object and/or an occlusion rate of the target container.
Optionally, the method further includes:
acquiring a preset object sample image;
acquiring second labeling information corresponding to the preset object sample image, wherein the second labeling information comprises a third labeling frame framing a preset object in the preset object sample image and the category of the preset object;
and training the preset object sample image and the second labeling information by adopting a second neural network, and determining the number of preset objects according to the number of third labeling frames matching the preset categories, to obtain the object detection model.
In a second aspect, the present application provides a feeding detection method comprising:
acquiring a first image to be detected of the trough;
identifying feed information in the first image to be detected according to a pre-trained trough detection model;
and generating a label corresponding to the trough according to the feed information.
Optionally, identifying the feed information in the first image to be detected according to the pre-trained trough detection model includes:
identifying the remaining amount of the feed in the trough in the first image to be detected according to the trough detection model;
and determining the feed information according to the remaining amount.
Optionally, identifying the feed information in the first image to be detected according to the pre-trained trough detection model further includes:
identifying the occlusion rate of the trough in the first image to be detected according to the trough detection model;
and determining the feed information according to the remaining amount and the occlusion rate.
Optionally, the method further includes:
generating a feeding instruction according to the label, wherein the feeding instruction is used for controlling a feeding device to dispense the feed corresponding to the feed information into the trough;
and sending the feeding instruction to the feeding device.
Optionally, generating the feeding instruction according to the label includes:
acquiring a second image to be detected at the trough;
identifying the number of animals in the second image to be detected according to a pre-trained animal detection model;
and generating the feeding instruction according to the label and the number of animals.
Optionally, the identifying the number of animals in the second image to be detected according to a pre-trained animal detection model includes:
identifying the head direction of the animal in the second image to be detected according to the animal detection model;
determining the number of animals with the head oriented towards the trough.
In a third aspect, the present application provides a container detection device, comprising:
the acquisition module is used for acquiring a first image to be detected of the target container;
the identification module is used for identifying the target object information in the first image to be detected according to a pre-trained container detection model;
and the generating module is used for generating a label corresponding to the target container according to the target object information.
In a fourth aspect, the present application provides a feeding detection device, comprising:
the acquisition module is used for acquiring a first image to be detected of the trough;
the identification module is used for identifying the feed information in the first image to be detected according to a pre-trained trough detection model;
and the generating module is used for generating a label corresponding to the trough according to the feed information.
In a fifth aspect, the present application provides a feeding system, comprising: a shooting device, a feeding detection device, and a feeding device;
the shooting device is used for shooting the trough to obtain a first image to be detected;
the feeding detection device is used for acquiring the first image to be detected; identifying feed information in the first image to be detected according to a pre-trained trough detection model; generating a label corresponding to the trough according to the feed information; generating a feeding instruction according to the label; and sending the feeding instruction to the feeding device;
and the feeding device is used for dispensing the feed corresponding to the remaining amount into the trough.
Optionally, the shooting device is further configured to shoot a second image to be detected at the trough;
the feeding detection device is further configured to identify the number of animals in the second image to be detected according to a pre-trained animal detection model, and to generate the feeding instruction according to the label and the number of animals.
In a sixth aspect, the present application provides an electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the above method steps when executing the computer program.
In a seventh aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the above-mentioned method steps.
Compared with the prior art, the technical scheme provided by the embodiments of the application has the following advantages: image recognition is performed on the container, the target object information in the container is determined, and a label corresponding to the target container is generated according to the target object information, so that whether to dispense the target object into the container can be determined based on the label. In this way, the target object in the container does not need to be monitored and replenished manually, which improves the efficiency and accuracy of monitoring and replenishing the target object in the container, avoids waste of the target object, and reduces labor cost.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; other drawings can be derived from these drawings by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of a container detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a target object in a target container provided in an embodiment of the present application;
FIG. 3 is a schematic illustration of a target in a target container provided in accordance with another embodiment of the present application;
FIG. 4 is a flow chart of a container detection method according to another embodiment of the present application;
FIG. 5 is a flow chart of a feeding detection method provided in an embodiment of the present application;
FIG. 6 is a flow chart of a feeding detection method provided in another embodiment of the present application;
FIG. 7 is a flow chart of a feeding detection method provided in another embodiment of the present application;
fig. 8 is a block diagram of a container detection apparatus according to an embodiment of the present disclosure;
FIG. 9 is a block diagram of a feeding detection device provided in an embodiment of the present application;
FIG. 10 is a block diagram of a feeding system provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The method is based on computer vision technology and analyzes the eating state of the animals by monitoring the remaining feed in the trough, so that feed is dispensed scientifically and appropriately.
First, a container detection method according to an embodiment of the present invention will be described.
Fig. 1 is a flowchart of a container detection method according to an embodiment of the present disclosure. As shown in fig. 1, the method comprises the steps of:
step S11, acquiring a first image to be detected of the target container;
step S12, identifying the target object information in the first image to be detected according to the pre-trained container detection model;
in step S13, a label corresponding to the target container is generated based on the target object information.
In this embodiment, image recognition is performed on the container, information of the target object in the container is determined, a tag corresponding to the target container is generated according to the information of the target object, and then whether to put the target object into the container may be determined based on the tag. Therefore, the target objects in the container do not need to be monitored and added manually, the efficiency and the accuracy of monitoring and adding the target objects in the container are improved, the waste of the target objects is avoided, and the labor cost is reduced.
Wherein, step S12 includes: identifying the remaining amount of the target object in the target container in the first image to be detected according to the container detection model; and determining the target object information according to the remaining amount.
For example, as shown in fig. 2, when a first image to be detected is captured from directly above the target container, the container detection model can identify the area ratio of the target object 21 in the target container 20 in the first image to be detected; as shown in fig. 3, when the first image to be detected is captured from the side of the target container, the container detection model can identify the height ratio of the target object 21 in the target container 20 in the first image to be detected. By the area ratio or the height ratio, the remaining amount of the target object can be determined.
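As an illustration of how these ratios might be computed from detection outputs, the following minimal Python sketch estimates the remaining amount from bounding boxes. All names and the box format are assumptions made for illustration; the patent does not specify an implementation.

def box_area(box):
    # box is (x1, y1, x2, y2) in pixels
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def remaining_ratio_top_view(target_box, container_box):
    # Area ratio of target object to container, for an image shot
    # from directly above the container (fig. 2).
    return box_area(target_box) / box_area(container_box)

def remaining_ratio_side_view(target_box, container_box):
    # Height ratio of target object to container, for an image shot
    # from the side of the container (fig. 3).
    return (target_box[3] - target_box[1]) / (container_box[3] - container_box[1])

# Example: the target object fills about 40% of the container seen from above.
print(remaining_ratio_top_view((120, 80, 360, 240), (100, 60, 500, 300)))  # 0.4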
In another alternative embodiment, step S12 includes: identifying the occlusion rate of the target container in the first image to be detected according to the container detection model; and determining the target object information according to the remaining amount and the occlusion rate.
For example, when the target container is a trough from which an animal feeds and the target object is feed, the feed may not be accurately recognized from the captured first image to be detected, because the animal may block the trough. Therefore, in order to determine the state of the feed in the trough more accurately, the occlusion rate of the trough by the animal must be considered in addition to the remaining amount recognized from the image.
Fig. 4 is a flowchart of a container detection method according to another embodiment of the present disclosure. In another alternative embodiment, as shown in fig. 4, the method further comprises:
step S14, generating a dispensing instruction according to the label, wherein the dispensing instruction is used for controlling a dispensing device to dispense the target object corresponding to the target object information into the target container;
step S15, sending the dispensing instruction to the dispensing device.
In this embodiment, whether to dispense the target object into the container is determined based on the label, so the target object does not need to be added to the container manually and can be added to the target container automatically according to a uniform standard. This not only improves the efficiency and accuracy of adding the target object to the container, but also avoids waste of the target object and reduces labor cost.
In another alternative embodiment, step S14, generating the dispensing instruction according to the label, includes:
acquiring a second image to be detected at the target container;
identifying the number of preset objects in the second image to be detected according to a pre-trained object detection model;
and generating the dispensing instruction according to the label and the number of objects.
In this embodiment, the amount of the target object added to the target container also depends on the number of preset objects. For example, when the target container is a trough for feeding animals, the target object is feed, and the preset object is an animal, the more animals around the trough, the greater the amount of feed to be added. Therefore, in order to determine more accurately the amount of the target object dispensed into the target container, the number of preset objects at the target container is further identified based on the image.
In another optional embodiment, the method further comprises a training process of the container detection model, which is as follows:
step A1, acquiring a target container sample image;
step A2, acquiring first labeling information corresponding to a target container sample image, wherein the first labeling information comprises a first labeling frame for framing a container in the target container sample image and target object information of a target object in the target container;
step A3, training the target container sample image and the first labeling information by using a first neural network, and determining target object information of a target object in the target container according to the image characteristics of the target container sample image to obtain a container detection model.
In another alternative embodiment, the target object information includes: the remaining amount of the target object and/or the occlusion rate of the target container.
Wherein the first neural network may be the following target detection algorithm: YOLOv1, YOLOv2, YOLOv3, R-CNN, Fast R-CNN, SPP-net, Faster R-CNN, R-FCN and SSD, etc., or a target detection algorithm using a lightweight network such as MobileNet as a backbone network, such as MobileNet-YOLOv1, MobileNet-YOLOv2, MobileNet-YOLOv3, etc.
The following describes the training process of the container detection model in detail, taking the first neural network as MobileNet-YOLOv3 as an example.
(1) Inputting the target container sample image and the marked first marking information into a MobileNet-YOLOv3 network;
(2) the MobileNet convolutional backbone of MobileNet-YOLOv3 divides the picture into 13 × 13, 26 × 26 and 52 × 52 grids at different convolution layers;
(3) each grid cell has 3 prior boxes of different sizes, responsible for predicting objects of different shapes and sizes, and each prior box predicts one bounding box, i.e. each grid cell predicts 3 bounding boxes.
The prior box sizes also differ across the 13 × 13, 26 × 26 and 52 × 52 grids, which are used to predict large, medium and small targets respectively;
(4) computing, for each bounding box, its center point coordinates, width and height, confidence, and category (more food, less food, no food, abnormal);
(5) calculating a loss function from the information in step (4) and back-propagating it continuously to optimize the network until it converges, yielding the container detection model.
In practical application, the first image to be detected is input into the container detection model, which outputs the center point coordinates, width and height, confidence, and category information of the container, where the category information includes the proportion or remaining amount of the target object in the target container. A detection result is considered valid when its confidence is greater than 0.5.
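In outline, that inference step might look like the sketch below. Only the 0.5 confidence threshold and the four categories come from the text; the model callable and its assumed output layout [cx, cy, w, h, confidence, class_id] are illustrative.

import numpy as np

CONF_THRESHOLD = 0.5
CATEGORIES = ["more food", "less food", "no food", "abnormal"]

def detect_container(model, image):
    # Run the trained container detection model on one image and keep
    # only detections whose confidence exceeds 0.5, as the text specifies.
    # The row layout [cx, cy, w, h, confidence, class_id] is an assumption.
    detections = np.asarray(model(image))
    results = []
    for cx, cy, w, h, conf, cls in detections:
        if conf > CONF_THRESHOLD:  # detection considered valid
            results.append({
                "center": (float(cx), float(cy)),
                "size": (float(w), float(h)),
                "confidence": float(conf),
                "category": CATEGORIES[int(cls)],
            })
    return results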
In this embodiment, once the container detection model has been trained, it can be applied to detect and identify container images and determine the remaining amount of the target object in the container, so the target object does not need to be monitored manually, which improves the efficiency and accuracy of monitoring the target object in the container and reduces labor cost.
In another optional embodiment, the method further comprises a training process of the object detection model, which is as follows:
step B1, acquiring a preset object sample image;
step B2, acquiring second labeling information corresponding to the preset object sample image, wherein the second labeling information comprises a third labeling frame framing the preset object in the preset object sample image and the category of the preset object;
and step B3, training the preset object sample image and the second labeling information by adopting a second neural network, and determining the number of preset objects according to the image characteristics of the preset object sample image to obtain an object detection model.
Wherein the second neural network may be one of the following target detection algorithms: YOLOv1, YOLOv2, YOLOv3, R-CNN, Fast R-CNN, SPP-net, Faster R-CNN, R-FCN, SSD, etc., or a target detection algorithm using a lightweight network such as MobileNet as the backbone, such as MobileNet-YOLOv1, MobileNet-YOLOv2 and MobileNet-YOLOv3, in which the Darknet backbone of YOLO is replaced by MobileNet, improving network speed while maintaining accuracy.
The following describes the training process of the object detection model in detail by taking the second neural network as MobileNet-YOLOv2 as an example.
(1) Inputting the preset object sample image and the second annotation information into a MobileNet-YOLOv2 network;
(2) the MobileNet convolutional backbone of MobileNet-YOLOv2 divides the picture into a 13 × 13 grid, and each grid cell is responsible for predicting objects whose centers fall into that cell;
(3) each grid cell has 5 prior boxes of different sizes, responsible for predicting objects of different shapes and sizes, and each prior box predicts one bounding box, i.e. each grid cell predicts 5 bounding boxes;
(4) computing, for each bounding box, its center point coordinates, width and height, confidence, and category (whether it belongs to a preset category, or the category name, etc.);
(5) calculating a loss function from the information in step (4) and back-propagating it continuously to optimize the network until it converges, yielding the object detection model.
In practical application, the second image to be detected is input into the object detection model, which outputs the bounding box information, confidence and number of the objects. A detection result is considered valid when its confidence is greater than 0.5.
In this embodiment, by training the object detection model, the number of preset objects in an image can be identified quickly and accurately using the model, without manual monitoring of the preset objects, which improves monitoring efficiency and accuracy and reduces labor cost.
The container detection method can be applied to the field of animal feeding, in particular to the feeding of poultry and livestock.
Fig. 5 is a flowchart of a feeding detection method provided in an embodiment of the present application. As shown in fig. 5, the method comprises the following steps:
step S21, acquiring a first image to be detected of the trough;
step S22, identifying feed information in the first image to be detected according to a pre-trained trough detection model;
and step S23, generating a label corresponding to the trough according to the feed information.
In this embodiment, image recognition is performed on the trough to determine the feed information in the trough, and a label corresponding to the trough is generated from the feed information; whether to dispense feed into the trough can subsequently be determined based on the label. In this way, the feed in the trough does not need to be monitored manually: the remaining feed and the animals' eating condition in the trough are monitored in real time, which improves feeding efficiency and accuracy, avoids feed waste, and reduces labor cost.
Wherein, step S22 includes:
identifying the remaining amount of the feed in the trough in the first image to be detected according to the trough detection model;
and determining the feed information according to the remaining amount.
For example, when the first image to be detected is photographed from directly above the trough, the trough detection model can identify the ratio of the area occupied by the feed in the trough in the first image to be detected. From this area ratio, the remaining amount of feed can be determined, so that a label corresponding to the trough can be generated from the remaining amount. If the feed occupies more than 1/2 of the trough area, the trough detection model outputs "more food"; if the feed occupies more than 1/3 but less than 1/2 of the trough area, the model outputs "less food".
In an alternative embodiment, the actual remaining amount of feed may not be accurately identified from the captured first image to be detected, since a feeding animal may occlude the trough. Therefore, in order to determine the state of the feed in the trough more accurately, the occlusion rate of the trough by the animal must be considered in addition to the remaining amount recognized from the image.
Step S22 further includes: identifying the occlusion rate of the trough in the first image to be detected according to the trough detection model; and determining the feed information according to the remaining amount and the occlusion rate.
For example, when the feed occupies less than 1/3 of the trough area and the animal occludes less than 1/3 of it, the trough detection model outputs "no food"; if the animal covers more than 1/3 of the trough or the trough cannot be detected, the trough detection model outputs "abnormal".
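These thresholds can be illustrated with a short sketch. The 1/2 and 1/3 values come from the text; the function itself is a hypothetical rendering of the mapping, not the trained model's actual output head.

def classify_trough(feed_ratio, occlusion_ratio, trough_detected=True):
    # Map the feed area ratio and the animal occlusion ratio to the four
    # categories used by the trough detection model.
    if not trough_detected or occlusion_ratio > 1 / 3:
        return "abnormal"      # trough mostly covered or not found
    if feed_ratio > 1 / 2:
        return "more food"
    if feed_ratio > 1 / 3:
        return "less food"
    return "no food"           # feed < 1/3 and occlusion < 1/3

print(classify_trough(0.45, 0.1))  # -> less food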
Fig. 6 is a flow chart of a feeding detection method provided in another embodiment of the present application. As shown in fig. 6, the method further includes:
step S24, generating a feeding instruction according to the label, wherein the feeding instruction is used for controlling the feeding device to dispense the feed corresponding to the feed information into the trough;
step S25, sending the feeding instruction to the feeding device.
In this embodiment, whether to dispense feed into the trough is determined based on the label, so feed can be added in time without manual intervention, which not only improves feeding efficiency and accuracy but also avoids feed waste and reduces labor cost.
Fig. 7 is a flow chart of a feeding detection method provided in another embodiment of the present application. As shown in fig. 7, step S24 includes:
step S31, acquiring a second image to be detected at the trough;
step S32, identifying the number of animals in the second image to be detected according to the pre-trained animal detection model;
and step S33, generating the feeding instruction according to the label and the number of animals.
In this embodiment, the amount of feed added to the trough also depends on the number of animals: the more animals around the trough, the greater the amount of feed to be added. Thus, to determine more accurately the amount of feed dispensed into the trough, the number of animals at the trough is further identified based on the image.
In an alternative embodiment, step S32 includes:
identifying the head direction of the animal in the second image to be detected according to the animal detection model;
the number of animals with their heads oriented towards the trough is determined.
In this embodiment, the number of animals is further determined according to the head direction of the animals. Although there may be several animals around the trough, an animal that is not facing the trough head-on may not need to be fed. Thus, only the animals whose heads face the trough, i.e. only the animals that need to be fed, are counted. After the second image to be detected is input into the animal detection model, the model outputs bounding box information for each animal's head and tail (or shoulders and tail), from which the number of animals with their heads facing the trough is obtained. This further improves feeding accuracy, allows feed to be added for the animals in time, and avoids feed waste.
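One plausible geometric reading of "head facing the trough" is that an animal's head box lies closer to the trough than its tail box. The sketch below counts animals on that assumption; it is illustrative only, as the patent does not state the exact rule.

def squared_distance(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def count_feeding_animals(animals, trough_center):
    # animals: list of dicts with 'head' and 'tail' box centers as (x, y).
    # An animal is counted when its head is closer to the trough than its tail.
    return sum(
        1 for a in animals
        if squared_distance(a["head"], trough_center)
        < squared_distance(a["tail"], trough_center)
    )

pigs = [{"head": (5, 2), "tail": (9, 2)}, {"head": (9, 6), "tail": (5, 6)}]
print(count_feeding_animals(pigs, trough_center=(0, 4)))  # -> 1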
The above method will be described in detail below, taking pig feeding as an example.
Whether to dispense feed is determined based on the remaining feed in the trough and the number of pigs, as shown in Table 1 below.
TABLE 1
Trough detection result | Animal detection result | Feed dispensed
more food or less food | any | none
no food | 0 pigs | none
no food | N >= 1 pigs | wN
abnormal | N > 1 pigs | wN
abnormal | 1 pig lying across the trough | none
If the trough detection model returns "more food" or "less food", no feed is dispensed.
If the trough detection model returns "no food" and the animal detection model returns no pigs, no feed is dispensed.
If the trough detection model returns "no food" and the animal detection model returns one or more pigs, feed is dispensed. If the amount of feed per pig is w and the number of pigs is N, the amount of feed dispensed into the trough is wN.
If the trough detection model returns "abnormal" and the animal detection model returns more than one pig, feed is dispensed, with the amount dispensed into the trough being wN.
If the trough detection model returns "abnormal" and the animal detection model returns one pig lying across the trough, no feed is dispensed.
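These dispensing rules reduce to a short decision function, sketched below following Table 1. The function name and signature are illustrative, and any case the text leaves unspecified defaults to dispensing nothing.

def feed_amount(trough_state, n_pigs, w, pig_lying_in_trough=False):
    # w is the feed amount per pig; returns the total amount to dispense.
    if trough_state in ("more food", "less food"):
        return 0                           # enough feed remains
    if trough_state == "no food":
        return w * n_pigs if n_pigs >= 1 else 0
    if trough_state == "abnormal":
        if n_pigs == 1 and pig_lying_in_trough:
            return 0                       # a pig playing in the trough
        return w * n_pigs if n_pigs > 1 else 0
    return 0                               # unspecified cases: dispense nothing

print(feed_amount("no food", 3, w=0.5))    # -> 1.5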
In this embodiment, feed is dispensed precisely based on the remaining feed in the trough and the number of pigs around it, realizing automatic and timely feeding, improving the quality and efficiency of pig feeding, avoiding feed waste, and saving labor and financial cost.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application.
Fig. 8 is a block diagram of a container detection apparatus provided in an embodiment of the present application, which may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 8, the container detection apparatus includes:
an obtaining module 41, configured to obtain a first to-be-detected image of a target container;
the identification module 42 is configured to identify target object information of a target object in the first image to be detected according to a pre-trained container detection model;
and a generating module 43, configured to generate a label corresponding to the target container according to the target object information.
Fig. 9 is a block diagram of a feeding detection device provided in an embodiment of the present application, and the feeding detection device may be implemented as part or all of an electronic device through software, hardware, or a combination of the software and the hardware. As shown in fig. 9, the feeding detection device includes:
an obtaining module 51, configured to obtain a first image to be detected of a trough;
the identification module 52 is configured to identify feed information in the first image to be detected according to a pre-trained trough detection model;
and the generating module 53 is configured to generate a label corresponding to the trough according to the feed information.
Fig. 10 is a block diagram of a feeding system provided in an embodiment of the present application. As shown in fig. 10, the system includes: a shooting device 61, a feeding detection device 62 and a feeding device 63;
the shooting device 61 is used for shooting the trough to obtain a first image to be detected;
the feeding detection device 62 is used for acquiring the first image to be detected; identifying feed information in the first image to be detected according to a pre-trained trough detection model; generating a label corresponding to the trough according to the feed information; generating a feeding instruction according to the label; and sending the feeding instruction to the feeding device 63;
and the feeding device 63 is used for dispensing the feed corresponding to the remaining amount into the trough.
In another embodiment, the shooting device 61 is further configured to shoot a second image to be detected at the trough; the feeding detection device 62 is further configured to identify the number of animals in the second image to be detected according to a pre-trained animal detection model, and to generate the feeding instruction according to the label and the number of animals.
Wherein, the shooting device 61 can be arranged above the trough. The feeding detection device 62 may be located locally or at a cloud server. The shooting device 61, the feeding detection device 62 and the feeding device 63 can communicate with each other in a wired or wireless manner.
An embodiment of the present application further provides an electronic device, as shown in fig. 11, the electronic device may include: the system comprises a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 complete communication with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501, when executing the computer program stored in the memory 1503, implements the steps of the method embodiments described above.
The communication bus mentioned for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method embodiments described above.
It should be noted that, for the above-mentioned apparatus, electronic device and computer-readable storage medium embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It is further noted that, herein, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

1. A container detection method, comprising:
acquiring a first image to be detected of a target container;
identifying target object information in the first image to be detected according to a pre-trained container detection model;
generating a label corresponding to the target container according to the target object information;
wherein identifying the target object information in the first image to be detected according to the pre-trained container detection model comprises:
identifying the remaining amount of the target object in the target container in the first image to be detected according to the container detection model;
determining the target object information according to the remaining amount;
and wherein identifying the target object information in the first image to be detected according to the pre-trained container detection model further comprises:
identifying the occlusion rate of the target container in the first image to be detected according to the container detection model;
and determining the target object information according to the remaining amount and the occlusion rate.
2. The method of claim 1, further comprising:
generating a dispensing instruction according to the label, wherein the dispensing instruction is used for controlling a dispensing device to dispense the target object corresponding to the target object information into the target container;
and sending the dispensing instruction to the dispensing device.
3. The method of claim 2, wherein generating the dispensing instruction according to the label comprises:
acquiring a second image to be detected at the target container;
identifying the number of preset objects in the second image to be detected according to a pre-trained object detection model;
and generating the dispensing instruction according to the label and the number of objects.
4. The method of claim 1, further comprising:
acquiring a target container sample image;
acquiring first marking information corresponding to the target container sample image, wherein the first marking information comprises a first marking frame framing the target container in the target container sample image and target object information in the target container;
and training the target container sample image and the first marking information by adopting a first neural network, and determining the target object information in the target container according to the image characteristics of the target container sample image to obtain the container detection model.
5. The method of claim 4, wherein the target object information comprises: a remaining amount of the target object and/or an occlusion rate of the target container.
6. The method of claim 3, further comprising:
acquiring a preset object sample image;
acquiring second labeling information corresponding to the preset object sample image, wherein the second labeling information comprises a third labeling frame framing a preset object in the preset object sample image and the category of the preset object;
and training the preset object sample image and the second labeling information by adopting a second neural network, and determining the number of preset objects according to the number of third labeling frames matching the preset categories, to obtain the object detection model.
7. A feeding detection method, characterized by comprising:
acquiring a first image to be detected of the trough;
identifying feed information in the first image to be detected according to a pre-trained trough detection model;
generating a label corresponding to the trough according to the feed information;
wherein identifying the feed information in the first image to be detected according to the pre-trained trough detection model comprises:
identifying the remaining amount of the feed in the trough in the first image to be detected according to the trough detection model;
determining the feed information according to the remaining amount;
and wherein identifying the feed information in the first image to be detected according to the pre-trained trough detection model further comprises:
identifying the occlusion rate of the trough in the first image to be detected according to the trough detection model;
and determining the feed information according to the remaining amount and the occlusion rate.
8. The method of claim 7, further comprising:
generating a feeding instruction according to the label, wherein the feeding instruction is used for controlling a feeding device to dispense the feed corresponding to the feed information into the trough;
and sending the feeding instruction to the feeding device.
9. The method of claim 8, wherein generating the feeding instruction according to the label comprises:
acquiring a second image to be detected at the trough;
identifying the number of animals in the second image to be detected according to a pre-trained animal detection model;
and generating the feeding instruction according to the label and the number of animals.
10. The method of claim 9, wherein the identifying the number of animals in the second image under test according to a pre-trained animal detection model comprises:
identifying the head direction of the animal in the second image to be detected according to the animal detection model;
determining the number of animals with the head oriented towards the trough.
11. A container detection apparatus, comprising:
the acquisition module is used for acquiring a first image to be detected of the target container;
the identification module is used for identifying the target object information in the first image to be detected according to a pre-trained container detection model;
the generating module is used for generating a label corresponding to the target container according to the target object information;
the identification module is further used for identifying the remaining amount of the target object in the target container in the first image to be detected and the occlusion rate of the target container according to the container detection model; and determining the target object information according to the remaining amount and the occlusion rate.
12. A feeding detection device, which is characterized by comprising:
the acquisition module is used for acquiring a first image to be detected of the trough;
the identification module is used for identifying the feed information in the first image to be detected according to a pre-trained trough detection model;
the generating module is used for generating a label corresponding to the trough according to the feed information;
the identification module is further used for identifying the remaining amount of the feed in the trough in the first image to be detected and the occlusion rate of the trough according to the trough detection model; and determining the feed information according to the remaining amount and the occlusion rate.
13. A feeding system, comprising: a shooting device, a feeding detection device, and a feeding device;
the shooting device is used for shooting the trough to obtain a first image to be detected;
the feeding detection device is used for acquiring the first image to be detected; identifying the remaining amount of the feed in the trough in the first image to be detected and the occlusion rate of the trough according to a pre-trained trough detection model; determining the feed information according to the remaining amount and the occlusion rate; generating a label corresponding to the trough according to the feed information; generating a feeding instruction according to the label; and sending the feeding instruction to the feeding device;
and the feeding device is used for dispensing the feed corresponding to the remaining amount into the trough.
14. The system of claim 13,
the shooting device is also used for shooting a second image to be detected at the trough;
the feeding detection device is further used for identifying the number of animals in the second image to be detected according to a pre-trained animal detection model; and generating the feeding instruction according to the label and the number of animals.
15. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor, when executing the computer program, implementing the method steps of any of claims 1-10.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 10.
CN201910985342.6A 2019-10-16 2019-10-16 Container detection and feeding detection method and device and feeding system Active CN110796043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910985342.6A CN110796043B (en) 2019-10-16 2019-10-16 Container detection and feeding detection method and device and feeding system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910985342.6A CN110796043B (en) 2019-10-16 2019-10-16 Container detection and feeding detection method and device and feeding system

Publications (2)

Publication Number Publication Date
CN110796043A CN110796043A (en) 2020-02-14
CN110796043B (en) 2021-04-30

Family

ID=69440351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910985342.6A Active CN110796043B (en) 2019-10-16 2019-10-16 Container detection and feeding detection method and device and feeding system

Country Status (1)

Country Link
CN (1) CN110796043B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111264405A (en) * 2020-02-19 2020-06-12 北京海益同展信息科技有限公司 Feeding method, system, device, equipment and computer readable storage medium
CN111382739A (en) * 2020-03-03 2020-07-07 北京海益同展信息科技有限公司 Method, apparatus, system and computer-readable storage medium for feeding foodstuff
CN111406662B (en) * 2020-03-12 2022-01-28 中国地质大学(武汉) Automatic detection system and method for feed quantity of nursery pig feeder based on machine vision
US20220022417A1 (en) * 2020-07-24 2022-01-27 Go Dogo ApS Automatic detection of treat release and jamming with conditional activation of anti-jamming in an autonomous pet interaction device
CN112741012B (en) * 2020-12-31 2022-07-15 水利部牧区水利科学研究所 Pasturing area livestock water supply system based on automatic identification
CN112861734A (en) * 2021-02-10 2021-05-28 北京农业信息技术研究中心 Trough food residue monitoring method and system
CN113331083B (en) * 2021-07-12 2023-02-03 四川省畜牧科学研究院 Individual quantitative feeding system of poultry
CN114041429A (en) * 2022-01-14 2022-02-15 北京探感科技股份有限公司 Method and system for controlling feeding of feeding container through visual recognition
CN114092687A (en) * 2022-01-19 2022-02-25 北京探感科技股份有限公司 Animal feeding device and method based on visual identification and readable storage medium
CN114586691B (en) * 2022-02-28 2023-07-07 珠海一微半导体股份有限公司 Feeding method of intelligent feeding robot, intelligent feeding robot and system
CN116076382B (en) * 2023-02-23 2024-03-22 江苏华丽智能科技股份有限公司 Automatic induction-based feeding regulation control method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105432486A (en) * 2015-11-03 2016-03-30 内蒙古农业大学 Feeding detection system and feeding detection method thereof
CN107820616A (en) * 2015-07-01 2018-03-20 维京遗传学Fmba System and method for identifying individual animals based on back image
CN108236786A (en) * 2016-12-23 2018-07-03 深圳光启合众科技有限公司 The virtual feeding method and machine animal of machine animal
CN109618961A (en) * 2018-12-12 2019-04-16 北京京东金融科技控股有限公司 A kind of intelligence of domestic animal feeds system and method
CN110263685A (en) * 2019-06-06 2019-09-20 北京迈格威科技有限公司 A kind of animal feeding method and device based on video monitoring

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107820616A (en) * 2015-07-01 2018-03-20 维京遗传学Fmba System and method for identifying individual animals based on back image
CN105432486A (en) * 2015-11-03 2016-03-30 内蒙古农业大学 Feeding detection system and feeding detection method thereof
CN108236786A (en) * 2016-12-23 2018-07-03 深圳光启合众科技有限公司 The virtual feeding method and machine animal of machine animal
CN109618961A (en) * 2018-12-12 2019-04-16 北京京东金融科技控股有限公司 A kind of intelligence of domestic animal feeds system and method
CN110263685A (en) * 2019-06-06 2019-09-20 北京迈格威科技有限公司 A kind of animal feeding method and device based on video monitoring

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of an Intelligent Sow Feeding System Based on an Expert System; Zhang Liang; Wanfang Dissertation Database; 20160603; pp. 1-58 *
Feeding Models for Digital Pig Farms; Ke Ke; Animal Husbandry Abroad (Pigs and Poultry); 20161231; Vol. 36, No. 5; pp. 1-2 *

Also Published As

Publication number Publication date
CN110796043A (en) 2020-02-14

Similar Documents

Publication Publication Date Title
CN110796043B (en) Container detection and feeding detection method and device and feeding system
KR102014353B1 (en) Smart farm livestock management system based on machine learning
CN110222791B (en) Sample labeling information auditing method and device
CN111183917B (en) Animal abnormity monitoring and image processing method and device
CN110991222B (en) Object state monitoring and sow oestrus monitoring method, device and system
CN110741963B (en) Object state monitoring and sow oestrus monitoring method, device and system
CN110296660B (en) Method and device for detecting livestock body ruler
CN109345798B (en) Farm monitoring method, device, equipment and storage medium
CN109658414A (en) A kind of intelligent checking method and device of pig
CN110991220B (en) Egg detection and image processing method and device, electronic equipment and storage medium
US20200196568A1 (en) System and method for controlling animal feed
CN109559342B (en) Method and device for measuring animal body length
CN111297367A (en) Animal state monitoring method and device, electronic equipment and storage medium
CN115294185B (en) Pig weight estimation method and related equipment
CN111325181A (en) State monitoring method and device, electronic equipment and storage medium
JP2021107991A (en) Information processing device, computer program and information processing method
CN111080697A (en) Method, device, computer equipment and storage medium for detecting direction of target object
CN111178381A (en) Poultry egg weight estimation and image processing method and device
CN111126402A (en) Image processing method and device, electronic equipment and storage medium
CN110991235B (en) State monitoring method and device, electronic equipment and storage medium
CN111667450A (en) Ship quantity counting method and device and electronic equipment
KR102372107B1 (en) Image-based sow farrowing notification system
CN112985518B (en) Intelligent temperature and humidity monitoring method and device based on Internet of things
KR102404137B1 (en) Stationary Livestock weight estimation system based on 3D images and Livestock weight estimation method using the same
CN110930360A (en) Egg detection method, egg image processing method, egg detection device, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

CP03 Change of name, title or address

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Patentee before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.
