CN110896871A - Method and device for putting food and intelligent food throwing machine - Google Patents
- Publication number
- CN110896871A (application CN201910945826.8A)
- Authority
- CN
- China
- Prior art keywords
- category
- pet
- food
- detection image
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K5/00—Feeding devices for stock or game ; Feeding wagons; Feeding stacks
- A01K5/02—Automatic devices
- A01K5/0291—Automatic devices with timing mechanisms, e.g. pet feeders
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K5/00—Feeding devices for stock or game ; Feeding wagons; Feeding stacks
- A01K5/02—Automatic devices
- A01K5/0275—Automatic devices with mechanisms for delivery of measured doses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The application relates to a method and a device for dispensing food, and to an intelligent food dispensing machine, in the field of smart home. The method comprises the following steps: acquiring a detection image captured by an image acquisition device; identifying, through a pre-trained pet recognition model, the category of a detection object contained in the detection image; and, when the detection image contains a detection object belonging to a pet category, dispensing target food for that pet category. By adopting the method and device, targeted automatic feeding according to pet category can be realized.
Description
Technical Field
The application relates to the field of smart home, and in particular to a method and a device for dispensing food and to an intelligent food dispensing machine.
Background
As living standards improve, more and more families keep pets. Pets must be fed during their upbringing to keep them healthy. At present, feeding typically means that people add food and water to the pet's food and water bowls at irregular intervals to meet the pet's dietary needs.
However, when no one is at home (for example, while traveling, or when heavy work prevents a timely return), the pet cannot be fed.
Disclosure of Invention
Embodiments of the application aim to provide a method and a device for dispensing food, and an intelligent food dispensing machine, so as to achieve targeted automatic feeding according to pet category. The specific technical solution is as follows:
In a first aspect, a method for dispensing food is provided, the method comprising:
acquiring a detection image captured by an image acquisition device;
identifying, through a pre-trained pet recognition model, the category of a detection object contained in the detection image;
and, when the detection image contains a detection object belonging to a pet category, dispensing target food for that pet category.
Optionally, when the detection image contains a detection object belonging to a pet category, dispensing the target food for that pet category comprises:
when the detection image contains a detection object belonging to a pet category, determining the target food corresponding to that pet category according to a preset correspondence between pet categories and foods;
and dispensing the target food.
Optionally, identifying, through the pre-trained pet recognition model, the category of the detection object contained in the detection image comprises:
identifying the category of each pixel in the detection image through the pre-trained pet recognition model, wherein the categories comprise a non-pet category and at least one pet category;
for each category, counting the number of pixels in the detection image belonging to that category, and calculating its percentage of the total number of pixels in the detection image;
and determining each category whose percentage exceeds a preset threshold as a category of a detection object contained in the detection image.
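The per-pixel voting step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the class-id assignments and the 0.3 threshold (taken from the worked example later in the description) are assumptions.

```python
import numpy as np

# Sketch of the pixel-share vote, assuming the model has already produced a
# per-pixel label map (e.g. 0 = non-pet, 1 = cat, 2 = dog). Class ids and the
# threshold are illustrative assumptions.
def detected_categories(label_map: np.ndarray, threshold: float = 0.3) -> set:
    """Return the set of class ids whose share of pixels exceeds the threshold."""
    total = label_map.size
    counts = np.bincount(label_map.ravel())          # pixels per class id
    return {cls for cls, n in enumerate(counts) if n / total > threshold}
```

A category only counts as a detected object if it occupies a large enough fraction of the image, which filters out small misclassified patches.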
Optionally, the pet recognition model is a deep convolutional neural network comprising an input layer, a convolutional encoding network, a deconvolutional decoding network, and an output layer;
the convolutional encoding network comprises a first convolutional layer, a first batch normalization layer, a first activation function layer, and a max pooling layer;
the deconvolutional decoding network comprises an upsampling layer, a second convolutional layer, a second batch normalization layer, and a second activation function layer.
Optionally, the convolutional encoding network further comprises a dropout layer disposed between the first activation function layer and the max pooling layer.
Optionally, when the detection image contains a detection object belonging to a pet category, dispensing the target food for that pet category comprises:
when the detection image contains a detection object belonging to a pet category, judging whether the current time falls within a preset time period;
and, if the current time is within the preset time period, dispensing the target food for that pet category.
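The preset-time-period gate can be expressed as a simple wall-clock check. A minimal sketch, assuming a single daily window; the window boundaries are illustrative, not values from the patent.

```python
from datetime import time

# Sketch of the "only feed inside a preset period" check. The start/end
# times are assumptions for illustration.
def within_feeding_window(now: time, start: time, end: time) -> bool:
    """True when the current wall-clock time falls inside [start, end]."""
    return start <= now <= end
```

In practice the machine would evaluate this at the moment a pet is detected, and skip dispensing outside the window.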
Optionally, when the detection image contains a detection object belonging to a pet category, dispensing the target food for that pet category comprises:
when the detection image contains a detection object belonging to a pet category, acquiring the time of the last food dispense;
determining the amount of food to dispense according to the interval between the last dispense and the current time;
and dispensing the target food for that pet category in the determined amount.
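One plausible way to turn the elapsed interval into a portion size is a capped linear rule. The patent only states that the amount is determined from the interval; the rate and cap below are illustrative assumptions.

```python
# Sketch of sizing a portion by the time elapsed since the last dispense.
# The grams-per-hour rate and the cap are assumptions for illustration.
def portion_grams(hours_since_last: float,
                  grams_per_hour: float = 10.0,
                  max_grams: float = 120.0) -> float:
    """Longer gaps earn proportionally larger portions, up to a fixed cap."""
    return min(hours_since_last * grams_per_hour, max_grams)
```

The cap prevents over-feeding after a very long absence, such as a multi-day trip.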
In a second aspect, an apparatus for dispensing food is provided, the apparatus comprising:
an acquisition module, configured to acquire a detection image captured by an image acquisition device;
a recognition module, configured to identify, through a pre-trained pet recognition model, the category of a detection object contained in the detection image;
and a dispensing module, configured to dispense target food for a pet category when the detection image contains a detection object belonging to that pet category.
Optionally, the dispensing module is specifically configured to:
when the detection image contains a detection object belonging to a pet category, determine the target food corresponding to that pet category according to a preset correspondence between pet categories and foods;
and dispense the target food.
Optionally, the recognition module is specifically configured to:
identify the category of each pixel in the detection image through the pre-trained pet recognition model, wherein the categories comprise a non-pet category and at least one pet category;
for each category, count the number of pixels in the detection image belonging to that category, and calculate its percentage of the total number of pixels in the detection image;
and determine each category whose percentage exceeds a preset threshold as a category of a detection object contained in the detection image.
Optionally, the pet recognition model is a deep convolutional neural network comprising an input layer, a convolutional encoding network, a deconvolutional decoding network, and an output layer;
the convolutional encoding network comprises a first convolutional layer, a first batch normalization layer, a first activation function layer, and a max pooling layer;
the deconvolutional decoding network comprises an upsampling layer, a second convolutional layer, a second batch normalization layer, and a second activation function layer.
Optionally, the convolutional encoding network further comprises a dropout layer disposed between the first activation function layer and the max pooling layer.
Optionally, the dispensing module is specifically configured to:
when the detection image contains a detection object belonging to a pet category, judge whether the current time falls within a preset time period;
and, if the current time is within the preset time period, dispense the target food for that pet category.
Optionally, the dispensing module is specifically configured to:
when the detection image contains a detection object belonging to a pet category, acquire the time of the last food dispense;
determine the amount of food to dispense according to the interval between the last dispense and the current time;
and dispense the target food for that pet category in the determined amount.
In a third aspect, an intelligent food dispensing machine is provided, comprising an image acquisition device, a recognition device, and a food dispensing device:
the image acquisition device is configured to capture a detection image;
the recognition device is configured to identify, through a pre-trained pet recognition model, the category of a detection object contained in the detection image;
and the food dispensing device is configured to dispense target food for a pet category when the detection image contains a detection object belonging to that pet category.
Optionally, the food dispensing device is specifically configured to:
when the detection image contains a detection object belonging to a pet category, determine the target food corresponding to that pet category according to a preset correspondence between pet categories and foods;
and dispense the target food.
Optionally, the recognition device is specifically configured to:
identify the category of each pixel in the detection image through the pre-trained pet recognition model, wherein the categories comprise a non-pet category and at least one pet category;
for each category, count the number of pixels in the detection image belonging to that category, and calculate its percentage of the total number of pixels in the detection image;
and determine each category whose percentage exceeds a preset threshold as a category of a detection object contained in the detection image.
Optionally, the pet recognition model is a deep convolutional neural network comprising an input layer, a convolutional encoding network, a deconvolutional decoding network, and an output layer;
the convolutional encoding network comprises a first convolutional layer, a first batch normalization layer, a first activation function layer, and a max pooling layer;
the deconvolutional decoding network comprises an upsampling layer, a second convolutional layer, a second batch normalization layer, and a second activation function layer.
Optionally, the convolutional encoding network further comprises a dropout layer disposed between the first activation function layer and the max pooling layer.
Optionally, the food dispensing device is specifically configured to:
when the detection image contains a detection object belonging to a pet category, judge whether the current time falls within a preset time period;
and, if the current time is within the preset time period, dispense the target food for that pet category.
Optionally, the food dispensing device is specifically configured to:
when the detection image contains a detection object belonging to a pet category, acquire the time of the last food dispense;
determine the amount of food to dispense according to the interval between the last dispense and the current time;
and dispense the target food for that pet category in the determined amount.
In a fourth aspect, the present application provides an electronic device comprising a processor, a communication interface, a memory, and a communication bus, the processor, communication interface, and memory communicating with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of the first aspect when executing the computer program.
In a fifth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the method steps of the first aspect.
In a sixth aspect, embodiments of the present application further provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the method steps of the first aspect.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages:
In the food dispensing method, a detection image captured by the image acquisition device is acquired, and the category of the detection object it contains is identified through the pre-trained pet recognition model. When the detection image contains a detection object belonging to a pet category, target food for that pet category is dispensed. Because the pet's category is detected before its target food is dispensed, targeted automatic feeding according to pet category is achieved.
Of course, not every product or method practicing the present application need achieve all of the above advantages.
Drawings
To illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it will be obvious to those skilled in the art that other drawings can be obtained from these without inventive effort.
Fig. 1 is a flowchart of a method for dispensing food according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a deep convolutional neural network provided in an embodiment of the present application;
Fig. 3a is a schematic diagram of a convolutional encoding network according to an embodiment of the present application;
Fig. 3b is a schematic diagram of a deconvolutional decoding network according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a device for dispensing food according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an intelligent food dispensing machine provided in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
An embodiment of the invention provides a method for dispensing food, which can be applied to an electronic device, for example a feeding machine used to dispense food for pets. As shown in Fig. 1, the method may include the following steps.
In the embodiment of the application, the feeding machine can be provided with at least one image acquisition device, which can be any device with an image capture function, such as a camera. In particular, several image acquisition devices with different orientations can be provided in the machine, so that their combined detection range covers the whole area around the machine, achieving 360-degree all-round detection. When the machine is powered on, the image acquisition devices capture the surroundings in real time to obtain detection images.
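As a rough illustration of the 360-degree coverage claim, full coverage requires that the cameras' combined horizontal fields of view close the ring around the machine. The field-of-view figure below is an assumption for illustration; the patent does not specify one.

```python
# Hypothetical helper: can several evenly oriented cameras cover a full
# 360-degree ring around the feeder? Full coverage needs num * fov >= 360.
# The 90-degree field of view used below is an illustrative assumption.
def covers_full_circle(num_cameras: int, fov_degrees: float) -> bool:
    """True when the cameras' combined horizontal fields of view close the ring."""
    return num_cameras * fov_degrees >= 360.0
```

For example, four cameras with a 90-degree field of view just close the ring, while three leave a gap.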
In the embodiment of the application, a pre-trained pet recognition model can be stored in the feeding machine; the model can be implemented with a neural network, for example a deep convolutional neural network, a residual neural network, or a BP (back propagation) neural network. After the machine acquires the detection image captured by the image acquisition device, it can identify, through the pre-trained pet recognition model, the category of the detection object contained in the image; the categories can include a non-pet category and at least one pet category (such as cat, dog, or rabbit).
Optionally, in the embodiment of the present application, taking the pet recognition model as a deep convolutional neural network as an example, the processing of step 102 is described in detail. The deep convolutional neural network may include an input layer, a convolutional encoding network, a deconvolutional decoding network, and an output layer. The input layer receives the detection image, the encoding and decoding networks process it, and the output layer outputs the processing result, which may include the category of each pixel of the detection image; the structure of the network may be as shown in Fig. 2. Specifically, as shown in Fig. 3a, the convolutional encoding network may include a first convolutional layer, a first batch normalization layer, a first activation function layer, and a max pooling layer; as shown in Fig. 3b, the deconvolutional decoding network may include an upsampling layer, a second convolutional layer, a second batch normalization layer, and a second activation function layer.
In the embodiment of the present application, the convolutional encoding network and the deconvolutional decoding network contain the same number of convolutional layers, denoted N, where N is an integer not less than 2; in particular, N may be 16. On this basis, the deep convolutional neural network comprises a multi-layer convolutional encoding network and a multi-layer deconvolutional decoding network. The encoding network can be a fully convolutional network, with a downsampling layer after each convolutional layer; it does not use the fully connected layers of a traditional neural network, so the deepest (Nth) convolutional layer outputs a high-resolution feature map, the network's parameter count is reduced, and the time needed to train the encoding network shortens. When N is 16, the encoding and decoding networks each contain 16 convolutional layers, and the overall structure reaches 32 layers, which benefits recognition accuracy.
After the detection image is acquired, it can be preprocessed: the captured image may contain noise or distortion, so preprocessing can include filtering, image rectification, and the like. The preprocessed image is then fed through the input layer into the first convolutional layer of the encoding network, which outputs a feature map; every subsequent convolutional layer in the encoding stage takes the feature map output by the previous layer as its input. Each convolutional layer extracts features with 3 × 3 convolution kernels, the first batch normalization layer applies batch normalization to the extracted features, and the normalized features are passed to the first activation function layer, which applies a nonlinear mapping (for example a ReLU activation). Finally, max pooling is applied and a feature image is output. Max pooling makes the representation invariant to small spatial displacements of the detection image, and each pooling step halves the height and width of the features, yielding a more robust feature image.
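The shape bookkeeping behind the "each pooling step halves the height and width" claim can be traced directly. A minimal sketch, assuming 3 × 3 convolutions with padding 1 (which preserve spatial size) and 2 × 2 max pooling with stride 2; the number of stages and the input size below are illustrative.

```python
# Shape of the feature map after several encoder stages, assuming:
#   - conv 3x3 with padding 1: spatial size unchanged
#   - batch normalization / ReLU: spatial size unchanged
#   - max pool 2x2, stride 2: height and width floor-halved
def encoder_output_shape(height: int, width: int, num_stages: int) -> tuple:
    for _ in range(num_stages):
        height, width = height // 2, width // 2  # only pooling changes the size
    return height, width
```

For instance, a 320 × 480 input shrinks to 20 × 30 after four stages, which is why the decoder must upsample to recover per-pixel labels.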
Repeated pooling and downsampling progressively distort the detection image and lose boundary information, while this application needs to classify the image's pixels in order to determine the categories of the detection objects it contains. Since pixel-wise classification requires segmenting the image, the loss of boundary information harms segmentation of the detection image. Therefore, in the present application, a deconvolutional decoding network is placed after the convolutional encoding network to restore the information in the detection image.
Optionally, to restore as much information in the detection image as possible, the index of the maximum feature value is recorded during each max pooling operation. During decoding, the upsampling layer uses the indices recorded during pooling to upsample its input features; the second convolutional layer convolves the resulting sparse feature map with a convolution kernel to obtain a dense feature map; the second batch normalization layer normalizes the dense feature map; and the second activation function layer applies a nonlinear mapping to the normalized features and outputs a feature image. In this embodiment, placing batch normalization layers after the first and second convolutional layers mitigates the difficulty of training deep convolutional networks, accelerates the training process, prevents the vanishing-gradient problem that deep convolutional networks are prone to during training, and improves convergence speed and model accuracy.
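The record-the-index pooling and the matching index-based upsampling can be demonstrated on a toy single-channel map. This is a minimal NumPy sketch of the technique (known from SegNet-style architectures), not the patent's implementation; it assumes 2 × 2 windows with stride 2 and even input dimensions.

```python
import numpy as np

# Max pooling that records the flat position of each window's maximum,
# plus the matching upsampling step that scatters pooled values back to
# the recorded positions (zeros elsewhere, giving a sparse feature map).
def max_pool_with_indices(x: np.ndarray):
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2), dtype=x.dtype)
    idx = np.zeros((h // 2, w // 2), dtype=int)
    for i in range(h // 2):
        for j in range(w // 2):
            window = x[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            k = int(window.argmax())                 # max position inside window
            pooled[i, j] = window.flat[k]
            idx[i, j] = (2 * i + k // 2) * w + (2 * j + k % 2)  # flat index in x
    return pooled, idx

def max_unpool(pooled: np.ndarray, idx: np.ndarray, shape) -> np.ndarray:
    out = np.zeros(shape, dtype=pooled.dtype)
    out.flat[idx.ravel()] = pooled.ravel()           # restore values in place
    return out
```

Because each maximum returns to its original position, boundary locations survive the encode-decode round trip better than with plain interpolation; the second convolutional layer then densifies the sparse unpooled map.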
Optionally, the convolutional encoding network may further include a dropout layer (i.e., a DropOut layer in the neural-network sense) disposed between the first activation function layer and the max pooling layer. The dropout layer closes nodes (i.e., neurons) in the network with a certain probability, the closed nodes being chosen at random, and the outputs of closed nodes are not passed to the next layer. Dropout prevents overfitting and improves the generalization of the neural network model: each layer's neurons represent learned intermediate features (i.e., combinations of weights), and together all the neurons in the model represent specific attributes of the input data (the detection image). When the training data is too small relative to the network's complexity, overfitting occurs, and the features represented by individual neurons become heavily duplicated and redundant. The direct effect of the dropout layer is to reduce the number of intermediate features active at once and thereby reduce this redundancy, i.e., to increase the orthogonality between the features of each layer.
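The dropout behavior described above can be sketched as the standard inverted-dropout mask. This is a generic illustration, not the patent's code; the drop probability of 0.5 and the seeded generator are assumptions made so the example is reproducible.

```python
import numpy as np

# Inverted-dropout sketch: each activation is zeroed with probability p
# (the neuron is "closed") and survivors are rescaled by 1/(1-p), so the
# expected activation is unchanged and inference needs no adjustment.
def dropout(activations: np.ndarray, p: float = 0.5, rng=None) -> np.ndarray:
    rng = np.random.default_rng(0) if rng is None else rng
    keep = rng.random(activations.shape) >= p   # randomly chosen open neurons
    return activations * keep / (1.0 - p)       # closed outputs become 0
```

With p = 0.5 applied to a vector of ones, every output is either 0 (closed neuron) or 2 (rescaled survivor).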
Optionally, the specific process of identifying, through the pre-trained pet recognition model, the category of the detection object contained in the detection image can be as follows. The detection image is input to the pre-trained pet recognition model, which outputs the category of each pixel in the image; the categories include a non-pet category and at least one pet category, and the pet categories may include cat, dog, rabbit, and so on. For each category, the number of pixels in the detection image belonging to that category is counted, and its percentage of the total number of pixels in the image is calculated; each category whose percentage exceeds a preset threshold is then determined to be a category of a detection object contained in the detection image. The detection image may contain multiple detection objects (for example, both a person and a dog), in which case the category of each detection object can be identified.
For example, suppose the preset threshold is 0.3, the size of the detection image is 300 × 500 (a total of 150,000 pixel points), and the detection image contains a puppy. After identification by the pet recognition model, the number of pixel points of the category dog is 67,500, and 67,500/150,000 = 0.45 > 0.3; the number of pixel points of the category cat is 15,000, and 15,000/150,000 = 0.1 < 0.3; the number of pixel points of the non-pet category is 67,500, and 67,500/150,000 = 0.45 > 0.3. It can therefore be determined that the detection image contains a detection object of the category dog and a detection object of the non-pet category.
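The per-pixel counting procedure above can be sketched as follows; the category ids, names, and label-map layout are illustrative assumptions, with the pixel counts chosen to reproduce the 300 × 500 example:

```python
import numpy as np

THRESHOLD = 0.3
CATEGORIES = {0: "non-pet", 1: "dog", 2: "cat"}  # hypothetical id -> name

def detected_categories(label_map, threshold=THRESHOLD):
    """Return every category whose pixel share exceeds the threshold."""
    total = label_map.size
    return [name for cat_id, name in CATEGORIES.items()
            if np.count_nonzero(label_map == cat_id) / total > threshold]

# 300 x 500 image: 67,500 dog pixels (45%), 15,000 cat pixels (10%),
# and the remaining 67,500 pixels (45%) non-pet.
label_map = np.zeros((300, 500), dtype=int)
flat = label_map.ravel()          # view onto the same buffer
flat[:67500] = 1
flat[67500:82500] = 2
print(detected_categories(label_map))  # dog and non-pet exceed 0.3
```

Only the dog and non-pet shares (0.45 each) exceed the 0.3 threshold; the cat share (0.1) does not.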
Step 103: when the detection image contains a detection object belonging to a pet category, put in target food of the pet category.
In the embodiment of the application, when the detection image contains a detection object belonging to a pet category, the food throwing machine can put in target food of that pet category. Specifically, the user can load the food to be dispensed into the food throwing machine in advance, and the food throwing machine can support putting in at least one kind of food. When the food throwing machine supports only one kind of food, it can directly put in the pre-loaded food (i.e., the target food of the pet category) once it determines that the detection image contains a detection object belonging to a pet category. When the food throwing machine supports multiple kinds of food, multiple food storage devices can be arranged in the food throwing machine to store different kinds of food. In that case, when the detection image contains a detection object belonging to a pet category, the food throwing machine can determine the target food corresponding to that pet category according to a preset correspondence between pet categories and foods, and then put in the target food through the food storage device that stores it.
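A minimal sketch of the category-to-food correspondence described above; the category names, food names, and storage-slot labels are illustrative assumptions, not taken from the patent:

```python
# Hypothetical preset correspondence between pet categories and the
# foods held in the machine's storage devices.
FOOD_BY_CATEGORY = {
    "dog": "dog kibble (storage slot 1)",
    "cat": "cat kibble (storage slot 2)",
    "rabbit": "rabbit pellets (storage slot 3)",
}

def target_foods(detected):
    """Map detected categories to foods; non-pet detections are ignored."""
    return [FOOD_BY_CATEGORY[c] for c in detected if c in FOOD_BY_CATEGORY]

foods = target_foods(["non-pet", "dog"])
print(foods)
```

Detections outside the correspondence table (such as the non-pet category) simply trigger no dispensing.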
Optionally, the food throwing machine can also implement timed feeding, and the specific process may be: when the detection image contains a detection object belonging to a pet category, judge whether the current time is within a preset time period; and if the current time is within the preset time period, put in the target food of the pet category.
In the embodiment of the application, the food throwing machine may be preconfigured with one or more time ranges for feeding (i.e., preset time periods); these may be set by the user or preset in the food throwing machine by a technician. For example, the preset time periods may be 8:00-9:00 in the morning, 1:00-2:00 at noon, and 7:00-8:00 in the evening. When the food throwing machine determines that the detection image contains a detection object belonging to a pet category, it can obtain the current time through a clock device and judge whether the current time is within a preset time period; if the current time is not within any preset time period, no food is put in.
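The timed-feeding check can be sketched as follows; the window values mirror the example above, and the function name and window representation are assumptions:

```python
from datetime import datetime, time

# Hypothetical preset feeding windows (user- or technician-configured).
FEEDING_WINDOWS = [
    (time(8, 0), time(9, 0)),     # morning
    (time(13, 0), time(14, 0)),   # noon
    (time(19, 0), time(20, 0)),   # evening
]

def should_dispense(now, windows=FEEDING_WINDOWS):
    """Dispense only when the current time lies inside a preset window."""
    t = now.time()
    return any(start <= t <= end for start, end in windows)

print(should_dispense(datetime(2019, 9, 30, 8, 30)))   # inside 8:00-9:00
print(should_dispense(datetime(2019, 9, 30, 11, 0)))   # outside all windows
```

In a real feeder the `now` argument would come from the machine's clock device rather than being passed in explicitly.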
Optionally, the food throwing machine may further calculate the food throwing amount, and the specific processing procedure is as follows: when the detection image contains a detection object belonging to the pet category, acquiring the last food putting time; determining the food putting amount according to the time interval between the last time of putting food and the current time; and putting target food of the pet category according to the determined food putting amount.
In the embodiment of the application, the food throwing machine can record the time of each feeding. When it determines that the detection image contains a detection object belonging to a pet category, the food throwing machine may acquire the time of the last feeding and calculate the time interval between that feeding and the current time. It can then calculate the current feeding amount from this interval according to a preset calculation rule, and put in the target food of the pet category according to the calculated amount. The calculation rule for the feeding amount may be set by a technician according to experience, and this embodiment does not limit it. In one example, the feeding amount may be positively correlated with the time interval: the longer the interval since the last feeding, the larger the amount; the shorter the interval, the smaller the amount. The relationship between feeding amount and time interval may be linear or non-linear.
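One possible (linear, clamped) calculation rule of the kind described above can be sketched as follows; both constants and the clamping behavior are illustrative assumptions, since the patent leaves the rule to the technician:

```python
from datetime import datetime, timedelta

# Hypothetical linear dispensing rule: the amount grows with the time
# elapsed since the last feeding, clamped to a maximum portion.
GRAMS_PER_HOUR = 10.0
MAX_GRAMS = 120.0

def feeding_amount(last_fed, now):
    """Positively correlate the dispensed amount with the elapsed time."""
    hours = max((now - last_fed).total_seconds() / 3600.0, 0.0)
    return min(hours * GRAMS_PER_HOUR, MAX_GRAMS)

now = datetime(2019, 9, 30, 12, 0)
print(feeding_amount(now - timedelta(hours=4), now))    # 4 h since last feed
print(feeding_amount(now - timedelta(hours=24), now))   # clamped to the max
```

A non-linear rule (e.g., tapering growth for long intervals) would only change the body of `feeding_amount`, not the surrounding flow.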
Based on the same technical concept, the embodiment of the present application further provides a device for delivering food, as shown in fig. 4, the device includes:
an obtaining module 410, configured to obtain a detection image collected by an image collecting device;
the identification module 420 is used for identifying the category of the detection object contained in the detection image through a pet identification model trained in advance;
and a putting module 430, configured to put in the target food of the pet category when the detection image contains a detection object belonging to the pet category.
Optionally, the putting module 430 is specifically configured to:
when the detection image contains a detection object belonging to a pet category, determining target food corresponding to the pet category to which the detection object belongs according to a preset corresponding relation between each pet category and the food;
and putting the target food.
Optionally, the identifying module 420 is specifically configured to:
identifying the category of each pixel point in the detection image through a pre-trained pet identification model, wherein the category comprises a non-pet category and at least one pet category;
counting the number of pixel points belonging to the category in the detection image aiming at each category, and calculating the percentage of the number of the pixel points belonging to the category in the total number of the pixel points of the detection image;
and determining the category with the percentage larger than a preset threshold value as the category of the detection object contained in the detection image.
Optionally, the pet identification model is a deep convolutional neural network, and the deep convolutional neural network includes an input layer, a convolutional coding network, a deconvolution decoding network, and an output layer;
the convolutional coding network comprises a first convolutional layer, a first batch of regularization layers, a first activation function layer and a maximum pooling layer;
the deconvolution decoding network includes an upsampling layer, a second convolution layer, a second batch of regularization layers, and a second activation function layer.
Optionally, the convolutional coding network further includes: a deactivation layer disposed between the first activation function layer and the maximum pooling layer.
Optionally, the putting module 430 is specifically configured to:
when the detection image contains a detection object belonging to the pet category, judging whether the current time is within a preset time period;
and if the current time is within a preset time period, releasing the target food of the pet category.
Optionally, the putting module 430 is specifically configured to:
when the detection image contains a detection object belonging to the pet category, acquiring the last food putting time;
determining the food putting amount according to the time interval between the last time of putting food and the current time;
and putting the target food of the pet category according to the determined food putting amount.
Based on the same technical concept, an embodiment of the present application further provides an intelligent food throwing machine, as shown in fig. 5, the intelligent food throwing machine includes an image acquisition device 510, an identification device 520, and a food throwing device 530:
an image acquisition device 510 for acquiring a detection image;
a recognition device 520, configured to recognize a category of a detection object included in the detection image through a pet recognition model trained in advance;
and the feeding device 530 is used for feeding the target food of the pet category when the detection object belonging to the pet category is contained in the detection image.
Optionally, the feeding device 530 is specifically configured to:
when the detection image contains a detection object belonging to a pet category, determining target food corresponding to the pet category to which the detection object belongs according to a preset corresponding relation between each pet category and the food;
and putting the target food.
Optionally, the identifying device 520 is specifically configured to:
identifying the category of each pixel point in the detection image through a pre-trained pet identification model, wherein the category comprises a non-pet category and at least one pet category;
counting the number of pixel points belonging to the category in the detection image aiming at each category, and calculating the percentage of the number of the pixel points belonging to the category in the total number of the pixel points of the detection image;
and determining the category with the percentage larger than a preset threshold value as the category of the detection object contained in the detection image.
Optionally, the pet identification model is a deep convolutional neural network, and the deep convolutional neural network includes an input layer, a convolutional coding network, a deconvolution decoding network, and an output layer;
the convolutional coding network comprises a first convolutional layer, a first batch of regularization layers, a first activation function layer and a maximum pooling layer;
the deconvolution decoding network includes an upsampling layer, a second convolution layer, a second batch of regularization layers, and a second activation function layer.
Optionally, the convolutional coding network further includes: a deactivation layer disposed between the first activation function layer and the maximum pooling layer.
Optionally, the feeding device 530 is specifically configured to:
when the detection image contains a detection object belonging to the pet category, judging whether the current time is within a preset time period;
and if the current time is within a preset time period, releasing the target food of the pet category.
Optionally, the feeding device 530 is specifically configured to:
when the detection image contains a detection object belonging to the pet category, acquiring the last food putting time;
determining the food putting amount according to the time interval between the last time of putting food and the current time;
and putting the target food of the pet category according to the determined food putting amount.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another through the communication bus 604:
a memory 603 for storing a computer program;
the processor 601 is configured to implement the above-mentioned method steps for delivering food when executing the program stored in the memory 603.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
In a further embodiment provided by the present invention, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any of the methods described above.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the methods of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (14)
1. A method of delivering food, the method comprising:
acquiring a detection image acquired by an image acquisition device;
identifying the type of a detection object contained in the detection image through a pre-trained pet identification model;
and when the detection image contains the detection object belonging to the pet category, putting the target food of the pet category.
2. The method according to claim 1, wherein the delivering the target food of the pet category when the detection object belonging to the pet category is included in the detection image comprises:
when the detection image contains a detection object belonging to a pet category, determining target food corresponding to the pet category to which the detection object belongs according to a preset corresponding relation between each pet category and the food;
and putting the target food.
3. The method of claim 1, wherein the identifying the category of the detection object included in the detection image through the pre-trained pet recognition model comprises:
identifying the category of each pixel point in the detection image through a pre-trained pet identification model, wherein the category comprises a non-pet category and at least one pet category;
counting the number of pixel points belonging to the category in the detection image aiming at each category, and calculating the percentage of the number of the pixel points belonging to the category in the total number of the pixel points of the detection image;
and determining the category with the percentage larger than a preset threshold value as the category of the detection object contained in the detection image.
4. The method of any one of claims 1-3, wherein the pet identification model is a deep convolutional neural network comprising an input layer, a convolutional coding network, a deconvolution decoding network, and an output layer;
the convolutional coding network comprises a first convolutional layer, a first batch of regularization layers, a first activation function layer and a maximum pooling layer;
the deconvolution decoding network includes an upsampling layer, a second convolution layer, a second batch of regularization layers, and a second activation function layer.
5. The method of claim 4, wherein the convolutional encoding network further comprises: a deactivation layer disposed between the first activation function layer and the maximum pooling layer.
6. The method according to claim 1, wherein the delivering the target food of the pet category when the detection object belonging to the pet category is included in the detection image comprises:
when the detection image contains a detection object belonging to the pet category, judging whether the current time is within a preset time period;
and if the current time is within a preset time period, releasing the target food of the pet category.
7. The method according to claim 1, wherein the delivering the target food of the pet category when the detection object belonging to the pet category is included in the detection image comprises:
when the detection image contains a detection object belonging to the pet category, acquiring the last food putting time;
determining the food putting amount according to the time interval between the last time of putting food and the current time;
and putting the target food of the pet category according to the determined food putting amount.
8. An apparatus for delivering food, the apparatus comprising:
the acquisition module is used for acquiring a detection image acquired by the image acquisition device;
the identification module is used for identifying the type of a detection object contained in the detection image through a pre-trained pet identification model;
and the putting module is used for putting the target food of the pet category when the detection image contains the detection object belonging to the pet category.
9. An intelligent food throwing machine, characterized in that the intelligent food throwing machine comprises an image acquisition device, a recognition device, and a food throwing device:
the image acquisition device is used for acquiring a detection image;
the identification device is used for identifying the type of a detection object contained in the detection image through a pet identification model trained in advance;
and the feeding device is used for feeding target food of the pet category when the detection image contains a detection object belonging to the pet category.
10. The intelligent food throwing machine of claim 9, wherein the food throwing device is specifically configured to:
when the detection image contains a detection object belonging to a pet category, determining target food corresponding to the pet category to which the detection object belongs according to a preset corresponding relation between each pet category and the food;
and putting the target food.
11. The intelligent food throwing machine of claim 9, wherein the identification device is specifically configured to:
identifying the category of each pixel point in the detection image through a pre-trained pet identification model, wherein the category comprises a non-pet category and at least one pet category;
counting the number of pixel points belonging to the category in the detection image aiming at each category, and calculating the percentage of the number of the pixel points belonging to the category in the total number of the pixel points of the detection image;
and determining the category with the percentage larger than a preset threshold value as the category of the detection object contained in the detection image.
12. The intelligent food throwing machine of claim 9, wherein the food throwing device is specifically configured to:
when the detection image contains a detection object belonging to the pet category, judging whether the current time is within a preset time period;
and if the current time is within a preset time period, releasing the target food of the pet category.
13. The intelligent food throwing machine of claim 9, wherein the food throwing device is specifically configured to:
when the detection image contains a detection object belonging to the pet category, acquiring the last food putting time;
determining the food putting amount according to the time interval between the last time of putting food and the current time;
and putting the target food of the pet category according to the determined food putting amount.
14. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910945826.8A CN110896871A (en) | 2019-09-30 | 2019-09-30 | Method and device for putting food and intelligent food throwing machine |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110896871A true CN110896871A (en) | 2020-03-24 |
Family
ID=69815155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910945826.8A Pending CN110896871A (en) | 2019-09-30 | 2019-09-30 | Method and device for putting food and intelligent food throwing machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110896871A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107432251A (en) * | 2016-05-27 | 2017-12-05 | 杨仲辉 | Feeding animal method |
CN109618961A (en) * | 2018-12-12 | 2019-04-16 | 北京京东金融科技控股有限公司 | A kind of intelligence of domestic animal feeds system and method |
CN109887220A (en) * | 2019-01-23 | 2019-06-14 | 珠海格力电器股份有限公司 | The control method of air-conditioning and air-conditioning |
CN110263685A (en) * | 2019-06-06 | 2019-09-20 | 北京迈格威科技有限公司 | A kind of animal feeding method and device based on video monitoring |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111887175A (en) * | 2020-07-30 | 2020-11-06 | 北京小米移动软件有限公司 | Feeding monitoring method, device, equipment and storage medium |
CN111887175B (en) * | 2020-07-30 | 2022-03-18 | 北京小米移动软件有限公司 | Feeding monitoring method, device, equipment and storage medium |
CN112167074A (en) * | 2020-10-14 | 2021-01-05 | 北京科技大学 | Automatic feeding device based on pet face recognition |
CN112598116A (en) * | 2020-12-22 | 2021-04-02 | 王槐林 | Pet appetite evaluation method, device, equipment and storage medium |
CN113349105A (en) * | 2021-06-01 | 2021-09-07 | 深圳市天和荣科技有限公司 | Intelligent bird feeding method, electronic equipment, bird feeder and storage medium |
CN113349105B (en) * | 2021-06-01 | 2022-06-21 | 深圳市天和荣科技有限公司 | Intelligent bird feeding method, electronic equipment, bird feeder and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110896871A (en) | Method and device for putting food and intelligent food throwing machine | |
Wang et al. | Recognition and classification of broiler droppings based on deep convolutional neural network | |
Oczak et al. | Classification of aggressive behaviour in pigs by activity index and multilayer feed forward neural network | |
CN109753948B (en) | Microwave radar-based air conditioner control method and device, storage medium and processor | |
CN109857860A (en) | File classification method, device, computer equipment and storage medium | |
WO2020125057A1 (en) | Livestock quantity identification method and apparatus | |
CN107229947A (en) | A kind of banking and insurance business method and system based on animal identification | |
JP2019125340A (en) | Systems and methods for automated inferencing of changes in spatiotemporal images | |
CN110533950A (en) | Detection method, device, electronic equipment and the storage medium of parking stall behaviour in service | |
CN111797835B (en) | Disorder identification method, disorder identification device and terminal equipment | |
Abinaya et al. | Naive Bayesian fusion based deep learning networks for multisegmented classification of fishes in aquaculture industries | |
Dohmen et al. | Image-based body mass prediction of heifers using deep neural networks | |
CN110991222B (en) | Object state monitoring and sow oestrus monitoring method, device and system | |
CN111178364A (en) | Image identification method and device | |
CN111325181B (en) | State monitoring method and device, electronic equipment and storage medium | |
CN111832707A (en) | Deep neural network interpretation method, device, terminal and storage medium | |
CN111046944A (en) | Method and device for determining object class, electronic equipment and storage medium | |
CN110807463B (en) | Image segmentation method and device, computer equipment and storage medium | |
CN111563439A (en) | Aquatic organism disease detection method, device and equipment | |
CN111597937A (en) | Fish gesture recognition method, device, equipment and storage medium | |
CN114926897A (en) | Target object statistical method, target detection method and neural network training method | |
CN110008881A (en) | The recognition methods of the milk cow behavior of multiple mobile object and device | |
CN112766387B (en) | Training data error correction method, device, equipment and storage medium | |
CN111104952A (en) | Method, system and device for identifying food types and refrigerator | |
Li et al. | Automatic Counting Method of Fry Based on Computer Vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200324 |