WO2020058908A1 - Method and electronic apparatus for enabling the access to a resource by one or more animals through image processing - Google Patents

Method and electronic apparatus for enabling the access to a resource by one or more animals through image processing

Info

Publication number: WO2020058908A1
Authority: WO (WIPO/PCT)
Prior art keywords: resource, access, animal, electronic apparatus, enabling
Application number: PCT/IB2019/057927
Other languages: French (fr)
Inventor: Silvio REVELLI
Original Assignee: Volta Robots S.R.L.
Application filed by Volta Robots S.R.L.
Priority: EP19786871.4A (EP3852517A1)
Publication: WO2020058908A1

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K 5/00: Feeding devices for stock or game; Feeding wagons; Feeding stacks
    • A01K 5/01: Feed troughs; Feed pails
    • A01K 5/0114: Pet food dispensers; Pet food trays
    • A01K 5/0142: Pet food dispensers; Pet food trays with means for preventing other animals or insects from eating
    • A01K 5/02: Automatic devices

Definitions

  • the present invention relates, in general, to an electronic apparatus for enabling or inhibiting the access to a resource by one or more animals by means of image processing and the relevant operating method.
  • the invention is applicable to livestock feeders or fish tanks which are configured to ensure or deny access to a food or pharmacological resource to a group of animals or to a particular animal of the group.
  • the invention relates to an electronic bowl for enabling or inhibiting the access to a food resource accommodated into the bowl by an animal on the basis of image processing.
  • Electronic bowls for containing food resources for animals are known, equipped with barrier means which may be opened and re-closed to give access and dispense such food resources.
  • Such electronic bowls comprise suitable sensors, for example infrared sensors, placed along the perimeter of the bowl itself to detect a living being approaching the bowl and, consequently, to open the barrier means.
  • Such a bowl solution, although useful for the purpose of preserving the integrity of the food resource, has the drawback that the barrier means open when any living being approaches the bowl, whether animals or humans, including children.
  • Electronic bowls for pets are also known which include controllable barrier means to selectively dispense food resources to animals on the basis of RFID sensors.
  • sensors are configured to detect the presence of a respective nameplate or tag associated with an animal, which may correspond to a subcutaneous chip applied to the animal or may be fixed to a pet tag associated with the animal.
  • By comparing such a tag with a pre-set list of tags authorized to access the food resource, such an electronic bowl selectively enables or inhibits the access to the resource contained in the bowl to authorized animals only.
  • The subcutaneous RFID tag configured to communicate with this type of bowl is invasive, since the corresponding sensor should be positioned exactly on the back of the animal while the latter eats from the bowl. Furthermore, the reading distance of the RFID is limited by the power range of the electromagnetic signals involved.
  • There are circumstances in which the selection of the accesses to a resource cannot be made on the basis of RFID sensors. This occurs, in particular, when the criterion for discriminating an animal is a specific attribute of the animal itself, for example: a pathological condition of the skin, coat or scales of the animal; the presence or absence of parasites; reaching a certain length, height or pigmentation; a particular color of the plumage in the case of avian species.
  • Such an object is achieved by a method for enabling or inhibiting the access to a resource by one or more animals by means of image processing in accordance with claim 1.
  • the present invention also relates to an electronic apparatus operating on the basis of the aforesaid method for enabling or inhibiting the access to a resource by one or more animals in accordance with claim 16.
  • such an electronic apparatus is an electronic bowl for enabling or inhibiting the access to a food resource accommodated into the bowl by an animal on the basis of image processing.
  • The method for allowing the access to a resource by an animal is based on the processing of images of the area in front of the resource, access to which is controlled by a barrier element.
  • such an image processing takes place by employing convolutional neural networks trained to supply the electronic apparatus with information about the presence or absence of the authorized animal in the image of the area in front of the resource.
  • The detection of the presence of the animal discriminates the species of the animal itself. In other embodiments it is possible to discriminate the breed of the animal or the individual for a targeted access control.
  • the user may set, by means of a suitable interface, the parameters useful for selecting the animals adequately, without requiring a retraining of the neural network.
  • the information returned by the processing is used to control means for actuating the apparatus, for example a motor, adapted to move the aforesaid barrier element (for example, a cover in the case of the bowl or of a feeder, or a door in the case of access to a stable or a kennel) which allows or denies the animal access to the resource.
  • Figure 1 shows a perspective image of an electronic apparatus, in particular an electronic bowl for pets, for enabling or inhibiting the access to a resource by one or more animals by means of image processing in accordance with the invention
  • Figure 2 diagrammatically shows structural details of the electronic bowl of Figure 1;
  • Figure 3 shows, in a flow diagram, a method for enabling or inhibiting the access to a resource by one or more animals by means of image processing implemented by the electronic bowl of Figures 1-2;
  • Figure 4 shows, in a logical diagram, an embodiment of a neural network, comprising convolutional levels, employed in the method of the invention and configured to return a classification of digital images of the area in front of the resource to be accessed, which includes the aforesaid animal;
  • Figure 5 shows, in a flow diagram, a training method of the neural network of Figure 4.
  • With reference to Figures 1-2, an example of electronic apparatus for enabling or inhibiting the access to a resource by one or more animals by means of image processing, operating in accordance with the method of the invention, is indicated as a whole by reference numeral 10.
  • the electronic apparatus 10 comprises a body 1 which includes a portion 11 for accessing the resource and a portion 12 for controlling the access.
  • the electronic apparatus 10 is configured to allow or deny access by an animal to the resource by means of the movement of a barrier element 14 enabling or inhibiting the access to such a resource.
  • the apparatus 10 advantageously operates on the basis of image processing employing trained convolutional neural networks.
  • The electronic apparatus 10 comprises digital image acquisition means 21 configured to acquire at least one digital image of a volume proximal to the portion 11 for accessing the resource and outside the apparatus 10, which volume is adapted to contain the animal.
  • such means 21 are characterized by a respective orientation and angle width so that the Field of View (or FOV) , indicated by the width of angle A in Figure 2, is sufficiently extended to include at least one portion of the body of the animal used to identify the animal itself, during the attempt by the animal to access the resource.
  • Such digital image acquisition means 21 are configured to acquire, for example, continuously or at predetermined time intervals, sequences of images or frames of the volume proximal to the portion 11 for accessing the resource of the apparatus 10 which includes such a portion of the body of the animal.
  • Such image acquisition means are embodied, for example, by one or more cameras 21.
  • Each camera 21 is configured to acquire images in grayscale or, preferably, in the color-coded visible spectrum (for example, RGB) .
  • the camera 21 may be chosen to operate in the visible or infrared spectrum, in the thermal radiation spectrum or in the ultraviolet spectrum, or is configured to complete the optical information on the image acquired by employing a channel dedicated to depth (for example, RGB-D) .
  • the electronic apparatus 10 further comprises an electronic processing unit 22 associated with the portion 12 for controlling the access to the apparatus 10 and connected to the digital image acquisition means 21.
  • Such an electronic processing unit 22 comprises at least one processor 23 and one memory block 24, associated with the processor for storing instructions.
  • A memory block 24 is connected to the processor 23 by means of a data communication line or bus 26 (for example, PCI) and consists, for example, of a service memory of the volatile type (for example, of the SDRAM type) and of a system memory of the non-volatile type (for example, of the SSD type).
  • the processor 23 may be connected by means of a suitable communication interface to a computational accelerator specialized in convolution operations, such as, for example, a Neural Processing Unit (NPU) or a Graphic Processing Unit (GPU) or a Visual Processing Unit (VPU) .
  • the processor 23 is configured to delegate the necessary convolution operations to such a computational accelerator, according to the implementation of the method described.
  • the electronic processing unit 22 comprises a data communication interface 27, for example, of the wireless type, configured to connect such a processing unit 22 to a data communication network 28, for example, the Internet, and to allow the processing unit to communicate with remote electronic devices, such as, for example, servers or portable devices (smartphones, tablets, laptops) associated with one or more users.
  • the electronic apparatus 10 comprises means 13 for actuating a barrier element 14 connected to the electronic processing unit 22.
  • Actuation means 13 are controlled by the electronic processing unit 22 on the basis of a processing of the at least one digital image acquired to move the barrier element 14 from a first position, in which the access to the resource is inhibited, to a second position, in which the access to the resource is enabled, or to block the movement of the barrier element 14.
  • the electronic processing unit 22 of the apparatus 10 comprises an input/output interface 25 for connecting the at least one processor 23 and the memory block 24 to the digital image acquisition means 21 and to the means 13 for actuating the barrier element 14.
  • the aforesaid electronic apparatus 10 is a pet bowl, and the resource is a food resource accommodated in a seat 15 provided in the body 1 of the pet bowl 10.
  • the teachings of the invention may be applied, with minimal modifications, even to other applications in the field of selective access for domestic animals, livestock, poultry, and fish resources.
  • the method of the invention may be applied, for example, to beddings, kennels or shelters for pets which are provided with controllable access doors and to all those situations in which it is necessary to authorize one or more animals to access a resource by discriminating them on the basis of how such animals appear visually.
  • the electronic apparatus and the method of the invention may be used, with suitable adaptations and suitable mobile barriers already present on the market, with different types of pets and breeding animals, including cats, dogs, rabbits, rodents in general, horses, cows, goats, sheep, pigs, chickens, salmon, bream, bass.
  • The actuation means 13 comprise an electric motor configured to move a lid 14, for example a transparent plexiglass lid, sliding between a closed position, in which the access by the pet to the seat 15 containing the food resource is inhibited, and an open position, in which the access by the pet to the seat 15 is enabled, and vice versa.
  • The seat 15, which may be re-closed by the lid 14, is formed in the portion 11 for accessing the resource of the bowl 10.
  • the camera or cameras 21 of the electronic bowl 10 are fastened to a supporting element 2 protruding from the body 1 of the bowl 10, in particular, from the portion 12 for controlling the access .
  • the electronic processing unit 22 of the apparatus 10 is set to run the codes of an application program implementing the method 100 of the invention .
  • the processor 23 is configured to load, in the memory block 24, and to run the codes of the application program implementing the method 100 of the present invention.
  • the method 100 comprises a symbolic starting step STR and a symbolic ending step ED.
  • the method 100 for enabling or inhibiting the access to a resource by one or more animals comprises a first step of acquiring 101, by the digital image acquisition means 21 installed on the electronic apparatus 10, at least one digital image of a volume proximal to the portion 11 for accessing the resource and outside the electronic apparatus 10, in which such a volume is adapted to contain the animal.
  • The method 100 comprises a step of processing 102, by an electronic processing unit 22 associated with the portion 12 for controlling the access, the at least one digital image acquired.
  • the aforesaid step of processing 102 the at least one digital image acquired comprises a step of performing at least one convolution operation on the at least one digital image by means of a trained convolutional neural network.
  • The method comprises a step of controlling 103, by the electronic processing unit 22, means 13 for actuating a barrier element 14 of the electronic apparatus 10 on the basis of a processing of the at least one digital image acquired to move the barrier element 14 from a first position, in which the access to the resource is inhibited, to a second position, in which the access to the resource is enabled, or to block the movement of the barrier element 14.
  • Before being passed to the neural network (step 102), the image may be pre-processed, for example, by adjusting the color channels.
  • a technique given by way of explanation includes, for example, the application, on the bowl and inside the Field Of View A of the camera 21, of a color marker known a priori. By comparing the colors detected by the camera with the actual colors known a priori , it is possible to correct the color channels of the image according to techniques known to those skilled in the art. According to a particular embodiment, such a marker may be associated with one or more colors of the bowl 10 itself, if the bowl is within the FOV of the camera 21.
  • the aforesaid step of controlling 103 the actuation means 13 of the barrier element 14, i.e., of the lid (in the case of the bowl), comprises the steps of:
  • the descriptive class of the at least one image corresponds to the activation level of at least one neuron descriptive of the at least one image.
  • The one or more descriptive neurons are the output neurons of the trained convolutional neural network.
  • The at least one descriptive class, i.e., the activation level of a single descriptive neuron, expresses a binary classification of the image indicative of the presence of at least one animal authorized to access the resource.
  • the neural network may be trained so that the activation level of the descriptive neuron is one if the authorized animal is present in the image, otherwise such a level is kept at zero.
  • the camera 21 acquires the images of the animal proximal to or moving towards the bowl 10. Such images are processed by the processor 23 and are inserted in an input layer of the neural network.
  • In the input layer, each pixel value of each color channel of the image is matched to a proportional neuronal activation.
  • The processor 23 or the computational accelerator is configured to run a "forward" pass of a first embodiment of a trained convolutional neural network.
  • Such a first embodiment of a neural network returns a binary classification depending on whether the authorized animal has been identified or not.
  • Such a classification is expressed in the form of neuronal activations of the last layer of the network, the architecture of which will be described below.
  • a control logic implemented by the processor 23 opens or keeps the lid 14 of the bowl 10 closed, to enable or to inhibit the access to the food resource by the animal.
  • the ways in which the network is trained and how the user may interact with the electronic equipment to regulate the access by the authorized animal will be described below.
  • The at least one descriptive class, i.e., the activation levels of a vector of descriptive neurons, expresses the physical features of the animal detected in the image, which are comparable with a first vector representative of physical features of at least one animal authorized to access the resource.
  • the camera 21 acquires the images of the animal proximal to or moving towards the bowl 10.
  • Such images are processed by the processor 23, or by the accelerator (VPU, GPU, NPU), in a manner similar to that described with regard to the first embodiment, by a "forward" pass of a second embodiment of a trained neural network.
  • Such a second embodiment of the neural network returns a vector of physical features detected in the image.
  • Such a vector is expressed in the form of neuronal activations of the last layer of the network which will be described below.
  • Such a vector of features may be compared with a vector of features representative of the authorized animal, conveniently stored in the memory 24 of the processing unit 22.
  • the method 100 of the invention comprises the steps of:
  • the method 100 comprises, as mentioned, a step of comparing the vector of physical features of the animal detected in the image with the first vector stored in the electronic apparatus 10 for identifying the animal authorized to access the resource.
  • Such a comparing step comprises a step of calculating a distance between the vector of physical features of the animal detected in the image and the first vector stored in the electronic apparatus 10.
  • Such a distance between vectors may, for example, be calculated in a Euclidean space or with a cosine distance. Such a distance between vectors is representative of a degree of similarity between the animal detected by the cameras 21 and the authorized animal. If such a distance is below a preset threshold, this implies that the authorized animal has been detected (a minimal sketch of such a comparison is given after this list).
  • the method 100 therefore comprises the steps of: establishing a threshold value for the distance between vectors, on the basis of an interaction of a user with the aforesaid electronic apparatus 10;
  • controlling the access to the resource so that: the access is inhibited when the distance between the vector of physical features of the animal detected in the image and the first vector stored exceeds the threshold value, and the access is enabled when that distance is below the threshold value.
  • a control logic implemented by the processor 23 opens or closes the lid 14 of the bowl 10, to enable or to inhibit the access to the food resource by the animal.
  • the at least one descriptive class is a vector of physical features representative of a part of the body of a user, for example of a hand.
  • The camera 21 acquires the images of the user proximal to or moving towards the bowl. Such images are processed by the processor, by a "forward" pass of a third embodiment of a trained neural network. Such a third embodiment returns, as output, the presence or absence of different parts of the body of the user and the features thereof. Such a presence is expressed in the form of neuronal activations of the last layer of the network.
  • The at least one descriptive class is information representative of the presence or absence of the food resource in the electronic apparatus 10, i.e., in the compartment 15 of the bowl.
  • the control logic has information available about the fact that the food has been eaten and in what quantities.
  • the aforesaid third and fourth network embodiments may coexist with one of the previous two or may be integrated therewith in a single neural network.
  • a control logic implemented by the processor 23 may be refined, making the bowl 10 an intelligent bowl capable of responding to events related to the state of the food and the intentions of the user.
  • Such a neural network comprises at least the following layers:
  • an input layer 301 configured to receive the entire digital image, or the sum of the digital images, or at least one down-sampled version of the digital image acquired with the cameras 21; at least one convolutional layer conv 1;
  • an output layer 304 with at least one neuron configured to provide the distinction between an authorized animal and an unauthorized one, for example, distinguishing the animal species, according to the first embodiment of the neural network mentioned above.
  • the output layer 304 provides the vector of features detected according to the second embodiment of the neural network mentioned above.
  • The network 300 comprises a convolution block 302 consisting, for example, of twenty-two convolutional layers conv 1, conv 2, ..., conv 22.
  • Each convolutional layer input is connected to the output of the respective preceding convolutional layer through a non-linearity of the ReLU type and a BatchNorm layer of the type known to those skilled in the art.
  • each neuron is connected only to some neighboring neurons in the previous layer.
  • the same set of weights (and local connection layout) is used for each neural connection.
  • each neuron is connected to each neuron of the previous layer and each connection has its own weight.
  • The final part of the neural network 300 consists of two fully connected layers 303a and 303b. These two layers are similar to convolutions having a kernel covering the entire input layer of the neural network 300. Therefore, these may be considered as two further convolutions configured to give a global meaning to the input layer (an illustrative sketch of such an architecture is given after this list).
  • The last layer of the block 302, conv 22, may be of a different type: for example, de-convolutional layers may be used which perform a semantic segmentation of the image revealing which pixels correspond to the animal to be identified.
  • Specific embodiments of the processing 102 and control 103 steps of the method do not alter the generality of the present invention.
  • The bowl comprises the camera 21 with an RGB Bayer filter, a dynamic range of 69.5 dB and a lens with a field of view (FOV) of 175 degrees.
  • the camera 21 is positioned at a distance of about 16 cm from the seat 15 containing the food resource and is oriented downwards by 20 degrees.
  • the neural network 300 shall be a trained network.
  • a training procedure 400 of the network 300 is described with reference to Figure 5.
  • the training method 400 includes an initial step of defining 401 a position and an orientation of the digital image acquisition means 21.
  • The method involves the acquisition 402, by the camera 21, of a plurality of digital images capturing various situations in which different animals eat from the bowl 10, situated in different environments or under different lighting conditions.
  • A step of annotating 403 the plurality of digital images acquired is included. Such an annotation is performed by associating a suitable label or code with each digital image acquired.
  • the images are divided into two classes: those containing the authorized animal and those showing an animal or animals other than the authorized one. It is apparent that such a number of classes may be arbitrarily changed without altering the meaning of the invention, for example, to authorize more than one animal.
  • The activation level thereof (conveniently normalized to the activation levels of the other output neurons) expresses the confidence that the corresponding animal has been identified.
  • The training method 400 includes the initialization 405 of the neural network 300 by assigning the neural connection weights in a random or predefined manner.
  • such a step of training the neural network 300 further comprises a step of increasing 404 the number of images employable for the training by performing further processing operations on the original images acquired. This is accomplished, for example, by performing rotations of each image, by selecting down-samples of the images or by correcting, for each image, at least one color channel.
  • the advantage achieved by such a step of increasing 404 is that of providing the neural network 300 with a greater number of images to be used for the training and, therefore, improving the learning by the network itself.
  • the following step of training 406 the network 300 occurs by means of a back-propagation method of the type known to those skilled in the art.
  • For example, the SGD (Stochastic Gradient Descent) optimization algorithm may be employed.
  • A loss is calculated at each backpropagation cycle by measuring the error between the classes predicted by the network during the training step and the real ones.
  • The loss is typically a "cross entropy" or a "binary cross entropy".
  • This type of training is advantageous to distinguish one or more animal species, or the presence of a specific feature which is very common in the animal population.
  • the network is configured to recognize the presence in the access area of a generic cat or a generic dog.
  • The network may be trained so that the activation of the last output neuron is in the range [0, 1].
  • an activation level close to zero is representative of the fact that the animal has not been identified. Since the activation level may take on any intermediate value, such a value is compared, during the control step, with a threshold stored in the memory of the electronic device.
  • Such a threshold value may be modified by means of the communication interface 27 of the bowl 10 connected in a wireless manner, by means of the Internet 28, to the personal device of the user. The user may therefore make the bowl 10 more selective, raising the threshold, or less selective by lowering it.
  • the present invention provides a method configured to allow an automatic adjustment of the threshold value.
  • the method of the invention involves storing in the memory 24 of the bowl 10: the images relating to the animal identified and the activation level of the neuron describing the presence of the animal (e.g., cat) .
  • The memory 24 of the bowl 10 includes groups of images which contain the animal to be authorized (for example, the cat), but also the animals which have attempted the access and need to be blocked (for example, the dog, the rabbit, etc.).
  • Such images are sent to the user device, not necessarily in real time, by means of the communication interface 27, possibly through the mediation of a server.
  • The bowl 10 is configured to use a "clustering" algorithm, of the type known to those skilled in the art, to identify the threshold value that best separates the images including the animal from those without it.
  • The populations of authorized animals and unauthorized animals are usually distributed around the mean value according to a Gaussian curve. It is therefore possible to use a probabilistic model, such as the Gaussian Mixture Model, to assign a membership class to the new detections (a sketch of such an automatic threshold adjustment is given after this list).
  • the training method provides that the bowl 10 learns a vector of typical features of animals.
  • The training occurs on several different types of animals so that the network learns to represent ("encoding" or "embedding") the different features with a vector.
  • the "loss” used to update the weights in the backpropagation step it is possible, for example, to use the so-called “triplet loss", i.e., present the network with two different images of the same animal and a third image showing a different animal.
  • the "loss” is calculated by calculating the two vector distances in Euclidean space.
  • the neural network is induced to internally adjust the weights so that the distance between the two images of the same animal is ideally zero and the distance between images of different animals is maximized.
  • The network is induced to understand all those invariant features which denote a specific animal and at the same time to abstract from all the circumstantial features which are not useful to characterize it (for example, a different posture, a different expression, different lighting, etc.).
  • the vector encoding the features is not necessarily interpretable by a human being.
  • the training step is performed on a processing unit (e.g., personal computer) different from the processor of the bowl.
  • the bowl 10 may be sold to the user with this second embodiment of a trained neural network already loaded. Obviously, such a neural network has not been trained for the specific animal of the user.
  • the method involves a set-up step in which the user instructs the bowl on the authorized animals and a usage step in which the bowl selects the authorized animal .
  • the bowl initially opens for any animal, regardless of the features of the latter and therefore of the neuron vector which has been activated.
  • The neural network calculates the vector of output neurons and saves it together with the image of the animal in a local or remote database.
  • The user, by means of the portable device thereof, receives from the bowl the images of the various openings of the bowl and identifies the different animals which have accessed it.
  • The user enters a unique identifier for each animal; such an identifier is then conveniently associated with the database.
  • specific activations of the neuron vector are associated with specific animal identities or related access privileges.
  • The user, still by means of the interface 27, specifies one or more authorized animals and one or more unauthorized animals. Such a list is sent to the bowl 10 and then stored in the memory 24.
  • the electronic bowl 10 is ready to operate normally: when an animal enters the volume of space in front of the bowl, the image thereof is captured by the camera; sent to the processor 23; processed to obtain a vector which expresses the physical features of the animal; such a vector is compared with the aforesaid vectors stored in the database of the bowl according to a distance criterion; the identity of the animal corresponding to the nearest vector is obtained; the information concerning whether the identity of the animal is authorized to access is obtained and the barrier element 14 is moved according to logics which are detailed below.
  • The user may set, by means of the communication interface 27, the threshold value discriminating the distance (e.g., cosine similarity) between neuron vectors within which a certain animal is considered sufficiently similar to itself in other circumstances, so that the user may properly adjust the selectivity of the bowl.
  • the aforesaid at least one descriptive class is a vector of physical features of the animal detected in the image, such a vector being provided as input to a classifier adapted to generate information representative of the presence of the authorized animal.
  • the method comprises the steps of :
  • the step of providing the plurality of first vectors comprises the steps of:
  • the classifier is a further trained convolutional neural network configured to receive as input the vector of physical features of the animal detected in the image and to return as output the recognition of the animal or a relevant access privilege .
  • Such a classification method consists of training the further neural network, smaller in size with respect to the first network, adapted to classify the vector of animal features output by the convolutional neural network into as many classes as there are user animals, or into the two access privileges (access allowed, access not allowed).
  • a first convolutional neural network trained to extract, from the image of the animal, a vector of features and a second neural network trained to attribute such a vector of features to a specific animal identity or to a particular access privilege .
  • The first network is provided to the user already pre-trained.
  • The second network, smaller in size with respect to the first network, may be easily trained by the user by using a few examples for each individual animal.
  • the network may be trained for the specific animal on a remote computer or in the cloud by means of the Internet connection, and then downloaded locally.
  • the simplest control logic is adapted to command the opening of the bowl 10 in the presence of the authorized animal, while it closes the bowl after a predetermined period of time from the last moment in which the authorized animal has been recognized.
  • a more sophisticated control logic may prevent other animals from taking the place of the authorized animal after the bowl 10 has been opened: in this case, the electronic apparatus 10 would recognize that the animal has changed thus imposing the closure of the lid 14 of the bowl 10.
  • The control logic of the bowl 10 is also configured to send the user, in real time, notification messages about the moments in which the animal eats, by virtue of the connection to the Internet 28.
  • The control logic, having acquired information regarding the amount of food resource contained in the bowl 10, may, for example, alert a user of the empty bowl status or operate a filling mechanism, if present. Such an alert may occur with a sound, with a light code placed on the bowl, with a voice assistant or as a notification on the mobile device of the user.
  • The control logic may recognize the approaching gesture and open the bowl without the user pressing a button.
  • the information saved in the memory 24 of the bowl 10 may be sent by means of the interface 27 and the Internet 28 to a remote server. This allows the continuous training of the neural network. New versions of neural networks may be downloaded locally by the bowl.
  • the electronic apparatus 10 and the relevant method 100 for enabling or inhibiting the access to a resource by one or more animals by means of image processing of the present invention have several advantages.
  • the electronic bowl 10 enables or inhibits access to the food resource by an animal in a selective manner, distinguishing between animals of different species and even distinguishing animals of the same species.
  • the electronic bowl 10 of the invention opens only following the detection of the authorized animal, not in the presence of unauthorized animals or other living beings in general, and therefore overcomes the limits of the known bowls based on proximity sensors.
  • the electronic bowl 10 does not require applying to the animal chips with subcutaneous RFID tags or fixed to nameplates: the electronic bowl 10 is therefore more practical than the known solutions.
  • Visual identification by means of convolutional neural networks makes it possible to operate with a high degree of freedom as to the position of the animal in the proximal volume, the lighting of the environment, the expressions or postures of the animal, and shadows and reflections.
  • The application of the convolutional network technology to the specific technical problem, according to the modes described in the method, allows a very high degree of abstraction in the recognition of the animal itself.
  • Selective resource access control is important to prevent unauthorized individuals from ingesting food destined for a particular species of pet. This is particularly useful in contexts in which it is necessary to prevent small children from ingesting animal food.
  • the suggested methodology may also be applied in the fish industry, to allow only certain fishes to access tanks in which a type of food or a pharmacological treatment is provided.
  • The method of the invention solves the problem of authorizing individual animals to access in a rapid manner, as it does not require a long and repeated collection of images of the specific individual. Conversely, it makes it possible, by leveraging a single pre-training of the network, to add a new individual by presenting only a few images thereof by means of an interface which is simple for the user.
  • With the methodology of the invention it is possible to discriminate the presence of a particular condition of the skin or hair of the animal or the presence of a fish parasite. Furthermore, with the method of the invention it is possible to prevent the access to the resource by a domestic animal with which a foreign body is temporarily associated, such as, for example, in the case of a cat attempting to introduce a prey (e.g., a mouse) into the domestic environment.
  • the system improves the safety ensured by the bowl with respect to known solutions.
  • The camera 21 may recognize the status of the food resource contained in the compartment 15, thus avoiding closures which are unsuitable, dangerous or unpleasant for the animals.
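
The comparison between the vector of physical features extracted from the image and the first vectors stored in the apparatus, referenced in the list above, can be illustrated with the following minimal sketch. The embedding size, the cosine-distance choice and all names are assumptions introduced here for illustration only, not the patent's implementation.

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus the cosine similarity between two feature vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def access_decision(detected_vector, authorized_vectors, threshold):
    """Enable access only if the detected animal is close enough to a stored authorized vector."""
    distances = [cosine_distance(detected_vector, ref) for ref in authorized_vectors]
    return min(distances) < threshold

# Hypothetical usage: 128-dimensional embeddings and a user-adjustable threshold
rng = np.random.default_rng(0)
stored = [rng.normal(size=128) for _ in range(3)]    # "first vectors" stored in the memory of the bowl
probe = stored[0] + 0.05 * rng.normal(size=128)      # vector extracted from a newly acquired frame
print("open" if access_decision(probe, stored, threshold=0.2) else "closed")
```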
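The sketch below illustrates, in PyTorch, a network laid out as described in the list above: a block of twenty-two convolutional layers, each followed by BatchNorm and ReLU, two fully connected layers 303a/303b and an output layer returning a feature vector. The channel counts, strides and embedding size are not specified in the text and are purely hypothetical placeholders.

```python
import torch
import torch.nn as nn

class FeatureNetwork300(nn.Module):
    """Illustrative sketch of the network 300 (layer sizes are assumptions)."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        layers, channels = [], 3
        for i in range(22):                           # conv 1 ... conv 22
            out_channels = min(16 * (1 + i // 4), 128)
            stride = 2 if i % 4 == 0 else 1           # occasionally downsample
            layers += [nn.Conv2d(channels, out_channels, 3, stride=stride, padding=1),
                       nn.BatchNorm2d(out_channels), nn.ReLU()]
            channels = out_channels
        self.conv_block = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1))
        self.fc_303a = nn.Linear(channels, 256)       # fully connected layer 303a
        self.fc_303b = nn.Linear(256, 256)            # fully connected layer 303b
        self.output_304 = nn.Linear(256, embedding_dim)

    def forward(self, x):
        h = self.conv_block(x).flatten(1)
        h = torch.relu(self.fc_303a(h))
        h = torch.relu(self.fc_303b(h))
        return self.output_304(h)                     # vector of output neuron activations

# Hypothetical usage on a dummy RGB image
net = FeatureNetwork300()
print(net(torch.rand(1, 3, 224, 224)).shape)          # torch.Size([1, 128])
```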
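As an illustration of the automatic threshold adjustment sketched in the list above, the following snippet fits a two-component Gaussian mixture to stored activation levels and places the decision threshold between the two component means. The use of scikit-learn and the halfway placement of the threshold are assumptions, not the patent's prescribed procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def auto_threshold(activation_levels):
    """Fit a two-component 1-D Gaussian mixture to the stored activation levels and
    place the decision threshold halfway between the two component means."""
    x = np.asarray(activation_levels, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    return float(np.mean(gmm.means_))

# Hypothetical usage: activation levels logged for frames with and without the authorized animal
with_cat = np.random.default_rng(1).normal(0.85, 0.05, 200)
without_cat = np.random.default_rng(2).normal(0.15, 0.08, 200)
print(auto_threshold(np.concatenate([with_cat, without_cat])))   # roughly 0.5
```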
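The triplet-loss training step described in the list above could be sketched as follows; the use of PyTorch's built-in triplet margin loss, the margin value and the toy embedding network in the usage example are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def triplet_training_step(network, optimizer, anchor_img, positive_img, negative_img, margin=1.0):
    """One backpropagation step with a triplet loss: two images of the same animal
    (anchor, positive) and one image of a different animal (negative). The Euclidean
    distance between same-animal embeddings is pushed towards zero, while the distance
    to the different animal is pushed beyond the margin."""
    optimizer.zero_grad()
    a = network(anchor_img)
    p = network(positive_img)
    n = network(negative_img)
    loss = F.triplet_margin_loss(a, p, n, margin=margin, p=2)   # p=2: Euclidean distance
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with a toy embedding network and random image batches
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 32))
opt = torch.optim.SGD(net.parameters(), lr=0.01)    # SGD, as mentioned for the training step
imgs = [torch.rand(8, 3, 64, 64) for _ in range(3)]
print(triplet_training_step(net, opt, *imgs))
```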

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Birds (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and an electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals. Such an electronic apparatus includes a portion (11) for accessing the resource and a portion (12) for controlling the access. The method comprises the steps of: acquiring (101) at least one digital image of a volume proximal to the portion for accessing the resource and outside the electronic apparatus, such a volume being adapted to contain the animal; processing (102) the at least one digital image acquired, said step of processing comprising a step of performing at least one convolution operation on the at least one digital image by means of a trained convolutional neural network; controlling (103) the means (13) for actuating a barrier element (14) of the electronic apparatus on the basis of a processing of said at least one digital image acquired to move the barrier element from a first position, in which the access to the resource is inhibited, to a second position, in which the access to the resource is enabled, or to block the movement of the barrier element. The step of controlling the means for actuating the barrier element comprises the steps of: obtaining (104) at least one descriptive class of the at least one image of a volume proximal to the portion for accessing the resource and outside the electronic apparatus on the basis of said processing; generating (105) at least one signal for controlling the means for actuating the barrier element on the basis of said at least one descriptive class of the image.

Description

DESCRIPTION
METHOD AND ELECTRONIC APPARATUS FOR ENABLING THE ACCESS TO A RESOURCE BY ONE OR MORE ANIMALS BY MEANS OF IMAGE PROCESSING

TECHNOLOGICAL BACKGROUND OF THE INVENTION
Field of application
The present invention relates, in general, to an electronic apparatus for enabling or inhibiting the access to a resource by one or more animals by means of image processing and the relevant operating method. In general, the invention is applicable to livestock feeders or fish tanks which are configured to ensure or deny access to a food or pharmacological resource to a group of animals or to a particular animal of the group. In particular, the invention relates to an electronic bowl for enabling or inhibiting the access to a food resource accommodated into the bowl by an animal on the basis of image processing.
Prior art
Electronic bowls for containing food resources for animals, for example, domestic or breeding animals, are known, equipped with barrier means which may be opened and re-closed to give access and dispense such food resources. In particular, such electronic bowls comprise suitable sensors, for example infrared sensors, placed along the perimeter of the bowl itself to detect a living being approaching the bowl and, consequently, to open the barrier means.
Such a bowl solution, although useful for the purpose of preserving the integrity of the food resource, has the drawback that the barrier means open when any living being approaches the bowl, whether animals or humans, including children.
Electronic bowls for pets are also known which include controllable barrier means to selectively dispense food resources to animals on the basis of RFID sensors. In particular, such sensors are configured to detect the presence of a respective nameplate or tag associated with an animal, which may correspond to a subcutaneous chip applied to the animal or may be fixed to a pet tag associated with the animal. By comparing such a tag with a pre-set list of tags authorized to access the food resource, such an electronic bowl selectively enables or inhibits the access to the resource contained in the bowl to authorized animals only.
Such a solution also has drawbacks. In fact, not all animals are generally provided with a subcutaneous chip. Furthermore, for types of animals of low economic value, such as fish and poultry, implanting the subcutaneous chip may represent a substantial cost.
Furthermore, pet owners may prefer not to equip the animals with pet tags. In addition, the subcutaneous RFID tag configured to communicate with this type of bowl is invasive, since the corresponding sensor should be positioned exactly on the back of the animal while the latter eats from the bowl. Furthermore, the reading distance of the RFID is limited by the power range of the electromagnetic signals involved.
Furthermore, there are circumstances in which the selection of the accesses to a resource cannot be made on the basis of RFID sensors. This occurs, in particular, when the criterion for discriminating an animal is a specific attribute of the animal itself, for example: a pathological condition of the skin, coat or scales of the animal; the presence or absence of parasites; reaching a certain length, height or pigmentation; a particular color of the plumage in the case of avian species.
SUMMARY OF THE INVENTION
It is the object of the present invention to devise and provide an electronic device and a relative method for enabling or inhibiting the access to a resource by one or more animals by means of image processing which make it possible to at least partially overcome the drawbacks discussed above in relation to the known solutions.
Such an object is achieved by a method for enabling or inhibiting the access to a resource by one or more animals by means of image processing in accordance with claim 1.
The present invention also relates to an electronic apparatus operating on the basis of the aforesaid method for enabling or inhibiting the access to a resource by one or more animals in accordance with claim 16.
In greater detail, such an electronic apparatus is an electronic bowl for enabling or inhibiting the access to a food resource accommodated into the bowl by an animal on the basis of image processing.
Advantageously, the method for allowing the access to a resource by an animal is based on the processing of images of the area in front of the resource, access to which is controlled by a barrier element.
Advantageously, such an image processing takes place by employing convolutional neural networks trained to supply the electronic apparatus with information about the presence or absence of the authorized animal in the image of the area in front of the resource.
According to an embodiment, the detection of the presence of the animal discriminates the species of the animal itself. In other embodiments it is possible to discriminate the breed of the animal or the individual for a targeted access control.
According to an embodiment, the user may set, by means of a suitable interface, the parameters useful for selecting the animals adequately, without requiring a retraining of the neural network.
The information returned by the processing is used to control means for actuating the apparatus, for example a motor, adapted to move the aforesaid barrier element (for example, a cover in the case of the bowl or of a feeder, or a door in the case of access to a stable or a kennel) which allows or denies the animal access to the resource.
Preferred embodiments of such an electronic apparatus and of the method for enabling or inhibiting the access to a resource by one or more animals by means of image processing are described in the dependent claims .
BRIEF DESCRIPTION OF THE DRAWINGS
Further features and advantages of such an electronic apparatus and of the method for enabling or inhibiting the access to a resource by one or more animals by means of image processing in accordance with the invention will become apparent from the following description of preferred embodiments, given by way of indicative and non-limiting example, with reference to the accompanying drawings, in which:
Figure 1 shows a perspective image of an electronic apparatus, in particular an electronic bowl for pets, for enabling or inhibiting the access to a resource by one or more animals by means of image processing in accordance with the invention;
Figure 2 diagrammatically shows structural details of the electronic bowl of Figure 1;
Figure 3 shows, in a flow diagram, a method for enabling or inhibiting the access to a resource by one or more animals by means of image processing implemented by the electronic bowl of Figures 1-2;
Figure 4 shows, in a logical diagram, an embodiment of a neural network, comprising convolutional levels, employed in the method of the invention and configured to return a classification of digital images of the area in front of the resource to be accessed, which includes the aforesaid animal;
Figure 5 shows, in a flow diagram, a training method of the neural network of Figure 4.
In the aforesaid Figures, equal or similar elements are indicated by means of the same reference numerals.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
With reference to Figures 1-2, an example of electronic apparatus for enabling or inhibiting the access to a resource by one or more animals by means of image processing, operating in accordance with the method of the invention, is indicated as a whole by reference numeral 10.
In particular, the electronic apparatus 10 comprises a body 1 which includes a portion 11 for accessing the resource and a portion 12 for controlling the access. The electronic apparatus 10 is configured to allow or deny access by an animal to the resource by means of the movement of a barrier element 14 enabling or inhibiting the access to such a resource.
In greater detail, the apparatus 10 advantageously operates on the basis of image processing employing trained convolutional neural networks.
The electronic apparatus 10 comprises digital image acquisition means 21 configured to acquire at least one digital image of a volume proximal to the portion 11 for accessing the resource and outside the apparatus 10, which volume is adapted to contain the animal.
In other words, such means 21 are characterized by a respective orientation and angle width so that the Field of View (or FOV) , indicated by the width of angle A in Figure 2, is sufficiently extended to include at least one portion of the body of the animal used to identify the animal itself, during the attempt by the animal to access the resource. Such digital image acquisition means 21 are configured to acquire, for example, continuously or at predetermined time intervals, sequences of images or frames of the volume proximal to the portion 11 for accessing the resource of the apparatus 10 which includes such a portion of the body of the animal.
Such image acquisition means are embodied, for example, by one or more cameras 21. Each camera 21 is configured to acquire images in grayscale or, preferably, in the color-coded visible spectrum (for example, RGB) . The camera 21 may be chosen to operate in the visible or infrared spectrum, in the thermal radiation spectrum or in the ultraviolet spectrum, or is configured to complete the optical information on the image acquired by employing a channel dedicated to depth (for example, RGB-D) .
The electronic apparatus 10 further comprises an electronic processing unit 22 associated with the portion 12 for controlling the access to the apparatus 10 and connected to the digital image acquisition means 21.
Such an electronic processing unit 22 comprises at least one processor 23 and one memory block 24, associated with the processor for storing instructions. In particular, such a memory block 24 is connected to the processor 23 by means of a data communication line or bus 26 (for example, PCI) and consists, for example, of a service memory of the volatile type (for example, of the SDRAM type) and of a system memory of the non-volatile type (for example, of the SSD type).
The processor 23 may be connected by means of a suitable communication interface to a computational accelerator specialized in convolution operations, such as, for example, a Neural Processing Unit (NPU) or a Graphic Processing Unit (GPU) or a Visual Processing Unit (VPU) . In particular, the processor 23 is configured to delegate the necessary convolution operations to such a computational accelerator, according to the implementation of the method described.
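By way of illustration only, the following minimal sketch shows how such a delegation of the convolution operations to an accelerator could look in a generic deep-learning framework. PyTorch is used here purely as an assumption; the patent does not name a specific framework, and the model and tensor names are hypothetical.

```python
import torch
import torch.nn as nn

# Pick the accelerator if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy convolutional stage standing in for the network run on behalf of the processor 23.
conv_stage = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).to(device)

frame = torch.rand(1, 3, 224, 224).to(device)   # acquired RGB frame, moved to the accelerator
with torch.no_grad():
    features = conv_stage(frame)                # convolutions executed on the selected device
print(features.shape, features.device)
```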
Furthermore, the electronic processing unit 22 comprises a data communication interface 27, for example, of the wireless type, configured to connect such a processing unit 22 to a data communication network 28, for example, the Internet, and to allow the processing unit to communicate with remote electronic devices, such as, for example, servers or portable devices (smartphones, tablets, laptops) associated with one or more users.
In addition, the electronic apparatus 10 comprises means 13 for actuating a barrier element 14 connected to the electronic processing unit 22. Advantageously, such actuation means 13 are controlled by the electronic processing unit 22 on the basis of a processing of the at least one digital image acquired to move the barrier element 14 from a first position, in which the access to the resource is inhibited, to a second position, in which the access to the resource is enabled, or to block the movement of the barrier element 14.
Furthermore, the electronic processing unit 22 of the apparatus 10 comprises an input/output interface 25 for connecting the at least one processor 23 and the memory block 24 to the digital image acquisition means 21 and to the means 13 for actuating the barrier element 14.
In a preferred and non-limiting embodiment of the invention, the aforesaid electronic apparatus 10 is a pet bowl, and the resource is a food resource accommodated in a seat 15 provided in the body 1 of the pet bowl 10.
In the following description reference will be made to this specific embodiment of the electronic apparatus 10. However, the teachings of the invention may be applied, with minimal modifications, even to other applications in the field of selective access for domestic animals, livestock, poultry, and fish resources. In the domestic environment, the method of the invention may be applied, for example, to beddings, kennels or shelters for pets which are provided with controllable access doors and to all those situations in which it is necessary to authorize one or more animals to access a resource by discriminating them on the basis of how such animals appear visually.
It should be noted that the electronic apparatus and the method of the invention may be used, with suitable adaptations and suitable mobile barriers already present on the market, with different types of pets and breeding animals, including cats, dogs, rabbits, rodents in general, horses, cows, goats, sheep, pigs, chickens, salmon, bream, bass.
In the embodiment of the electronic bowl 10, the actuation means 13 comprise an electric motor configured to move a lid 14, for example a transparent plexiglass lid, sliding between a closed position, in which the access by the pet to the seat 15 containing the food resource is inhibited, and an open position, in which the access by the pet to the seat 15 is enabled, and vice versa. The seat 15, which may be re-closed by the lid 14, is formed in the portion 11 for accessing the resource of the bowl 10. It should be noted that the camera or cameras 21 of the electronic bowl 10 are fastened to a supporting element 2 protruding from the body 1 of the bowl 10, in particular, from the portion 12 for controlling the access.
With reference to Figure 3, the operative steps of the method 100 for enabling or inhibiting the access to a resource by one or more animals on the basis of image processing implemented by the electronic apparatus 10 are described below in greater detail.
In an embodiment, the electronic processing unit 22 of the apparatus 10 is set to run the codes of an application program implementing the method 100 of the invention .
In a particular embodiment, the processor 23 is configured to load, in the memory block 24, and to run the codes of the application program implementing the method 100 of the present invention.
The method 100 comprises a symbolic starting step STR and a symbolic ending step ED.
In the most general embodiment, the method 100 for enabling or inhibiting the access to a resource by one or more animals comprises a first step of acquiring 101, by the digital image acquisition means 21 installed on the electronic apparatus 10, at least one digital image of a volume proximal to the portion 11 for accessing the resource and outside the electronic apparatus 10, in which such a volume is adapted to contain the animal.
Furthermore, the method 100 comprises a step of processing 102, by an electronic processing unit 22 associated with the portion 12 for controlling the access, the at least one digital image acquired.
In an embodiment of the method 100 of the invention, the aforesaid step of processing 102 the at least one digital image acquired, comprises a step of performing at least one convolution operation on the at least one digital image by means of a trained convolutional neural network.
Furthermore, the method comprises a step of controlling 103, by the electronic processing unit 22, means 13 for actuating a barrier element 14 of the electronic apparatus 10 on the basis of a processing of the at least one digital image acquired, to move the barrier element 14 between a first position, in which the access to the resource is inhibited, and a second position, in which the access to the resource is enabled, or to block the movement of the barrier element 14.
Before being passed to the neural network (step 102), the image may be pre-processed, for example, by adjusting the color channels. One technique, given by way of example, is the application, on the bowl and inside the field of view A of the camera 21, of a color marker whose colors are known a priori. By comparing the colors detected by the camera with the actual colors known a priori, it is possible to correct the color channels of the image according to techniques known to those skilled in the art. According to a particular embodiment, such a marker may be associated with one or more colors of the bowl 10 itself, if the bowl is within the FOV of the camera 21.
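Purely by way of illustration, a minimal Python sketch of such a per-channel correction is reported below; the function name correct_color_channels, the marker position and the white reference color are assumptions made for the example and are not part of the disclosure.

    import numpy as np

    def correct_color_channels(image, marker_region, reference_rgb):
        """Rescale each RGB channel so that the marker region, whose true
        color is known a priori, matches its reference value."""
        img = image.astype(np.float32)
        observed = img[marker_region].reshape(-1, 3).mean(axis=0)      # color measured on the marker
        gains = np.asarray(reference_rgb, dtype=np.float32) / np.maximum(observed, 1e-6)
        return np.clip(img * gains, 0, 255).astype(np.uint8)           # per-channel gain correction

    # Example: a white marker (255, 255, 255) assumed to occupy the top-left 20x20 pixels.
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    balanced = correct_color_channels(frame, (slice(0, 20), slice(0, 20)), (255, 255, 255))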
Advantageously, the aforesaid step of controlling 103 the actuation means 13 of the barrier element 14, i.e., of the lid (in the case of the bowl), comprises the steps of:
- obtaining 104, by the electronic processing unit 22, at least one descriptive class of the at least one image of a volume proximal to the portion 11 for accessing the resource and outside the electronic apparatus 10 on the basis of said processing by means of the convolutional neural network;
- generating 105, by the electronic processing unit 22, at least one signal for controlling the means 13 for actuating the barrier element 14 on the basis of said at least one descriptive class of the image.
In particular, in the present description, the descriptive class of the at least one image corresponds to the activation level of at least one neuron descriptive of the at least one image. Furthermore, the one or more descriptive neurons are the output neurons of the trained convolutional neural network.
In a first non-limiting embodiment, the at least one descriptive class, i.e., the activation level of a single descriptive neuron, expresses a binary classification of the image indicative of the presence of at least one animal authorized to access the resource.
For example, the neural network may be trained so that the activation level of the descriptive neuron is one if the authorized animal is present in the image, otherwise such a level is kept at zero.
According to this example, the camera 21 acquires the images of the animal proximal to or moving towards the bowl 10. Such images are processed by the processor 23 and are fed to an input layer of the neural network. As it is known, in the input layer of the convolutional neural network, a neuronal activation proportional to the value of each pixel of each color channel of the image is initially assigned. The processor 23 or the computational accelerator (CPU, GPU, VPU) is configured to run a forward pass of a first embodiment of a trained convolutional neural network. Such a first embodiment of a neural network returns a binary classification depending on whether the authorized animal has been identified or not. Such a classification is expressed in the form of neuronal activations of the last layer of the network, the architecture of which will be described below. On the basis of such a binary classification, a control logic implemented by the processor 23 opens the lid 14 of the bowl 10 or keeps it closed, to enable or to inhibit the access to the food resource by the animal. The ways in which the network is trained and how the user may interact with the electronic apparatus to regulate the access by the authorized animal will be described below.
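A minimal sketch of this decision logic is shown below, assuming the forward pass has already produced an activation in [0, 1]; the threshold value and the FakeLid class are illustrative assumptions, not the actual firmware of the apparatus.

    OPEN_THRESHOLD = 0.8        # confidence required to open the lid (illustrative value)

    def control_lid(activation, lid):
        """Open the lid when the descriptive neuron signals the authorized
        animal, otherwise keep the access to the food resource inhibited."""
        if activation >= OPEN_THRESHOLD:
            lid.open()          # authorized animal detected: enable access
        else:
            lid.close()         # unauthorized animal or nothing detected: inhibit access

    class FakeLid:              # stand-in for the actuation means 13
        def open(self):  print("lid -> open position")
        def close(self): print("lid -> closed position")

    control_lid(activation=0.93, lid=FakeLid())   # activation as produced by the forward pass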
In a second non-limiting embodiment, the at least one descriptive class, i.e., the activation levels of a vector of descriptive neurons, expresses the physical features of the animal detected in the image and comparable with a first vector representative of physical features of at least one animal authorized to access the resource.
In this example, the camera 21 acquires the images of the animal proximal to or moving towards the bowl 10. Such images are processed by the processor 23, or by the accelerator (VPU, GPU, NPU), in a manner similar to that described with regard to the first embodiment, by a forward pass of a second embodiment of a trained neural network. As mentioned, such a second embodiment of the neural network returns a vector of physical features detected in the image. Such a vector is expressed in the form of neuronal activations of the last layer of the network, which will be described below. Such a vector of features may be compared with a vector of features representative of the authorized animal, conveniently stored in the memory 24 of the processing unit 22.
In greater detail, the ways in which a user may store new vectors in the memory will be described below.
The method 100 of the invention comprises the steps of:
- generating, by the trained convolutional neural network run by the electronic processing unit 22, said first vector on the basis of a processing of a digital image of the animal;
- storing the first vector generated in the electronic apparatus 10;
- associating, on the basis of an interaction of a user with such an electronic apparatus 10, the first vector of physical features stored with an animal authorized to access the resource or with an animal not authorized.
The method 100 comprises, as mentioned, a step of comparing the vector of physical features of the animal detected in the image with the first vector stored in the electronic apparatus 10 for identifying the animal authorized to access the resource.
Such a comparing step comprises a step of calculating a distance between the vector of physical features of the animal detected in the image and the first vector stored in the electronic apparatus 10.
Such a distance between vectors may, for example, be calculated in a Euclidean space or with a cosine distance. Such a distance between vectors is representative of a degree of similarity between the animal detected by the cameras 21 and the authorized animal. If such a distance is below a preset threshold, this implies that the authorized animal has been detected.
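The comparison may be sketched in Python as follows; the vector values and the threshold are invented for the example, and the cosine distance is only one of the possible metrics mentioned above.

    import numpy as np

    def euclidean_distance(a, b):
        return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

    def cosine_distance(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def is_authorized(detected_vec, stored_vec, threshold, metric=cosine_distance):
        """The animal is considered authorized when the distance between the detected
        feature vector and the stored first vector falls below the threshold."""
        return metric(detected_vec, stored_vec) < threshold

    stored   = np.array([0.12, -0.40, 0.88, 0.05])   # first vector, saved during set-up
    detected = np.array([0.10, -0.35, 0.90, 0.02])   # vector extracted from the current image
    print(is_authorized(detected, stored, threshold=0.1))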
The method 100 therefore comprises the steps of:
- establishing a threshold value for the distance between vectors, on the basis of an interaction of a user with the aforesaid electronic apparatus 10;
- comparing, by the electronic processing unit 22, such a threshold value with the distance calculated between the vector of physical features of the animal detected in the image and the first vector stored;
- controlling the access to the resource so that: the access is inhibited when the distance between the vector of physical features of the animal detected in the image and the first vector stored exceeds the threshold value, and the access is enabled when the distance between the vector of physical features of the animal detected in the image and the first vector stored is below the threshold value.
On the basis of such information, a control logic implemented by the processor 23 opens or closes the lid 14 of the bowl 10, to enable or to inhibit the access to the food resource by the animal.
In a third non-limiting embodiment, the at least one descriptive class is a vector of physical features representative of a part of the body of a user, for example of a hand.
In such a third embodiment, the camera 21 acquires the images of the user proximal to or moving towards the bowl. Such images are processed by the processor, by a forward pass of a third embodiment of a trained neural network. Such a third embodiment returns, as output, the presence or absence of different parts of the body of the user and the features thereof. Such a presence is expressed in the form of neuronal activations of the last layer of the network.
For example, it is possible to recognize the presence of a hand of a user willing to access the bowl 10.
Alternatively, in a fourth embodiment, the at least one descriptive class is information representative of the presence or absence of the food resource in the electronic apparatus 10, i.e., in the compartment 15 of the bowl. Thereby, the control logic has available information on whether the food has been eaten and in what quantity.
The aforesaid third and fourth network embodiments may coexist with one of the previous two or may be integrated therewith in a single neural network. With this additional information, a control logic implemented by the processor 23 may be refined, making the bowl 10 an intelligent bowl capable of responding to events related to the state of the food and the intentions of the user.
With reference to Figure 4, an example is described of a convolutional neural network 300 which may be employed in all the embodiments of the method 100 of the present invention.
Such a neural network comprises at least the following layers:
- an input layer 301 configured to receive the entire digital image, or the sum of the digital images, or at least one down-sampled digital image acquired with the cameras 21;
- at least one convolutional layer conv 1;
- at least one fully connected layer 303a;
- an output layer 304 with at least one neuron configured to provide the distinction between an authorized animal and an unauthorized one, for example, distinguishing the animal species, according to the first embodiment of the neural network mentioned above. Alternatively, the output layer 304 provides the vector of features detected according to the second embodiment of the neural network mentioned above.
In greater detail, the network 300 comprises a convolution block 302 consisting, for example, of twenty-two convolutional layers conv 1, conv 2, conv 3, ..., conv 22 in cascade, also of the DepthWise Convolution (DW) type known to those skilled in the art. The input of each convolutional layer is connected to the output of the preceding convolutional layer through a non-linearity of the ReLU type and a BatchNorm layer, of the type known to those skilled in the art.
As known, in a convolutional layer of a neural network, each neuron is connected only to some neighboring neurons in the previous layer. The same set of weights (and local connection layout) is used for each neural connection. On the contrary, in a fully connected layer of the network, each neuron is connected to each neuron of the previous layer and each connection has its own weight.
In the example of Figure 4, the neural network 300 comprises two fully connected layers 303a and 303b. These two layers are similar to convolutions having a kernel covering their entire input; therefore, they may be considered as two further convolutions configured to give a global meaning to the extracted features.
It should be noted that the last layer of the block 302, conv 22, may be of a different type: for example, de-convolutional layers may be used, which perform a semantic segmentation of the image revealing which pixels correspond to the animal to be identified. In any case, specific embodiments of the processing 102 and controlling 103 steps of the method do not alter the generality of the present invention.
As known, as the number of convolutional layers of the network increases (a "deeper" network), the network gains predictive accuracy. The choice of a specific convolutional neural network architecture does not alter the generality of the invention.
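As a purely illustrative aid, a compact PyTorch sketch of a network with the structure outlined above (a cascade of depthwise-separable convolutions with BatchNorm and ReLU, two fully connected layers and a configurable output) is given below; the class name BowlNet, the number of blocks, the channel widths and the input size are assumptions made for the example, not the patented architecture.

    import torch
    import torch.nn as nn

    def dw_block(cin, cout, stride=1):
        """Depthwise-separable convolution: depthwise conv + pointwise conv,
        each followed by BatchNorm and a ReLU non-linearity."""
        return nn.Sequential(
            nn.Conv2d(cin, cin, 3, stride=stride, padding=1, groups=cin, bias=False),
            nn.BatchNorm2d(cin), nn.ReLU(inplace=True),
            nn.Conv2d(cin, cout, 1, bias=False),
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        )

    class BowlNet(nn.Module):
        """Toy stand-in for network 300: a convolution block followed by two fully
        connected layers, the second of which acts as the output layer (a single
        descriptive neuron in the first embodiment, a feature vector in the second)."""
        def __init__(self, out_dim=1):
            super().__init__()
            self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1, bias=False),
                                      nn.BatchNorm2d(16), nn.ReLU(inplace=True))
            self.convs = nn.Sequential(dw_block(16, 32, 2), dw_block(32, 64, 2),
                                       dw_block(64, 128, 2), dw_block(128, 128, 2))
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc_a = nn.Linear(128, 64)       # fully connected layer (cf. 303a)
            self.fc_b = nn.Linear(64, out_dim)   # fully connected / output layer (cf. 303b, 304)

        def forward(self, x):
            x = self.pool(self.convs(self.stem(x))).flatten(1)
            return self.fc_b(torch.relu(self.fc_a(x)))

    net = BowlNet(out_dim=1)                      # single-neuron head for the binary case
    logit = net(torch.randn(1, 3, 160, 160))      # one down-sampled RGB frame
    print(torch.sigmoid(logit))                   # activation in [0, 1]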
With reference to the solution for an electronic bowl 10 for animals described above, according to an exemplary and non-limiting aspect, the bowl comprises the camera 21 with an RGB Bayer filter, a dynamic range of 69.5 dB and a lens with a field of view FOV of 175 degrees. With reference to Figure 2, the camera 21 is positioned at a distance of about 16 cm from the seat 15 containing the food resource and is oriented downwards by 20 degrees.
For the correct operation of the method 100, the neural network 300 shall be a trained network. A training procedure 400 of the network 300 is described with reference to Figure 5.
The training method 400 includes an initial step of defining 401 a position and an orientation of the digital image acquisition means 21.
The method involves the acquisition 402, by the camera 21, of a plurality of digital images capturing various situations in which different animals eat from the bowl 10, situated in different environments or under different lighting conditions.
Furthermore, a step of annotating 403 the plurality of digital images acquired is included. Such an annotation is performed by associating a suitable label or code with each digital image acquired.
In particular, in the first example of an embodiment of a neural network described above, the images are divided into two classes: those containing the authorized animal and those showing an animal or animals other than the authorized one. It is apparent that such a number of classes may be arbitrarily changed without altering the meaning of the invention, for example, to authorize more than one animal.
In this case, for each animal, there will be a neuron in the output layer of the neural network, the activation level thereof (conveniently normalized with respect to the activation levels of the other output neurons) expressing the confidence that the corresponding animal has been identified.
At this point, the training method 400 includes the initialization 405 of the neural network 300 by assigning neural connection weights thereto in a random or predefined manner.
In a preferred embodiment of the training method 400, the method further comprises a step of increasing 404 the number of images employable for the training by performing further processing operations on the original images acquired. This is accomplished, for example, by performing rotations of each image, by selecting down-samples of the images or by correcting, for each image, at least one color channel. The advantage achieved by such a step of increasing 404 is that of providing the neural network 300 with a greater number of images to be used for the training and, therefore, of improving the learning by the network itself.
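A possible sketch of this augmentation step, assuming plain NumPy arrays and the three operations mentioned above (rotations, down-sampling, color-channel perturbation), is the following; the gain range is an arbitrary choice made for the example.

    import numpy as np

    def augment(image):
        """Generate additional training images from one acquired image."""
        out = [np.rot90(image, k) for k in (1, 2, 3)]          # 90/180/270 degree rotations
        out.append(image[::2, ::2, :].copy())                  # down-sampled copy
        gains = np.random.uniform(0.8, 1.2, size=3)            # random color-channel correction
        out.append(np.clip(image.astype(np.float32) * gains, 0, 255).astype(np.uint8))
        return out

    frame = np.random.randint(0, 256, (160, 160, 3), dtype=np.uint8)
    print([a.shape for a in augment(frame)])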
The subsequent step of training 406 the network 300 occurs by means of a back-propagation method of the type known to those skilled in the art. In the case of the invention, the SGD (Stochastic Gradient Descent) method was used. In particular, at least one layer of the neural network 300 is trained by modifying the weights associated with the network on the basis of the labels of the plurality of annotated digital images. At each back-propagation cycle, a loss is calculated as the error between the classes predicted by the network during the training step and the real ones.
In particular, with reference to the first embodiment of the neural network described above, in which the classification is binary (presence or absence of an animal) or multi-class (presence or absence of several animals), the loss is typically a "cross entropy" or a "binary cross entropy".
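A minimal PyTorch sketch of such a training cycle (SGD plus binary cross entropy) is shown below; the tiny model, the batch of random images and the hyper-parameters are placeholders assumed for the example, not the values actually used in the invention.

    import torch
    import torch.nn as nn

    # Toy stand-in for the network: any model ending in a single logit works here.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    criterion = nn.BCEWithLogitsLoss()                             # binary cross entropy
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    images = torch.randn(8, 3, 64, 64)                             # annotated batch (random here)
    labels = torch.randint(0, 2, (8, 1)).float()                   # 1 = authorized animal present

    for epoch in range(3):                                         # a few SGD/back-propagation cycles
        optimizer.zero_grad()
        loss = criterion(model(images), labels)                    # error vs. the annotated classes
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss = {loss.item():.4f}")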
This type of training is advantageous to distinguish one or more animal species, or the presence of a specific feature which is very common in the animal population. For example, in the specific case of the pet bowl, the network is configured to recognize the presence in the access area of a generic cat or a generic dog.
In the specific case of a binary classification (presence or absence of a generic animal), the network may be trained so that the activation of the last output neuron is in the range [0,1]. The closer the value is to 1, the higher the confidence in the detection of the animal in the image. Conversely, an activation level close to zero is representative of the fact that the animal has not been identified. Since the activation level may take on any intermediate value, such a value is compared, during the control step, with a threshold stored in the memory of the electronic device.
Such a threshold value may be modified by means of the communication interface 27 of the bowl 10 connected in a wireless manner, by means of the Internet 28, to the personal device of the user. The user may therefore make the bowl 10 more selective, raising the threshold, or less selective by lowering it.
However, in real applications it may be difficult for the user to change a threshold. Therefore, the present invention provides a method configured to allow an automatic adjustment of the threshold value.
Each time an animal is identified, with any neuronal activation level, the method of the invention involves storing in the memory 24 of the bowl 10: the images relating to the animal identified and the activation level of the neuron describing the presence of the animal (e.g., cat).
Thereby, the memory 24 of the bowl 10 includes groups of images which contain the animal to be authorized (for example, the cat), but also the animals which have attempted access and need to be blocked (for example, the dog, the rabbit, etc.).
Such images are sent to the user device, not necessarily in real time, by means of the communication interface 27, possibly through the mediation of a server.
This allows the user to view the different groups of images and to associate with each group the correct behavior of the bowl 10. The aforementioned binary information (access or non-access) is sent to the bowl itself and stored in the memory 24. Thereby, the correct behavior of the bowl 10 expressed by the user is associated with each activation level of each image group.
At this point the bowl 10 is configured to use a "clustering" algorithm, of the type known to those skilled in the art, to identify the threshold value that best separates the images including the animal from those without the animal. It should be noted that the populations of authorized animals and unauthorized animals are usually distributed around their mean value according to a Gaussian curve. It is therefore possible to use a probabilistic model, such as the Gaussian Mixture Model, to assign a class of belonging to new detections.
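Under the assumption that the stored activation levels form two roughly Gaussian groups, a minimal sketch of this automatic threshold selection with scikit-learn could look as follows; the activation values and the midpoint rule are invented for the example.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Activation levels accumulated in the memory 24 (values invented for the example).
    activations = np.array([0.91, 0.88, 0.95, 0.84, 0.15, 0.22, 0.09, 0.31]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(activations)
    low_mean, high_mean = np.sort(gmm.means_.ravel())
    threshold = (low_mean + high_mean) / 2.0       # simple separator between the two clusters
    print(f"auto-selected threshold: {threshold:.2f}")

    # New detections can then be assigned probabilistically to one of the two groups.
    print(gmm.predict(np.array([[0.7]])))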
Furthermore, it is possible to train a neural network to recognize a single individual - for example, in the case of the pet bowl, an individual cat.
In particular, with reference to the second embodiment of neural network described above, the training method provides that the bowl 10 learns a vector of typical features of animals. In this case the training occurs on several types of different animals so that the network learns to represent ("encoding" or "embedding") the different features with a vector.
To calculate the "loss" used to update the weights in the backpropagation step, it is possible, for example, to use the so-called "triplet loss", i.e., present the network with two different images of the same animal and a third image showing a different animal. At each backpropagation cycle, the "loss" is calculated by calculating the two vector distances in Euclidean space.
Thereby, the neural network is induced to internally adjust the weights so that the distance between the vectors of the two images of the same animal is ideally zero and the distance between vectors of images of different animals is maximized. In other words, the network is induced to learn all those invariant features which denote a specific animal and, at the same time, to abstract from all the circumstantial features which are not useful to characterize it (for example, another posture, a different expression, a different lighting, etc.). The vector encoding the features is not necessarily interpretable by a human being.
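By way of example only, the triplet loss described above may be sketched in PyTorch as follows; the toy embedding network and the margin value are assumptions made for the illustration.

    import torch
    import torch.nn as nn

    embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))   # toy embedding network
    loss_fn = nn.TripletMarginLoss(margin=0.2)                         # Euclidean triplet loss

    anchor_img   = torch.randn(4, 3, 64, 64)    # image of the animal
    positive_img = torch.randn(4, 3, 64, 64)    # another image of the same animal
    negative_img = torch.randn(4, 3, 64, 64)    # image of a different animal

    loss = loss_fn(embed(anchor_img), embed(positive_img), embed(negative_img))
    loss.backward()    # pulls same-animal embeddings together, pushes different ones apart
    print(loss.item())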
According to a preferential embodiment, the training step is performed on a processing unit (e.g., personal computer) different from the processor of the bowl. For example, the bowl 10 may be sold to the user with this second embodiment of a trained neural network already loaded. Obviously, such a neural network has not been trained for the specific animal of the user.
Therefore, the method involves a set-up step in which the user instructs the bowl on the authorized animals and a usage step in which the bowl selects the authorized animal.
In the set-up step, the bowl initially opens for any animal, regardless of the features of the latter and therefore of the neuron vector which has been activated. At each opening of the bowl 10, the neural network calculates the vector of output neurons and saves it, together with the image of the animal, in a local or remote database. In this initial set-up step, the user, by means of the portable device thereof, receives from the bowl the images of the various openings and identifies the different animals which have accessed it. Still by means of the portable device thereof, the user enters a unique identifier for each animal; such an identifier is then conveniently associated with the corresponding entries in the database. Thereby, in the memory of the bowl, specific activations of the neuron vector are associated with specific animal identities or related access privileges. Furthermore, the user, still by means of the interface 27, specifies one or more authorized animals and one or more unauthorized animals. Such a list is sent to the bowl 10 and then stored in the memory 24.
Once the set-up step is complete, the electronic bowl 10 is ready to operate normally: when an animal enters the volume of space in front of the bowl, the image thereof is captured by the camera; sent to the processor 23; processed to obtain a vector which expresses the physical features of the animal; such a vector is compared with the aforesaid vectors stored in the database of the bowl according to a distance criterion; the identity of the animal corresponding to the nearest vector is obtained; the information concerning whether such an identity is authorized to access is obtained and the barrier element 14 is moved according to logics which are detailed below. Similarly to what happens for the classification, the user may set, by means of the communication interface 27, the threshold value for the distance (e.g., cosine similarity) between neuron vectors within which a certain animal is considered sufficiently similar to itself in other circumstances, so that the user may properly adjust the selectivity of the bowl.
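The usage step just described may be sketched as follows; the database contents, the animal identifiers and the threshold are invented for the example and stand in for the data collected during set-up.

    import numpy as np

    def cosine_distance(a, b):
        return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Database built during the set-up step: identifier -> stored vector and privilege.
    database = {
        "cat_felix": {"vector": np.array([0.9, 0.1, -0.3]), "authorized": True},
        "dog_rex":   {"vector": np.array([-0.7, 0.5, 0.2]), "authorized": False},
    }
    SIMILARITY_THRESHOLD = 0.3    # set by the user through the communication interface 27

    def decide_access(detected_vector):
        """Find the nearest stored vector; enable access only if it belongs to an
        authorized animal and is close enough to the detected one."""
        nearest = min(database, key=lambda k: cosine_distance(detected_vector, database[k]["vector"]))
        distance = cosine_distance(detected_vector, database[nearest]["vector"])
        if distance < SIMILARITY_THRESHOLD and database[nearest]["authorized"]:
            return "open", nearest
        return "closed", nearest

    print(decide_access(np.array([0.85, 0.15, -0.25])))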
However, the regions of the feature space representing the individual animals are generally not linearly separable; it is therefore possible to introduce a further improvement by using a classifier which takes into account any non-linearities.
According to a particular aspect of the present invention it is possible to train a classifier which takes into account any non-linearities, allowing a better distinction between the hyperspaces representative of the different individual animals. Different types of classifiers may be used, conveniently initialized according to machine learning techniques, to classify the embedding vectors, attributing them to an animal identity or to a resource access privilege. Therefore, in a further embodiment of the method 100 of the invention, the aforesaid at least one descriptive class is a vector of physical features of the animal detected in the image, such a vector being provided as input to a classifier adapted to generate information representative of the presence of the authorized animal.
In greater detail, the method comprises the steps of:
- providing, on the basis of an interaction of a user with the electronic apparatus 10, a plurality of first vectors representative of physical features of an animal authorized to access the resource;
- training such a classifier on the basis of said plurality of first vectors.
In particular, the step of providing the plurality of first vectors comprises the steps of:
- generating, by the trained convolutional neural network run by the electronic processing unit 22, the plurality of first vectors on the basis of a processing of a plurality of digital images of the animal;
- storing the plurality of first vectors generated in such an electronic apparatus 10;
- associating, on the basis of an interaction of the user with said electronic apparatus 10, each vector of said plurality of first vectors of physical features stored with an animal authorized to access the resource or with an animal not authorized.
In a preferred embodiment, the classifier is a further trained convolutional neural network configured to receive as input the vector of physical features of the animal detected in the image and to return as output the recognition of the animal or a relevant access privilege.
Such a classification method consists of training the further neural network, smaller in size with respect to the first network, adapted to classify the vector of animal features output by the convolutional neural network into as many classes as there are animals of the user, or into the two access privileges (access allowed, access not allowed). Thereby, there will be a first convolutional neural network trained to extract, from the image of the animal, a vector of features and a second neural network trained to attribute such a vector of features to a specific animal identity or to a particular access privilege.
According to a preferential embodiment, the first network is supplied to the user already pre-trained. Conversely, the second network, smaller in size with respect to the first network, may be easily trained by the user by using a few examples for each individual animal.
In other words, it is possible to regard such two networks as a single overall network having a first convolutional portion with fixed weights and a second, "fully connected" final portion having weights definable on the basis of the data supplied by the user.
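A minimal PyTorch sketch of such a second, small classifier trained on a handful of feature vectors is given below; the embedding dimension, the number of animals and the random vectors are placeholders assumed for the example.

    import torch
    import torch.nn as nn

    EMBED_DIM, NUM_ANIMALS = 128, 3     # invented sizes for the illustration

    # Small fully connected classifier: feature vector -> animal identity (or privilege).
    classifier = nn.Sequential(nn.Linear(EMBED_DIM, 32), nn.ReLU(),
                               nn.Linear(32, NUM_ANIMALS))

    optimizer = torch.optim.SGD(classifier.parameters(), lr=0.05)
    criterion = nn.CrossEntropyLoss()

    # A few embedding vectors per animal, as produced by the frozen first network
    # during the set-up step (random here).
    vectors = torch.randn(12, EMBED_DIM)
    identities = torch.randint(0, NUM_ANIMALS, (12,))

    for _ in range(20):                 # few-shot training, cheap enough for a small device
        optimizer.zero_grad()
        loss = criterion(classifier(vectors), identities)
        loss.backward()
        optimizer.step()

    print(classifier(torch.randn(1, EMBED_DIM)).argmax(dim=1))   # predicted identity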
According to another aspect, the network may be trained for the specific animal on a remote computer or in the cloud by means of the Internet connection, and then downloaded locally.
Finally, it is possible to perform the training by using the computing power of the processor 23 of the bowl 10 itself.
The simplest control logic is adapted to command the opening of the bowl 10 in the presence of the authorized animal, and to close the bowl after a predetermined period of time from the last moment in which the authorized animal has been recognized. A more sophisticated control logic may prevent other animals from taking the place of the authorized animal after the bowl 10 has been opened: in this case, the electronic apparatus 10 recognizes that the animal has changed, thus imposing the closure of the lid 14 of the bowl 10. The control logic of the bowl 10 is also configured to send the user real-time notification messages indicating when the animal eats, by virtue of the connection to the Internet 28.
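A minimal sketch of such a control logic (open for the authorized animal, close after a delay, close immediately if another animal takes its place) is reported below; the delay, the identifiers and the FakeLid class are illustrative assumptions.

    import time

    CLOSE_DELAY_S = 10.0   # keep the lid open this long after the last positive detection

    class LidController:
        """Open for the authorized animal, close after a delay, and close
        immediately if a different animal takes its place."""
        def __init__(self, lid):
            self.lid = lid
            self.last_seen = None

        def update(self, detected_identity, authorized_identity):
            now = time.monotonic()
            if detected_identity == authorized_identity:
                self.last_seen = now
                self.lid.open()
            elif detected_identity is not None:                  # another animal at the bowl
                self.lid.close()
            elif self.last_seen is None or now - self.last_seen > CLOSE_DELAY_S:
                self.lid.close()                                 # timeout: nobody authorized in view

    class FakeLid:
        def open(self):  print("lid open")
        def close(self): print("lid closed")

    ctrl = LidController(FakeLid())
    ctrl.update("cat_felix", "cat_felix")   # authorized animal detected -> open
    ctrl.update("dog_rex", "cat_felix")     # different animal detected -> close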
Similarly, the control logic, having acquired information regarding the amount of food resource contained in the bowl 10, may, for example, alert a user of the empty bowl status or operate a filling mechanism, if present. Such an alert may occur with a sound, with a light code placed on the bowl, with a voice assistant or as a notification on the mobile device of the user.
Finally, by identifying a part of the body, for example the hand, of an adult individual, the control logic may recognize the approaching gesture and open the lid without the user having to press a button.
The information saved in the memory 24 of the bowl 10 may be sent by means of the interface 27 and the Internet 28 to a remote server. This allows the continuous training of the neural network. New versions of neural networks may be downloaded locally by the bowl.
The electronic apparatus 10 and the relevant method 100 for enabling or inhibiting the access to a resource by one or more animals by means of image processing of the present invention have several advantages. In particular, the electronic bowl 10 enables or inhibits access to the food resource by an animal in a selective manner, distinguishing between animals of different species and even distinguishing animals of the same species. In other words, the electronic bowl 10 of the invention opens only following the detection of the authorized animal, not in the presence of unauthorized animals or other living beings in general, and therefore overcomes the limits of the known bowls based on proximity sensors.
Furthermore, the electronic bowl 10 does not require applying to the animal subcutaneous RFID chips or tags fixed to nameplates: the electronic bowl 10 is therefore more practical than the known solutions.
Visual identification by means of convolutional neural networks allows operation with a high degree of freedom regarding the position of the animal in the proximal volume, the lighting of the environment, the expressions or postures of the animal, and shadows and reflections. In other words, the application of convolutional network technology to the specific technical problem, according to the steps described herein, allows a very high degree of abstraction in the recognition of the animal itself.
Selective resource access control is important to prevent unauthorized individuals from ingesting food destined for a particular species of pet. This is particularly useful in contexts in which it is necessary to prevent small children from ingesting animal food.
The suggested methodology may also be applied in the fish industry, to allow only certain fish to access tanks in which a type of food or a pharmacological treatment is provided.
Furthermore, the method of the invention solves the problem of authorizing individual animals in a rapid manner, as it does not require a long and repeated collection of images of the specific individual. Conversely, by leveraging a single pre-training of the network, it allows a new individual to be added by presenting only a few images thereof by means of an interface which is simple for the user.
Furthermore, with the methodology of the invention it is possible to discriminate the presence of a particular condition of the skin or hair of the animal or the presence of a fish parasite. Furthermore, with the method of the invention it is possible to prevent the access to the resource by a domestic animal with which a foreign body is temporarily associated, such as, for example, in the case of a cat attempting to introduce a prey (e.g., a mouse) within the domestic environment.
Finally, the system improves the safety ensured by the bowl with respect to known solutions. In fact, the camera 21 may recognize the status of the food resource contained in the compartment 15, avoiding closures which are unsuitable, dangerous or unpleasant for the animals.
Those skilled in the art, in order to meet contingent needs, may modify and adapt the embodiments of the method and electronic apparatus of the invention, and replace elements with others which are functionally equivalent, without departing from the scope of the following claims. Each of the features described as belonging to a possible embodiment may be achieved independently from the other embodiments described.

Claims

1. A method (100) for enabling or inhibiting the access to a resource by one or more animals by means of an electronic apparatus (10) comprising a portion (11) for accessing the resource and a portion (12) for controlling the access,
the method comprising the steps of:
acquiring (101), by digital image acquisition means (21) installed on the electronic apparatus (10), at least one digital image of a volume proximal to the portion
(11) for accessing the resource and outside the electronic apparatus (10), said volume being adapted to contain the animal;
processing (102), by an electronic processing unit (22) associated with said portion for controlling (12) the access, the at least one digital image acquired, said processing step comprising a step of performing at least one convolution operation on the at least one digital image by means of a trained convolutional neural network;
- controlling (103), by the electronic processing unit (22), means (13) for actuating a barrier element (14) of the electronic apparatus (10) on the basis of a processing of said at least one digital image acquired to move the barrier element (14) between a first position, in which the access to the resource is inhibited, and a second position, in which the access to the resource is enabled, or to block the movement of the barrier element,
wherein said step of controlling (103) the means (13) for actuating the barrier element (14) comprises the steps of:
obtaining (104), by the electronic processing unit (22), at least one descriptive class of the at least one image of a volume proximal to the portion (11) for accessing the resource and outside the electronic apparatus (10) on the basis of said processing;
generating (105), by the electronic processing unit (22), at least one signal for controlling the means (13) for actuating the barrier element (14) on the basis of said at least one descriptive class of the image.
2. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 1, wherein said at least one descriptive class is a binary classification of the image indicative of at least one animal authorized to access the resource.
3. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 1, wherein said at least one descriptive class is a vector of physical features of the animal detected in the image and comparable with a first vector representative of physical features of at least one animal authorized to access the resource.
4. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 3, further comprising the steps of:
- generating, by the trained convolutional neural network run by the electronic processing unit (22), said first vector on the basis of a processing of a digital image of said animal;
- storing the first vector generated in said electronic apparatus (10);
- associating, on the basis of an interaction of a user with said electronic apparatus (10), said first vector of physical features stored with an animal authorized to access the resource or with an animal not authorized.
5. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 4, further comprising a step of comparing said vector of physical features of the animal detected in the image with said first vector stored in the electronic apparatus (10) for identifying the animal authorized to access the resource.
6. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 5, wherein said step of comparing comprises a step of calculating a distance between said vector of physical features of the animal detected in the image and said first vector stored in the electronic apparatus (10).
7. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 6, further comprising the steps of:
- establishing a threshold value for the distance between vectors, on the basis of an interaction of a user with said electronic apparatus (10);
- comparing, by the electronic processing unit (22), said threshold value with the distance calculated between the vector of physical features of the animal detected in the image and the first vector stored;
- controlling the access to the resource so that said access is inhibited when the distance between the vector of physical features of the animal detected in the image and the first vector stored exceeds the threshold value, and is enabled when the distance between the vector of physical features of the animal detected in the image and the first vector stored is below the threshold value.
8. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 1, wherein said at least one descriptive class is a vector of physical features of the animal detected in the image, said vector being provided as input to a classifier adapted to generate information representative of the presence of the authorized animal.
9. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 8, further comprising the steps of:
- providing, on the basis of an interaction of a user with said electronic apparatus (10), a plurality of first vectors representative of physical features of an animal authorized to access the resource;
- training said classifier on the basis of said plurality of first vectors.
10. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 9, wherein said step of providing the plurality of first vectors comprises the steps of:
- generating, by the trained convolutional neural network run by the electronic processing unit (22), said plurality of first vectors on the basis of a processing of a plurality of digital images of said animals;
- storing the plurality of first vectors generated in said electronic apparatus (10);
- associating, on the basis of an interaction of the user with said electronic apparatus (10), each vector of said plurality of first vectors of physical features stored with an animal authorized to access the resource or with an animal not authorized.
11. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 8, wherein said classifier is a further trained convolutional neural network configured to receive as input the vector of physical features of the animal detected in the image and to return as output the recognition of the animal or a relevant access privilege.
12. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 1, wherein said at least one descriptive class is a vector of physical features representative of a user's hand.
13. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 1, wherein said at least one descriptive class consists of information representative of the presence or absence of the resource in the electronic apparatus (10).
14. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to claim 1, further comprising a step of providing a marker of known colors associated with said electronic apparatus (10) for correcting the color channels detected by the digital image acquisition means (21), said marker being within a field of view of the aforesaid means.
15. Method (100) for enabling or inhibiting the access to a resource by one or more animals according to any one of claims 1 to 14, wherein said apparatus (10) is a bowl for pets and said resource is a food resource accommodated in a seat (15) of the pet bowl.
16. An electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals, said electronic apparatus (10) including a portion (11) for accessing the resource and a portion (12) for controlling the access,
the apparatus comprising:
- means (21) for acquiring digital images configured to acquire at least one digital image of a volume proximal to the portion (11) for accessing the resource and outside the apparatus, said volume being adapted to contain the animal;
an electronic processing unit (22) associated with said portion for controlling (12) the access and connected to the digital image acquisition means (21), said electronic processing unit (22) comprising at least one processor (23) and a memory block (24) associated with the processor for storing instructions to perform at least one convolution operation on the at least one digital image by means of a trained convolutional neural network;
means (13) for actuating a barrier element (14) connected to the electronic processing unit (22), said electronic processing unit (22) being configured to:
- obtain at least one descriptive class of the at least one image of a volume proximal to the portion (11) for accessing the resource and outside the electronic apparatus (10) on the basis of the processing of said at least one digital image acquired;
- generate at least one signal for controlling the means
(13) for actuating the barrier element (14) on the basis of said at least one descriptive class of the image, said at least one controlling signal being adapted to move the barrier element (14) between a first position, in which the access to the resource is inhibited, and a second position, in which the access to the resource is enabled, or to block the movement of the barrier element (14).
17. Electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals according to claim 16, wherein said apparatus is a bowl for pets and said resource is a food resource accommodated in a seat (15) of the pet bowl.
18. Electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals according to claim 16 or 17, wherein said digital image acquisition means comprise at least one camera (21).
19. Electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals according to claim 16 or 17, wherein said electronic processing unit (22) comprises an input/output interface (25) for connecting the at least one processor (23) and the memory block (24) to the digital image acquisition means (21) and to the means (13) for actuating the barrier element (14).
20. Electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals according to claim 16 or 17, further comprising a wireless-type communication interface (27) for connecting said processing unit (22) to a data communication network (28).
21. Electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals according to claim 16 or 17, wherein said actuation means (13) comprise an electric motor configured to move a lid (14) sliding between the closed position, in which the access by the pet to the seat (15) containing the food resource is inhibited, and an opening position, in which the access by the pet to the seat (15) is enabled, and vice versa.
22. Electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals according to claim 16, wherein said apparatus is configured to perform the steps of the method according to any one of claims 1-15.
PCT/IB2019/057927 2018-09-19 2019-09-19 Method and electronic apparatus for enabling the access to a resource by one or more animals through image processing WO2020058908A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19786871.4A EP3852517A1 (en) 2018-09-19 2019-09-19 Method and electronic apparatus for enabling the access to a resource by one or more animals through image processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT201800008722 2018-09-19
IT102018000008722 2018-09-19

Publications (1)

Publication Number Publication Date
WO2020058908A1 true WO2020058908A1 (en) 2020-03-26

Family

ID=65031616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/057927 WO2020058908A1 (en) 2018-09-19 2019-09-19 Method and electronic apparatus for enabling the access to a resource by one or more animals through image processing

Country Status (2)

Country Link
EP (1) EP3852517A1 (en)
WO (1) WO2020058908A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7395782B1 (en) * 2004-03-24 2008-07-08 L.P. Holdings Llc System and method for providing selective access to animal food
US7685966B2 (en) * 2006-11-03 2010-03-30 Goehring Heidi L Lidded pet dish
US20140298230A1 (en) * 2013-03-28 2014-10-02 David Michael Priest Pattern-based design system
US20160227737A1 (en) * 2015-02-05 2016-08-11 PetBot Inc. Device and method for dispensing a pet treat
US20160227736A1 (en) * 2015-02-10 2016-08-11 Harold G. Monk Species specific feeder
US20170273277A1 (en) * 2016-03-23 2017-09-28 Harold G. Monk Species specific feeder
KR101889460B1 (en) * 2016-08-11 2018-09-04 주식회사 한스테크놀로지 Automatic feeding system for pet noticing user by detecting pet's movement using sensing module
KR20180065850A (en) * 2017-03-23 2018-06-18 송수한 Automatic feeding movable apparatus, care robot for companion animal, care system for companion animal having the same and control method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ADIT DESHPANDE: "A Beginner's Guide To Understanding Convolutional Neural Networks - Adit Deshpande - Engineering at Forward | UCLA CS '19", 20 July 2016 (2016-07-20), XP055648767, Retrieved from the Internet <URL:https://adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/> [retrieved on 20191203] *
MICHAEL NIELSEN: "Neural Networks and Deep Learning", 1 August 2018 (2018-08-01), XP055648841, Retrieved from the Internet <URL:http://static.latexstudio.net/article/2018/0912/neuralnetworksanddeeplearning.pdf> [retrieved on 20191203], DOI: 10.1093/annonc/mdy166 *

Also Published As

Publication number Publication date
EP3852517A1 (en) 2021-07-28

Similar Documents

Publication Publication Date Title
Achour et al. Image analysis for individual identification and feeding behaviour monitoring of dairy cows based on Convolutional Neural Networks (CNN)
Alameer et al. Automatic recognition of feeding and foraging behaviour in pigs using deep learning
Chen et al. Recognition of feeding behaviour of pigs and determination of feeding time of each pig by a video-based deep learning method
US20230217903A1 (en) Animal Sensing System
WO2019101720A1 (en) Methods for scene classification of an image in a driving support system
KR102325259B1 (en) companion animal life management system and method therefor
CN111134033A (en) Intelligent animal feeder and method and system thereof
US20200342207A1 (en) 3d biometric identification system for identifying animals
Guo et al. Bigru-attention based cow behavior classification using video data for precision livestock farming
JP7360496B2 (en) Judgment system
Hindarto Use ResNet50V2 Deep Learning Model to Classify Five Animal Species
CN110896871A (en) Method and device for putting food and intelligent food throwing machine
WO2020058908A1 (en) Method and electronic apparatus for enabling the access to a resource by one or more animals through image processing
Sajithra Varun et al. DeepAID: a design of smart animal intrusion detection and classification using deep hybrid neural networks
Duraiswami et al. Cattle breed detection and categorization using image processing and machine learning
Sayed et al. An automated fish species identification system based on crow search algorithm
US20230263124A1 (en) Livestock restraining devices, systems for livestock management, and uses thereof
KR102655958B1 (en) System for feeding multiple dogs using machine learning and method therefor
Laishram et al. Biometric identification of Black Bengal goat: unique iris pattern matching system vs deep learning approach
KR20230101121A (en) Method and apparatus for identifying animal objects based on images
Van der Eijk et al. Seeing is caring–automated assessment of resource use of broilers with computer vision techniques
Alon et al. Machine vision-based automatic lamb identification and drinking activity in a commercial farm
Farah et al. Computing a rodent’s diary
WO2024198278A1 (en) Pet health management method, apparatus and pet companion robot
Humphreys et al. The principle of target-competitor differentiation in object recognition and naming (and its role in category effects in normality and pathology)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19786871

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019786871

Country of ref document: EP

Effective date: 20210419