EP3852517A1 - Method and electronic apparatus for enabling access to a resource by one or more animals by means of image processing - Google Patents

Method and electronic apparatus for enabling access to a resource by one or more animals by means of image processing

Info

Publication number
EP3852517A1
Authority
EP
European Patent Office
Prior art keywords
resource
access
animal
electronic apparatus
enabling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19786871.4A
Other languages
German (de)
English (en)
Inventor
Silvio REVELLI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volta Robots Srl
Original Assignee
Volta Robots Srl
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volta Robots Srl filed Critical Volta Robots Srl
Publication of EP3852517A1 (fr)

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K5/00 Feeding devices for stock or game; Feeding wagons; Feeding stacks
    • A01K5/01 Feed troughs; Feed pails
    • A01K5/0114 Pet food dispensers; Pet food trays
    • A01K5/0142 Pet food dispensers; Pet food trays with means for preventing other animals or insects from eating
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K5/00 Feeding devices for stock or game; Feeding wagons; Feeding stacks
    • A01K5/02 Automatic devices

Definitions

  • the present invention relates, in general, to an electronic apparatus for enabling or inhibiting the access to a resource by one or more animals by means of image processing and the relevant operating method.
  • the invention is applicable to livestock feeders or fish tanks which are configured to ensure or deny access to a food or pharmacological resource to a group of animals or to a particular animal of the group.
  • the invention relates to an electronic bowl for enabling or inhibiting the access to a food resource accommodated into the bowl by an animal on the basis of image processing.
  • Electronic bowls for containing food resources for animals are known, equipped with barrier means which may be opened and re-closed to give access and dispense such food resources.
  • such electronic bowls comprise suitable sensors, for example infrared sensors, placed along the perimeter of the bowl itself to detect a living being approaching the bowl and, consequently, to open the barrier means.
  • Such a bowl solution, although useful for preserving the integrity of the food resource, has the drawback that the barrier means open when any living being approaches the bowl, whether animal or human, including children.
  • Electronic bowls for pets are also known which include controllable barrier means to selectively dispense food resources to animals on the basis of RFID sensors.
  • Such sensors are configured to detect the presence of a respective nameplate or tag associated with an animal, which may correspond to a subcutaneous chip applied to the animal or may be fixed to a pet tag associated with the animal.
  • By comparing such a tag with a pre-set list of tags authorized to access the food resource, such an electronic bowl selectively enables or inhibits the access to the resource contained in the bowl to authorized animals only.
  • the subcutaneous RFID tag configured to communicate with this type of bowl is invasive, since the corresponding sensor must be positioned exactly on the back of the animal while the latter eats from the bowl. Furthermore, the reading distance of the RFID remains limited by the power ranges of the electromagnetic signals involved.
  • in some cases, the selection of accesses to a resource cannot be made on the basis of RFID sensors. This occurs, in particular, when the criterion for discriminating an animal is a specific attribute of the animal itself, for example: a pathological condition of the skin, coat or scales of the animal; the presence or absence of parasites; reaching a certain length, height or pigmentation; a particular color of the plumage in the case of avian species.
  • Such an object is achieved by a method for enabling or inhibiting the access to a resource by one or more animals by means of image processing in accordance with claim 1.
  • the present invention also relates to an electronic apparatus operating on the basis of the aforesaid method for enabling or inhibiting the access to a resource by one or more animals in accordance with claim 16.
  • such an electronic apparatus is an electronic bowl for enabling or inhibiting the access to a food resource accommodated into the bowl by an animal on the basis of image processing.
  • the method for allowing access to a resource by an animal is based on the processing of images of the area in front of the resource to which access is controlled, the resource being separated from that area by a barrier element.
  • such an image processing takes place by employing convolutional neural networks trained to supply the electronic apparatus with information about the presence or absence of the authorized animal in the image of the area in front of the resource.
  • the detection of the presence of the animal discriminates the species of the animal itself. In other embodiments it is possible to discriminate the breed of the animal or the individual for a targeted access control.
  • the user may set, by means of a suitable interface, the parameters useful for selecting the animals adequately, without requiring a retraining of the neural network.
  • the information returned by the processing is used to control means for actuating the apparatus, for example a motor, adapted to move the aforesaid barrier element (for example, a cover in the case of the bowl or of a feeder, or a door in the case of access to a stable or a kennel) which allows or denies the animal access to the resource.
  • Figure 1 shows a perspective image of an electronic apparatus, in particular an electronic bowl for pets, for enabling or inhibiting the access to a resource by one or more animals by means of image processing in accordance with the invention
  • Figure 2 diagrammatically shows structural details of the electronic bowl of Figure 1;
  • Figure 3 shows, in a flow diagram, a method for enabling or inhibiting the access to a resource by one or more animals by means of image processing implemented by the electronic bowl of Figures 1-2;
  • Figure 4 shows, in a logical diagram, an embodiment of a neural network, comprising convolutional levels, employed in the method of the invention and configured to return a classification of digital images of the area in front of the resource to be accessed, which includes the aforesaid animal;
  • Figure 5 shows, in a flow diagram, a training method of the neural network of Figure 4.
  • With reference to Figures 1-2, an example of an electronic apparatus for enabling or inhibiting the access to a resource by one or more animals by means of image processing, operating in accordance with the method of the invention, is indicated as a whole by reference numeral 10.
  • the electronic apparatus 10 comprises a body 1 which includes a portion 11 for accessing the resource and a portion 12 for controlling the access.
  • the electronic apparatus 10 is configured to allow or deny access by an animal to the resource by means of the movement of a barrier element 14 enabling or inhibiting the access to such a resource.
  • the apparatus 10 advantageously operates on the basis of image processing employing trained convolutional neural networks.
  • the electronic apparatus 10 comprises digital image acquisition means 21 configured to acquire at least one digital image of a volume proximal to the portion 11 for accessing the resource and outside the apparatus 10 adapted to contain the animal.
  • such means 21 are characterized by a respective orientation and angular width so that the Field of View (or FOV), indicated by the width of angle A in Figure 2, is sufficiently extended to include at least one portion of the body of the animal used to identify the animal itself, during the attempt by the animal to access the resource.
  • Such digital image acquisition means 21 are configured to acquire, for example, continuously or at predetermined time intervals, sequences of images or frames of the volume proximal to the portion 11 for accessing the resource of the apparatus 10 which includes such a portion of the body of the animal.
  • Such image acquisition means are embodied, for example, by one or more cameras 21.
  • Each camera 21 is configured to acquire images in grayscale or, preferably, in the color-coded visible spectrum (for example, RGB).
  • the camera 21 may be chosen to operate in the visible or infrared spectrum, in the thermal radiation spectrum or in the ultraviolet spectrum, or is configured to complete the optical information on the image acquired by employing a channel dedicated to depth (for example, RGB-D).
  • the electronic apparatus 10 further comprises an electronic processing unit 22 associated with the portion 12 for controlling the access to the apparatus 10 and connected to the digital image acquisition means 21.
  • Such an electronic processing unit 22 comprises at least one processor 23 and one memory block 24, associated with the processor for storing instructions.
  • a memory block 24 is connected to the processor 23 by means of a data communication line or bus 26 (for example, PCI) and consists, for example, of a service memory of the volatile type (for example, of the SDRAM type), and of a system memory of the non-volatile type (for example, of the SSD type).
  • the processor 23 may be connected by means of a suitable communication interface to a computational accelerator specialized in convolution operations, such as, for example, a Neural Processing Unit (NPU) or a Graphic Processing Unit (GPU) or a Visual Processing Unit (VPU).
  • the processor 23 is configured to delegate the necessary convolution operations to such a computational accelerator, according to the implementation of the method described.
  • the electronic processing unit 22 comprises a data communication interface 27, for example, of the wireless type, configured to connect such a processing unit 22 to a data communication network 28, for example, the Internet, and to allow the processing unit to communicate with remote electronic devices, such as, for example, servers or portable devices (smartphones, tablets, laptops) associated with one or more users.
  • the electronic apparatus 10 comprises means 13 for actuating a barrier element 14 connected to the electronic processing unit 22.
  • actuation means 13 are controlled by the electronic processing unit 22 on the basis of a processing of the at least one digital image acquired to move the barrier element 14 from a first position, in which the access to the resource is inhibited, to a second position, in which the access to the resource is enabled, or to block the movement of the barrier element 14.
  • the electronic processing unit 22 of the apparatus 10 comprises an input/output interface 25 for connecting the at least one processor 23 and the memory block 24 to the digital image acquisition means 21 and to the means 13 for actuating the barrier element 14.
  • the aforesaid electronic apparatus 10 is a pet bowl, and the resource is a food resource accommodated in a seat 15 provided in the body 1 of the pet bowl 10.
  • the teachings of the invention may be applied, with minimal modifications, even to other applications in the field of selective access for domestic animals, livestock, poultry, and fish resources.
  • the method of the invention may be applied, for example, to beddings, kennels or shelters for pets which are provided with controllable access doors and to all those situations in which it is necessary to authorize one or more animals to access a resource by discriminating them on the basis of how such animals appear visually.
  • the electronic apparatus and the method of the invention may be used, with suitable adaptations and suitable mobile barriers already present on the market, with different types of pets and breeding animals, including cats, dogs, rabbits, rodents in general, horses, cows, goats, sheep, pigs, chickens, salmon, bream, bass.
  • the actuation means 13 comprise an electric motor configured to move a lid 14, for example made of transparent plexiglass, sliding between a closed position, in which the access by the pet to the seat 15 containing the food resource is inhibited, and an open position, in which the access by the pet to the seat 15 is enabled, and vice versa.
  • the seat 15, which may be re-closed by the lid 14, is formed in the portion 11 for accessing the resource of the bowl 10.
  • the camera or cameras 21 of the electronic bowl 10 are fastened to a supporting element 2 protruding from the body 1 of the bowl 10, in particular, from the portion 12 for controlling the access.
  • the electronic processing unit 22 of the apparatus 10 is set to run the code of an application program implementing the method 100 of the invention.
  • the processor 23 is configured to load, in the memory block 24, and to run the code of the application program implementing the method 100 of the present invention.
  • the method 100 comprises a symbolic starting step STR and a symbolic ending step ED.
  • the method 100 for enabling or inhibiting the access to a resource by one or more animals comprises a first step of acquiring 101, by the digital image acquisition means 21 installed on the electronic apparatus 10, at least one digital image of a volume proximal to the portion 11 for accessing the resource and outside the electronic apparatus 10, in which such a volume is adapted to contain the animal.
  • the method 100 comprises a step of processing 102, by an electronic processing unit 22 associated with the portion 12 for controlling the access, the at least one digital image acquired.
  • the aforesaid step of processing 102 the at least one digital image acquired comprises a step of performing at least one convolution operation on the at least one digital image by means of a trained convolutional neural network.
  • the method comprises a step of controlling 103, by the electronic processing unit 22, means 13 for actuating a barrier element 14 of the electronic apparatus 10 on the basis of a processing of the at least one digital image acquired to move the barrier element 14 from a first position, in which the access to the resource is inhibited, to a second position, in which the access to the resource is enabled, or to block the movement of the barrier element 14.
  • Before being passed to the neural network (step 102), the image may be pre-processed, for example, by adjusting the color channels.
  • a technique given by way of explanation includes, for example, the application, on the bowl and inside the Field Of View A of the camera 21, of a color marker known a priori. By comparing the colors detected by the camera with the actual colors known a priori, it is possible to correct the color channels of the image according to techniques known to those skilled in the art. According to a particular embodiment, such a marker may be associated with one or more colors of the bowl 10 itself, if the bowl is within the FOV of the camera 21.
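By way of illustration only, such a per-channel correction could be sketched as follows; the function name, the use of NumPy and the simple gain model are assumptions and are not taken from the patent.

```python
import numpy as np

def correct_color_channels(image, marker_rgb_observed, marker_rgb_reference):
    """Rescale each RGB channel so that the marker color seen by the camera
    matches the marker color known a priori (simple per-channel gain model).

    image: HxWx3 uint8 array from the camera 21.
    marker_rgb_observed: mean RGB of the marker pixels in the acquired image.
    marker_rgb_reference: true RGB of the marker, known a priori.
    """
    observed = np.maximum(np.asarray(marker_rgb_observed, dtype=np.float32), 1e-6)
    gains = np.asarray(marker_rgb_reference, dtype=np.float32) / observed
    corrected = image.astype(np.float32) * gains  # broadcast over the channel axis
    return np.clip(corrected, 0, 255).astype(np.uint8)
```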
  • the aforesaid step of controlling 103 the actuation means 13 of the barrier element 14, i.e., of the lid in the case of the bowl, comprises the steps of: obtaining 104 at least one descriptive class of the at least one image of the volume proximal to the portion 11 for accessing the resource and outside the electronic apparatus 10, on the basis of the aforesaid processing; generating 105 at least one command signal of the means 13 for actuating the barrier element 14 on the basis at least of such a descriptive class of the image.
  • the descriptive class of the at least one image corresponds to the activation level of at least one neuron descriptive of the at least one image.
  • the one or more descriptive neurons are the output neurons of the trained convolutional neural network.
  • the at least one descriptive class, i.e., the activation level of a single descriptive neuron, expresses a binary classification of the image indicative of the presence of at least one animal authorized to access the resource.
  • the neural network may be trained so that the activation level of the descriptive neuron is one if the authorized animal is present in the image, otherwise such a level is kept at zero.
  • the camera 21 acquires the images of the animal proximal to or moving towards the bowl 10. Such images are processed by the processor 23 and are inserted in an input layer of the neural network.
  • a neuronal activation, proportional thereto, is matched to the value of each pixel of each color channel of the image.
  • the processor 23 or the computational accelerator is configured to run a "forward" pass of a first embodiment of a trained convolutional neural network.
  • Such a first embodiment of a neural network returns a binary classification depending on whether the authorized animal has been identified or not.
  • Such a classification is expressed in the form of neuronal activations of the last layer of the network, the architecture of which will be described below.
  • a control logic implemented by the processor 23 opens the lid 14 of the bowl 10 or keeps it closed, to enable or to inhibit the access to the food resource by the animal.
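A minimal sketch of this control step, assuming a PyTorch model with a single output neuron and a user-adjustable threshold (the patent does not prescribe a framework, and the names below are illustrative):

```python
import torch

def control_lid(model, frame_tensor, threshold=0.5):
    """Forward pass of the trained network on one pre-processed camera frame
    (shape 1x3xHxW); the single descriptive neuron drives the lid 14."""
    model.eval()
    with torch.no_grad():
        # sigmoid maps the raw output to [0, 1]; omit it if the network
        # already returns a normalized activation level.
        activation = torch.sigmoid(model(frame_tensor)).item()
    return "OPEN" if activation >= threshold else "KEEP_CLOSED"
```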
  • the ways in which the network is trained and how the user may interact with the electronic equipment to regulate the access by the authorized animal will be described below.
  • the at least one descriptive class, i.e., the activation levels of a vector of descriptive neurons, expresses the physical features of the animal detected in the image, which are comparable with a first vector representative of physical features of at least one animal authorized to access the resource.
  • the camera 21 acquires the images of the animal proximal to or moving towards the bowl 10.
  • Such images are processed by the processor 23, or by the accelerator (VPU, GPU, NPU), in a manner similar to that described with regard to the first embodiment, by a "forward" pass of a second embodiment of a trained neural network.
  • Such a second embodiment of the neural network returns a vector of physical features detected in the image.
  • Such a vector is expressed in the form of neuronal activations of the last layer of the network which will be described below.
  • Such a vector of features may be compared with a vector of features representative of the authorized animal, conveniently stored in the memory 24 of the processing unit 22.
  • the method 100 of the invention comprises the steps of:
  • the method 100 comprises, as mentioned, a step of comparing the vector of physical features of the animal detected in the image with the first vector stored in the electronic apparatus 10 for identifying the animal authorized to access the resource.
  • Such a comparing step comprises a step of calculating a distance between the vector of physical features of the animal detected in the image and the first vector stored in the electronic apparatus 10.
  • Such a distance between vectors may, for example, be calculated in a Euclidean space or with a cosine distance. Such a distance between vectors is representative of a degree of similarity between the animal detected by the cameras 21 and the authorized animal. If such a distance is below a preset threshold, this implies that the authorized animal has been detected.
  • the method 100 therefore comprises the steps of: establishing a threshold value for the distance between vectors, on the basis of an interaction of a user with the aforesaid electronic apparatus 10;
  • controlling the access to the resource so that: the access is inhibited when the distance between the vector of physical features of the animal detected in the image and the first vector stored exceeds the threshold value; the access is enabled when the distance between the vector of physical features of the animal detected in the image and the first vector stored is below the threshold value.
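The comparison between feature vectors could be sketched as follows (hypothetical function and parameter names; both the Euclidean and the cosine distance mentioned above are shown):

```python
import numpy as np

def is_authorized(detected_vec, stored_vec, threshold, metric="cosine"):
    """Return True (access enabled) only if the distance between the vector
    of the detected animal and the stored first vector is below the
    user-set threshold."""
    a = np.asarray(detected_vec, dtype=np.float32)
    b = np.asarray(stored_vec, dtype=np.float32)
    if metric == "cosine":
        distance = 1.0 - float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    else:  # Euclidean distance
        distance = float(np.linalg.norm(a - b))
    return distance < threshold
```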
  • a control logic implemented by the processor 23 opens or closes the lid 14 of the bowl 10, to enable or to inhibit the access to the food resource by the animal.
  • the at least one descriptive class is a vector of physical features representative of a part of the body of a user, for example of a hand.
  • the camera 21 acquires the images of the user proximal to or moving towards the bowl. Such images are processed by the processor, by a "forward" pass of a third embodiment of a trained neural network. Such a third embodiment returns, as output, the presence or absence of different elements of parts of the body of the user and the features thereof. Such a presence is expressed in the form of neuronal activations of the last layer of the network.
  • the at least one descriptive class is information representative of the presence or absence of the food resource in the electronic apparatus 10, i.e., in the compartment 15 of the bowl.
  • the control logic has information available about the fact that the food has been eaten and in what quantities.
  • the aforesaid third and fourth network embodiments may coexist with one of the previous two or may be integrated therewith in a single neural network.
  • a control logic implemented by the processor 23 may be refined, making the bowl 10 an intelligent bowl capable of responding to events related to the state of the food and the intentions of the user.
  • Such a neural network comprises at least the following layers:
  • an input layer 301 configured to receive the entire digital image, or the sum of the digital images, or at least one down-sampled version of the digital images acquired with the cameras 21; at least one convolutional layer conv 1;
  • an output layer 304 with at least one neuron configured to provide the distinction between an authorized animal and an unauthorized one, for example, distinguishing the animal species, according to the first embodiment of the neural network mentioned above.
  • the output layer 304 provides the vector of features detected according to the second embodiment of the neural network mentioned above.
  • the network 300 comprises a convolution block 302 consisting, for example, of twenty-two convolutional layers conv 1, conv 2, conv 3, ..., conv 22.
  • each convolutional layer input is connected to the output of the respective preceding convolutional layer through a non-linearity of the ReLU type and a BatchNorm of the type known to those skilled in the art.
  • in a convolutional layer, each neuron is connected only to some neighboring neurons in the previous layer.
  • the same set of weights (and local connection layout) is used for each neuron of the layer.
  • in a fully connected layer, instead, each neuron is connected to each neuron of the previous layer and each connection has its own weight.
  • the neural network 300 further comprises two fully connected layers 303a and 303b. These two layers are similar to convolutions having a kernel covering the entire input layer of the neural network 300. Therefore, they may be considered as two further convolutions configured to give a global meaning to the input layer.
  • the last layer of the block 302, conv 22, may be of a different type: for example, de-convolutionary layers may be used which perform a semantic segmentation of the image revealing which pixels correspond to the animal to be identified.
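A reduced PyTorch sketch of the architecture just described (input layer, convolutional block with ReLU and BatchNorm, two fully connected layers 303a/303b and output layer 304). The channel counts, kernel sizes and the use of only four convolutional layers instead of twenty-two are assumptions made for brevity:

```python
import torch.nn as nn

class AccessControlNet(nn.Module):
    """Sketch of network 300: convolution block 302 (ReLU + BatchNorm after
    each convolution), fully connected layers 303a/303b, output layer 304."""

    def __init__(self, n_conv_layers=4, n_outputs=1):
        super().__init__()
        layers, in_ch = [], 3
        for i in range(n_conv_layers):             # the patent describes 22 such layers
            out_ch = 16 * (i + 1)                  # channel counts are illustrative
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.conv_block = nn.Sequential(*layers)                                 # block 302
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(in_ch, 128), nn.ReLU(inplace=True),    # 303a
                                nn.Linear(128, 64), nn.ReLU(inplace=True))       # 303b
        self.out = nn.Linear(64, n_outputs)                                      # layer 304

    def forward(self, x):
        x = self.pool(self.conv_block(x)).flatten(1)
        return self.out(self.fc(x))
```

With n_outputs=1 the network reproduces the binary classification of the first embodiment; a larger n_outputs can serve as the feature vector of the second embodiment.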
  • specific embodiments of the processing 102 and control 103 steps of the method do not alter the generality of the present invention.
  • the bowl comprises the camera 21 with an RGB Bayer filter, with a dynamic range of 69.5 dB and a lens with a field of view (FOV) of 175 degrees.
  • the camera 21 is positioned at a distance of about 16 cm from the seat 15 containing the food resource and is oriented downwards by 20 degrees.
  • the neural network 300 shall be a trained network.
  • a training procedure 400 of the network 300 is described with reference to Figure 5.
  • the training method 400 includes an initial step of defining 401 a position and an orientation of the digital image acquisition means 21.
  • the method involves the acquisition 402, by the camera 21, of a plurality of digital images configured to capture various situations in which different animals eat from the bowl 10, situated in different environments or in different lighting conditions.
  • a step of annotating 403 the plurality of digital images acquired is included. Such an annotation is performed by associating a suitable label or code with each digital image acquired.
  • the images are divided into two classes: those containing the authorized animal and those showing an animal or animals other than the authorized one. It is apparent that such a number of classes may be arbitrarily changed without altering the meaning of the invention, for example, to authorize more than one animal.
  • each class is associated with a respective output neuron, the activation level thereof (conveniently normalized to the activation levels of the other output neurons) expressing the confidence that the corresponding animal has been identified.
  • the training method 400 includes the initialization 405 of the neural network 300 through the association thereof with neural connection weights in a random or predefined manner.
  • such a step of training the neural network 300 further comprises a step of increasing 404 the number of images employable for the training by performing further processing operations on the original images acquired. This is accomplished, for example, by performing rotations of each image, by selecting down-samples of the images or by correcting, for each image, at least one color channel.
  • the advantage achieved by such a step of increasing 404 is that of providing the neural network 300 with a greater number of images to be used for the training and, therefore, improving the learning by the network itself.
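A possible augmentation pipeline implementing the operations mentioned above (rotations, down-sampled portions, color-channel adjustments), sketched with torchvision; the specific ranges and the 224-pixel size are assumptions:

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(degrees=15),                     # rotations of each image
    T.RandomResizedCrop(size=224, scale=(0.6, 1.0)),  # down-sampled portions
    T.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.05),          # color-channel adjustment
    T.ToTensor(),
])
```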
  • the following step of training 406 the network 300 occurs by means of a back-propagation method of the type known to those skilled in the art.
  • the SGD (Stochastic Gradient Descent) algorithm may be used, for example.
  • a loss is calculated at each backpropagation cycle by measuring the error between the classes predicted by the network during the training step and the real ones.
  • the loss is typically a "cross entropy" or a "binary cross entropy".
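A minimal training loop consistent with the description above (back-propagation, SGD, binary cross entropy); the learning rate, momentum and data-loader interface are assumptions:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    """loader yields (images, labels) with labels in {0, 1}:
    1 = authorized animal present in the image, 0 = otherwise."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.BCEWithLogitsLoss()           # binary cross entropy on logits
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            logits = model(images).squeeze(1)
            loss = criterion(logits, labels.float())
            loss.backward()                      # back-propagation of the error
            optimizer.step()                     # SGD weight update
```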
  • This type of training is advantageous to distinguish one or more animal species, or the presence of a specific feature which is very common in the animal population.
  • the network is configured to recognize the presence in the access area of a generic cat or a generic dog.
  • the network may be trained so that the activation of the last output neuron is in the range [0,1].
  • an activation level close to zero is representative of the fact that the animal has not been identified. Since the activation level may take on any intermediate value, such a value is compared, during the control step, with a threshold stored in the memory of the electronic device.
  • Such a threshold value may be modified by means of the communication interface 27 of the bowl 10 connected in a wireless manner, by means of the Internet 28, to the personal device of the user. The user may therefore make the bowl 10 more selective, raising the threshold, or less selective by lowering it.
  • the present invention provides a method configured to allow an automatic adjustment of the threshold value.
  • the method of the invention involves storing in the memory 24 of the bowl 10: the images relating to the animal identified and the activation level of the neuron describing the presence of the animal (e.g., cat) .
  • the memory 24 of the bowl 10 includes groups of images which contain the animal to be authorized (for example, the cat), but also the animals which have attempted the access but need to be blocked (for example, the dog, the rabbit, etc.).
  • Such images are sent to the user device, not necessarily in real time, by means of the communication interface 27, possibly through the mediation of a server.
  • the bowl 10 is configured to use a "clustering" algorithm, of the type known to those skilled in the art, to identify the threshold value that best separates the images including the animal from those without animal.
  • the populations of authorized animals and unauthorized animals are usually distributed around the mean value according to a Gaussian curve. It is therefore possible to use a probabilistic model, such as the Gaussian Mixture Model to assign a class of belonging to the new detections.
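The automatic threshold adjustment could be sketched with a two-component Gaussian Mixture Model fitted to the stored activation levels; the use of scikit-learn and the midpoint rule are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_threshold(activations):
    """Fit a two-component GMM to the recorded activation levels (images with
    and without the authorized animal) and return the midpoint between the two
    component means as the separating threshold."""
    x = np.asarray(activations, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    low_mean, high_mean = np.sort(gmm.means_.ravel())
    return float((low_mean + high_mean) / 2.0)
```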
  • the training method provides that the bowl 10 learns a vector of typical features of animals.
  • the training occurs on several types of different animals so that the network learns to represent ("encoding" or "embedding") the different features with a vector.
  • the "loss” used to update the weights in the backpropagation step it is possible, for example, to use the so-called “triplet loss", i.e., present the network with two different images of the same animal and a third image showing a different animal.
  • the "loss” is calculated by calculating the two vector distances in Euclidean space.
  • the neural network is induced to internally adjust the weights so that the distance between the two images of the same animal is ideally zero and the distance between images of different animals is maximized.
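The triplet loss described above could be sketched as follows (PyTorch also provides an equivalent built-in, torch.nn.TripletMarginLoss); the margin value is an assumption:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """anchor and positive are embeddings of two images of the same animal,
    negative is the embedding of a different animal; the same-animal distance
    is pushed towards zero, the other distance beyond the margin."""
    d_pos = F.pairwise_distance(anchor, positive)   # same animal
    d_neg = F.pairwise_distance(anchor, negative)   # different animal
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```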
  • the network is induced to understand all those invariant features which denote a specific animal and at the same time to abstract from all the circumstantial features which are not useful to characterize it (for example, another posture, a different expression, a different lighting, etc.).
  • the vector encoding the features is not necessarily interpretable by a human being.
  • the training step is performed on a processing unit (e.g., personal computer) different from the processor of the bowl.
  • the bowl 10 may be sold to the user with this second embodiment of a trained neural network already loaded. Obviously, such a neural network has not been trained for the specific animal of the user.
  • the method involves a set-up step in which the user instructs the bowl on the authorized animals and a usage step in which the bowl selects the authorized animal.
  • the bowl initially opens for any animal, regardless of the features of the latter and therefore of the neuron vector which has been activated.
  • the neural network calculates the vector of output neuron activations and saves it together with the image of the animal in a local or remote database.
  • the user, by means of the portable device thereof, receives from the bowl the images of the various openings of the bowl, and identifies the different animals which have accessed it.
  • the user enters a unique identifier for each animal; such an identifier is then conveniently associated with the database.
  • specific activations of the neuron vector are associated with specific animal identities or related access privileges.
  • the user, still by means of the interface 27, specifies one or more authorized animals and one or more unauthorized animals. Such a list is sent to the bowl 10 and then stored in the memory 24.
  • the electronic bowl 10 is ready to operate normally: when an animal enters the volume of space in front of the bowl, the image thereof is captured by the camera; sent to the processor 23; processed to obtain a vector which expresses the physical features of the animal; such a vector is compared with the aforesaid vectors stored in the database of the bowl according to a distance criterion; the identity of the animal corresponding to the nearest vector is obtained; the information concerning whether the identity of the animal is authorized to access is obtained and the barrier element 14 is moved according to logics which are detailed below.
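The usage step just described could be sketched as a nearest-vector lookup against the enrolled database (the tuple layout and the rejection rule for far-away vectors are assumptions):

```python
import numpy as np

def identify_and_decide(query_vec, database, threshold):
    """database: list of (animal_id, feature_vector, is_authorized) tuples
    built during the set-up step; returns the nearest identity and the
    command for the barrier element 14."""
    q = np.asarray(query_vec, dtype=np.float32)
    best_id, best_dist, best_auth = None, np.inf, False
    for animal_id, vec, authorized in database:
        d = float(np.linalg.norm(q - np.asarray(vec, dtype=np.float32)))
        if d < best_dist:
            best_id, best_dist, best_auth = animal_id, d, authorized
    if best_dist > threshold:                 # no enrolled animal close enough
        return None, "KEEP_CLOSED"
    return best_id, "OPEN" if best_auth else "KEEP_CLOSED"
```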
  • the user may set, by means of the communication interface 27, the threshold value discriminating the distance (e.g.: cosine of similarity) between neuron vectors within which a certain animal is considered sufficiently similar to itself in other circumstances, so that the user may properly adjust the selectivity of the bowl.
  • the aforesaid at least one descriptive class is a vector of physical features of the animal detected in the image, such a vector being provided as input to a classifier adapted to generate information representative of the presence of the authorized animal.
  • the method comprises the steps of:
  • the step of providing the plurality of first vectors comprises the steps of:
  • the classifier is a further trained convolutional neural network configured to receive as input the vector of physical features of the animal detected in the image and to return as output the recognition of the animal or a relevant access privilege.
  • Such a classification method consists of training the further neural network, smaller in size with respect to the first network, adapted to classify the vector of animal features output by the convolutional neural network into as many classes as there are user animals or into the two access privileges (access allowed, not allowed).
  • two networks are thus employed: a first convolutional neural network trained to extract, from the image of the animal, a vector of features, and a second neural network trained to attribute such a vector of features to a specific animal identity or to a particular access privilege.
  • the first network is provided to the user already pre-trained.
  • the second network, smaller in size with respect to the first network, may be easily trained by the user using a few examples for each individual animal.
  • the network may be trained for the specific animal on a remote computer or in the cloud by means of the Internet connection, and then downloaded locally.
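A sketch of the second, smaller network that classifies the feature vector produced by the pre-trained convolutional network; the layer sizes and the two-layer structure are assumptions:

```python
import torch.nn as nn

class FeatureClassifier(nn.Module):
    """Maps the feature vector of the first network to animal identities
    (or to the two access privileges: allowed / not allowed)."""

    def __init__(self, feature_dim=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, n_classes),
        )

    def forward(self, feature_vector):
        return self.net(feature_vector)
```

Being small, such a classifier can be retrained quickly with the few examples provided by the user, either on a remote computer or in the cloud as mentioned above.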
  • the simplest control logic is adapted to command the opening of the bowl 10 in the presence of the authorized animal, while it closes the bowl after a predetermined period of time from the last moment in which the authorized animal has been recognized.
  • a more sophisticated control logic may prevent other animals from taking the place of the authorized animal after the bowl 10 has been opened: in this case, the electronic apparatus 10 would recognize that the animal has changed thus imposing the closure of the lid 14 of the bowl 10.
  • the control logic of the bowl 10 is also configured to send to a user notification messages in real time of the moments in which the animal eats, by virtue of the connection to the Internet 28.
  • control logic having acquired information regarding the amount of food resource contained in the bowl 10, may, for example, alert a user of the empty bowl status or operate a filling mechanism, if present. Such an alert may occur with a sound, with a light code placed on the bowl, with a voice assistant or as a notification on the mobile device of the user.
  • control logic may recognize the approaching gesture and open up without the user pressing a button.
  • the information saved in the memory 24 of the bowl 10 may be sent by means of the interface 27 and the Internet 28 to a remote server. This allows the continuous training of the neural network. New versions of neural networks may be downloaded locally by the bowl.
  • the electronic apparatus 10 and the relevant method 100 for enabling or inhibiting the access to a resource by one or more animals by means of image processing of the present invention have several advantages.
  • the electronic bowl 10 enables or inhibits access to the food resource by an animal in a selective manner, distinguishing between animals of different species and even distinguishing animals of the same species.
  • the electronic bowl 10 of the invention opens only following the detection of the authorized animal, not in the presence of unauthorized animals or other living beings in general, and therefore overcomes the limits of the known bowls based on proximity sensors.
  • the electronic bowl 10 does not require applying to the animal chips with subcutaneous RFID tags or fixed to nameplates: the electronic bowl 10 is therefore more practical than the known solutions.
  • Visual identification by means of convolutional neural networks makes it possible to operate with a high degree of freedom: position of the animal in the proximal volume, lighting of the environment, expressions or postures of the animal, shadows and reflections.
  • the application of convolutional network technology to the specific technical problem, according to the methods described, allows a very high degree of abstraction in the recognition of the animal itself.
  • Selective resource access control is important to prevent unauthorized individuals from ingesting food destined for a particular species of pet. This is particularly useful in contexts in which it is required to prevent small children from ingesting animal food.
  • the suggested methodology may also be applied in the fish industry, to allow only certain fish to access tanks in which a type of food or a pharmacological treatment is provided.
  • the method of the invention solves the problem of enrolling the individual animals which are authorized to access the resource in a rapid manner, as it does not require a long and repeated collection of images of the specific individual. Conversely, it allows, by leveraging a single pre-training of the network, a new individual to be added by presenting only a few images thereof by means of an interface which is simple for the user.
  • with the methodology of the invention, it is possible to discriminate the presence of a particular condition of the skin or hair of the animal or the presence of a fish parasite. Furthermore, with the method of the invention it is possible to prevent the access to the resource by a domestic animal with which a foreign body is temporarily associated, such as, for example, in the case of a cat attempting to introduce a prey (e.g., a mouse) within the domestic environment.
  • the system improves the safety ensured by the bowl with respect to known solutions.
  • the camera 21 may recognize the status of the food resource contained in the compartment 15 avoiding unsuitable, dangerous or unpleasant closures for the animals.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Birds (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and an electronic apparatus (10) for enabling or inhibiting the access to a resource by one or more animals. Such an electronic apparatus comprises a portion (11) for accessing the resource and a portion (12) for controlling the access thereto. The method comprises the steps of: acquiring (101) at least one digital image of a volume proximal to the portion for accessing the resource and outside the electronic apparatus, adapted to contain the animal; processing (102) the at least one digital image acquired, said processing step comprising a step of performing at least one convolution operation on the at least one digital image by means of a trained convolutional neural network; controlling (103) the means (13) for actuating a barrier element (14) of the electronic apparatus on the basis of a processing of said at least one digital image acquired, to move the barrier element from a first position, in which the access to the resource is inhibited, to a second position, in which the access to the resource is enabled, or to block the movement of the barrier element. The step of controlling the means for actuating the barrier element comprises the steps of: obtaining (104) at least one descriptive class of the at least one image of a volume proximal to the portion for accessing the resource and outside the electronic apparatus, on the basis of said processing; generating (105) at least one command signal of the means for actuating the barrier element on the basis at least of said descriptive class of the image.
EP19786871.4A 2018-09-19 2019-09-19 Procédé et appareil électronique pour permettre l'accès à une ressource à un ou plusieurs animaux par traitement d'image Pending EP3852517A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT201800008722 2018-09-19
PCT/IB2019/057927 WO2020058908A1 (fr) 2018-09-19 2019-09-19 Procédé et appareil électronique pour permettre l'accès à une ressource à un ou plusieurs animaux par traitement d'image

Publications (1)

Publication Number Publication Date
EP3852517A1 true EP3852517A1 (fr) 2021-07-28

Family

ID=65031616

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19786871.4A Pending EP3852517A1 (fr) 2018-09-19 2019-09-19 Procédé et appareil électronique pour permettre l'accès à une ressource à un ou plusieurs animaux par traitement d'image

Country Status (2)

Country Link
EP (1) EP3852517A1 (fr)
WO (1) WO2020058908A1 (fr)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7395782B1 (en) * 2004-03-24 2008-07-08 L.P. Holdings Llc System and method for providing selective access to animal food
US7685966B2 (en) * 2006-11-03 2010-03-30 Goehring Heidi L Lidded pet dish
US10572118B2 (en) * 2013-03-28 2020-02-25 David Michael Priest Pattern-based design system
US20160227737A1 (en) * 2015-02-05 2016-08-11 PetBot Inc. Device and method for dispensing a pet treat
US10292363B2 (en) * 2015-02-10 2019-05-21 Harold G Monk Species specific feeder
US10674703B2 (en) * 2016-03-23 2020-06-09 Harold G. Monk Species specific feeder
KR101889460B1 (ko) * 2016-08-11 2018-09-04 주식회사 한스테크놀로지 Automatic pet feeding system capable of detecting the pet's movement by means of a sensing module and notifying the user
KR20180065850A (ko) * 2017-03-23 2018-06-18 송수한 Mobile automatic feeding device, companion animal care robot, companion animal care system comprising the same, and method for controlling the same

Also Published As

Publication number Publication date
WO2020058908A1 (fr) 2020-03-26

Similar Documents

Publication Publication Date Title
Achour et al. Image analysis for individual identification and feeding behaviour monitoring of dairy cows based on Convolutional Neural Networks (CNN)
Alameer et al. Automatic recognition of feeding and foraging behaviour in pigs using deep learning
Chen et al. Recognition of feeding behaviour of pigs and determination of feeding time of each pig by a video-based deep learning method
US20230217903A1 (en) Animal Sensing System
WO2019101720A1 Methods for scene classification of an image in a driving assistance system
CN112425159A Device for identifying animals
CN111134033A Intelligent animal feeder, and method and system thereof
KR102325259B1 (ko) Companion animal life management system and method therefor
CN110896871A Method and device for dispensing food, and intelligent feeding machine
US20200342207A1 (en) 3d biometric identification system for identifying animals
Guo et al. Bigru-attention based cow behavior classification using video data for precision livestock farming
EP3852517A1 (fr) Method and electronic apparatus for enabling access to a resource by one or more animals by means of image processing
Sajithra Varun et al. DeepAID: a design of smart animal intrusion detection and classification using deep hybrid neural networks
US20230263124A1 (en) Livestock restraining devices, systems for livestock management, and uses thereof
JP7360496B2 Determination system
Duraiswami et al. Cattle breed detection and categorization using image processing and machine learning
Hindarto Use ResNet50V2 Deep Learning Model to Classify Five Animal Species
Sayed et al. An automated fish species identification system based on crow search algorithm
KR102655958B1 (ko) System and method for feeding multiple pet dogs using machine learning
KR20210142226A (ko) System for animal identification, food intake recording and feeding control using deep learning-based facial recognition
Rakshith et al. Identification of cattle breeds by segmenting different body parts of the cow using neural network
Alon et al. Machine vision-based automatic lamb identification and drinking activity in a commercial farm
Farah et al. Computing a rodent’s diary
Van der Eijk et al. Seeing is caring–automated assessment of resource use of broilers with computer vision techniques
Humphreys et al. The principle of target-competitor differentiation in object recognition and naming (and its role in category effects in normality and pathology)

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210416

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230530