WO2019138329A1 - Method and system, based on the use of deep learning techniques, for the univocal biometric identification of an animal

Info

Publication number
WO2019138329A1
Authority
WO
WIPO (PCT)
Prior art keywords
animal
embeddings
images
domain
univocal
Application number
PCT/IB2019/050146
Other languages
French (fr)
Inventor
Simone CALDERARA
Luca BERGAMINI
Andrea CAPOBIANCO DONDONA
Ercole DEL NEGRO
Francesco DI TONDO
Original Assignee
Farm4Trade S.R.L.
Application filed by Farm4Trade S.R.L.
Priority to EP19707867.8A (EP3738071A1)
Publication of WO2019138329A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions

Definitions

  • the system hence provides a device for the acquisition of an image of the animal subject, which may be a smartphone or a simple camera; one or more servers handle the network training, image processing and database management functions for recognising identities.
  • the system 1 also comprises a remote server 7 on which resides a software for operationally interacting with said at least one server 5, with said at least one mobile electronic device 2 or at least one camera 3 connectable to at least one PC 4, desktop or laptop, for training the convolutional neural networks and for processing the images acquired by means of the mobile electronic device and/or a camera and for managing a central database.
  • the system provides several possible server organisation architectures:
  • each server will manage an individual farm or an individual zone and, if recognition is unsuccessful, will interrogate its counterparts;
  • each server will manage an individual farm or an individual zone but will have no need to interrogate the other counterpart servers because they will all be synchronised in real time;
  • one or more servers capable of performing different functions: one for training the neural networks, one for processing the images, one for managing the database and one for checking identities;
  • a higher level server, which is the remote server 7, for managing a national and/or international database, and one or more local servers 5, also located on the farm, which manage a local database.
  • the network device 6 is a modem router and/or an access point and/or a network switch.
  • this organisation comprising a plurality of servers will make it possible to process specific subsets of animals, such as for example, animals of a specific age, or animals belonging to a specific zone or animals suitable for specific animal husbandry purposes or animals which participate in specific activities such as competitions, shows, etc.
  • the server may comprise at least one processor and at least one graphics card for parallel computing.
  • the servers will carry out the training process of the neural networks B, C, D and E as represented in the method of the invention.
  • the system, initially based on the biometric recognition of individuals from photos, will subsequently be made usable for real-time recognition on video streams, from which embeddings will still be extracted for database storage and for the analysis thereof as previously described.
  • This new system will initially complement and then replace the systems currently used for identifying animals such as, by way of example, cattle and equines; the systems currently in use are ear tags, subcutaneous microchips, RFID boluses; these systems are very limited because of being easily tampered with, of requiring close contact with the animal for fitting and then for reading in the re-identification step, and of generating considerable costs because, although unit costs are relatively low, the overall costs mount up in particular when large numbers of animals are involved.
  • the system provided by the invention, in contrast, requires no close contact with the animals, because only a few views of the animal are required; it has zero fitting cost for the end user, whose involvement is limited to acquiring photos or a video using a smartphone; it has a low checking cost, because only a smartphone is required for carrying out acquisition; and it provides advantages from the standpoint of the operators in terms of time, both during the identification step and during the step of applying the various identification systems.
  • unambiguous, contactless identification of the animals: a photograph of the animal is taken or a video acquired (even from a distance of some metres), which the system compares with each of the images present in the database on the server and/or in the cloud until an unambiguous identification is made;
  • the system provides the creation of local databases, for example for farms, in which the images of the animals are present locally and not in the cloud and are limited solely to the farm's animals;
  • an "electronic passport": the actual identity of an animal in a restricted group of known animals is checked; identification is very rapid, requires little computing power and only involves accessing images of an individual animal in the database;
  • the system allows the image database to be interrogated on the basis of the geographic position of the animal, that is on the basis of the location in which the photo of the animal is taken; if the photo is taken in a specific geographic area, the system will, in order to accelerate recognition, initially check the images in databases related to animals which should be present in this geographic area; if there is no match, the system will extend the range of analysis (for example from regional to national, European, international, etc.); a minimal sketch of this widening search is given after this list;
  • species identification, for example of animals acting as disease vectors or reservoirs.
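
The widening geographic search mentioned in the list above can be pictured with a short Python sketch; the scope names, the registry contents and the exact matching test are illustrative assumptions (in the real system the match is the embedding comparison described in the detailed description).

    # Search the databases of progressively wider geographic scopes and stop at
    # the first scope in which the animal is recognised.
    SCOPES = ["farm", "regional", "national", "international"]

    def search_with_widening(query, databases_by_scope, match_fn):
        """databases_by_scope: scope -> enrolled identities for that area."""
        for scope in SCOPES:
            match = match_fn(query, databases_by_scope.get(scope, []))
            if match is not None:
                return scope, match
        return None, None                              # no match anywhere: unknown animal

    registries = {
        "farm": ["cow_12", "cow_31"],
        "regional": ["cow_12", "cow_31", "cow_87"],
        "national": ["cow_12", "cow_31", "cow_87", "horse_05"],
    }
    exact = lambda q, candidates: q if q in candidates else None   # toy match test
    print(search_with_widening("horse_05", registries, exact))     # found at national level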

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention describes a method and system, based on the use of deep learning techniques, for the univocal biometric identification of an animal. The method is characterised by the following steps: a. a training step on a human domain and an animal domain to obtain animal embeddings in a latent space homologous to the human latent space by means of convolutional neural networks; b. storing the resultant animal embeddings in a database; c. identifying an animal identity by means of convolutional neural networks. The present invention also comprises a system for the univocal biometric identification of an animal, which makes use of the previously described method.

Description

METHOD AND SYSTEM, BASED ON THE USE OF DEEP LEARNING TECHNIQUES, FOR THE UNIVOCAL BIOMETRIC IDENTIFICATION OF AN ANIMAL
The present invention relates to the field of image recognition-based technologies for recognising the identity of animals and, in particular, to the field of creating a management system for registration data related to each animal identity.
The present invention more particularly relates to the sector of methods and systems for biometric animal recognition which make use of technologies based on artificial intelligence and on computer vision which permit the univocal identification of an animal; more particularly, the present invention uses deep learning techniques to create and manage a biometric database which is capable, when queried, of linking to any animal registry details.
PRIOR ART
Technologies based on artificial intelligence for the recognition of species and individuals have been applied both in a zoological context for recognising populations of wild animals and in an animal husbandry context for identifying individual subjects for the purposes of traceability.
In a zoological context, with the aim of obtaining a better understanding of the complexities of natural ecosystems and improving the management and protection thereof, it has become necessary to have detailed, large-scale knowledge about the number, location and behaviour of animals in natural ecosystems in order to improve biological research directed towards identifying the most varied animal species and estimating population.
In the context of animal husbandry and the distribution of foodstuffs of animal origin, with the aim of ensuring the wholesomeness of food products and of being able to follow the path which has brought foodstuffs to our tables, traceability has been introduced as a tool which ensures the safety and wholesomeness of a food product, that is a tool capable of following the path of a foodstuff from the production step to the subsequent processing and distribution steps; thus, in the case of the meat and dairy industries, the animals are traced from birth and then during the subsequent processing and marketing steps; traceability of each individual has furthermore also become necessary in these industries to contain the spread of any outbreaks of infectious diseases and to be able to identify any positive or suspect animals in diagnostic tests. Another animal husbandry use of technologies directed towards the traceability of a single individual involves use for horses in which high commercial value individuals often take part in competitions and contests and for which there are very frequent risks of theft and fraud; the ability to precisely identify each individual would make it possible to achieve an appreciable reduction in the occurrence of such events.
While these new technologies, which also make use of image acquisition tools such as video or photo cameras, do indeed automatically capture millions of images, the analysis of said images is conventionally carried out by operators, involving major expenditure of time (it takes operators some 2-3 months to evaluate and catalogue a batch of images captured over a period of 6 months) and money, as well as loss of material, because a great proportion of the material gathered in suitable databases remains unused.
Automating the procedures for extracting, evaluating and cataloguing images related to animals is thus becoming a focal point for easily producing large volumes of valuable information available both to ethologists and other operators in order to assist them in carrying out their tasks; the management procedures for the information derived from images of animals have also been considered to be of interest in an animal husbandry context with the aim of cataloguing animals to ensure traceability of the animals themselves and the products of animal origin derived therefrom.
Technologies based on artificial intelligence and in particular on computer vision permit biometric identification of an animal.
A biometric identification system is a specific type of computer system which has the function and purpose of identifying an individual on the basis of one or more biological and/or behavioural characteristics (biometry) by comparing them, by way of algorithms, with previously acquired data present in the system database.
Today, thanks to deep learning and machine learning, the new frontiers of computer vision allow human beings to be recognised by way of algorithms which identify the unique characteristics of the subject.
Numerous technologies which make use of artificial vision based on neural networks have accordingly been developed in the prior art for automatic detailed detection of faces in a human context (one of the best known applications in this context is FaceNet); these networks are capable of analysing millions of images related to the human domain.
With regard to the application of said technology to the animal domain, studies have in recent years been carried out into the use of neural networks firstly for recognising the species to which an individual belongs (in particular for studying wild fauna) and then for recognising a single individual (both for wild fauna and for domestic animals and livestock).
Various patent documents which already describe aspects relating to the inventive concept which is to be patented are highlighted below.
Document CN107292298A entitled "Convolution-based neural network and the classifier model bovine face recognition method" illustrates a method for identifying the features of a head of cattle by means of a convolutional neural network and a classification model; when a new head of cattle is added, only the image data are collected, input into a convolutional neural network model and the new distinctive features are added to the original classification model in order to be identified without there being any need to retrain the convolutional neural network.
Document CN106778902A entitled "Dairy cow individual recognition method based on deep convolutional neural network" provides a method for the recognition of an individual dairy cow on the basis of a deep convolutional neural network by means of image recognition and processing of the image data; according to the method, each individual cow can be efficiently recognised by extracting distinctive features by means of a deep learning convolutional neural network and combining said distinctive features with those typical of dairy cows.
However, the technologies described in the above-stated patent documents exhibit numerous disadvantages:
- they use simple neural networks capable of processing a small number of images of animal subjects,
- said simple neural networks are made up of a small number of nodes,
- they do not use a low-dimensional numerical representation of an item of data (image, video, audio, etc.), like embeddings,
- they do not use knowledge transfer between convolutional networks operating on different domains, which would appreciably reduce the duration of each training step because some steps would already have been completed in other applicational contexts,
- they do not have an effective, efficient and permanent system for storing identities.
Some further known patent documents which set out various systems and procedures for recognising animals are listed below:
- CN106845512A "Animal body identification method and system based on fractal parameters",
- WO2015176637A1 "Non-invasive multimodal biometrical identification system of animals",
- CN107330472A "A marker-free mode the animal individual automatic identification method",
- CN107256398A "Based on feature fusion of cow of the individual identification method".
It is apparent from an analysis of the prior art that there is a need for a method for the univocal biometric identification of an animal which is capable of processing a large number of images (even millions of images) in the same manner as the neural networks which process human faces.
The primary aim of the present invention is that of overcoming the majority of the drawbacks present in currently known technologies by means of a new technology for recognising and monitoring individual animals based on a method and system which makes use of the experience gained from human neural networks in order to transfer this "knowledge" to the animal domain.
The object of the method and system is that of identifying animals at the level of a single individual: given an image or a series of images portraying an animal, the system, after having computed a low-dimensional numerical representation or embedding, predicts the presence or otherwise of the animal's identity in a dataset containing the identities (in terms of other images) of numerous other heads of livestock. A further object is that of developing an identification and traceability system which can be used at a national/international level.
The present invention provides the following advantages:
- permitting unambiguous identification of animals at any time;
- avoiding identity switches;
- permitting rapid monitoring of animals in a national and international context;
- reducing and averting thefts or frauds involving pedigree and high value animals;
- ensuring animals are reunited with owners in the event of loss or theft;
- facilitating identification and recognition during capture and transport as well as loading and unloading operations;
- establishing whether or not an animal belongs to a species and/or breed;
- monitoring infectious diseases and preventing the spread thereof within a livestock population;
- ensuring animal wellbeing in both an animal husbandry and sporting context;
- averting the use of harmful/toxic substances which can cause public health problems;
- planning agricultural development and commercial strategies for an individual country or a region at a national and supranational level.
SUMMARY OF THE INVENTION
The present invention describes a method and a system, based on the use of computer vision techniques, for the univocal biometric identification of an animal, which method and system are directed towards forming one or more databases of animal images which can provide support to the local, regional, national and international registries commonly used for animal traceability.
In particular, the present invention combines computer vision techniques with deep learning techniques, i.e. techniques which belong to the field of machine learning by means of neural networks, the functioning of which mimics human learning.
More particularly, the present invention makes use of the experience gained in a human context in relation to neural networks trained for facial recognition in order to transfer this knowledge to the animal domain.
The present invention provides a method which is made up of the following steps:
a. a training step on a human domain and an animal domain to obtain animal embeddings in a latent space homologous to the human latent space by means of convolutional neural networks;
b. storing the resultant animal embeddings in a database;
c. identifying an animal identity by means of convolutional neural networks.
More particularly, the method according to the present invention comprises an innovative training step which, using images from both domains (human and animal) and different convolutional neural networks, makes it possible to transform the embeddings of the human domain into chimeric images, i.e. images containing features from both domains; these are the starting point for obtaining convolutional neural networks trained to recognise animal images and to produce animal embeddings which maintain the distances between the embeddings of the corresponding human images, so obtaining an animal latent space homologous to that of the human domain. Using human embeddings, which are obtained from convolutional networks trained by means of a catalogue of millions of images from the human domain and are therefore robust and reliable, in the initial training steps makes it possible to transfer the knowledge from these networks to the animal domain as well, so bypassing a long training step which would be difficult to carry out solely in the animal domain.
The principal technical problem solved by the present invention is thus the creation of a system of neural networks which function on the animal domain and prove to be robust, reliable and simpler to train; the training step for the neural networks specialised to the animal domain is simplified precisely by starting from the human embeddings.
The animal embeddings will then be stored to form a database in which each embedding will be associated with an image of the animal; simultaneously with this step of storage in a database, a graph structure will also be created which will indicate a set of distances between the embeddings.
The present invention also provides a system comprising:
- a mobile device capable of capturing photographs, which may be a mobile phone or a camera;
- a server within which the convolutional neural networks are trained and the images acquired by means of the mobile electronic device and/or a camera are processed and where the database is also managed;
- a modem and/or an access point and/or a network switch for connecting the server to an external and/or internal network.
The mobile and desktop application makes it possible, by using images of the "face" and/or of portions of the body of the individual animals, which can readily be acquired with any electronic device equipped with a camera (smartphone, tablet, laptop), to identify the animal quickly, even at a distance. Biometric identification will enable prompt and automatic identification of the individual, whereas conventional identification technologies (microchips and passports) necessitate close and extended contact with the animal and often cannot do without experience on the part of the observer. Using biometric technologies will thus reduce the reliance on human experience which is currently fundamental to identification.
The georeferencing data present in the images acquired by means of smart devices (mobile telephones, tablets, etc.) enable the system to accelerate identification by initially selecting the animal embeddings associated with a specific geographic region and, in the event of failure, to extend the search to adjoining geographic areas and/or to the complete database.
Further features and advantages of the invention will be more readily apparent in the light of the detailed description of some preferred, but non-exclusive, embodiments of the system and method for the univocal identification of an animal, which are illustrated by way of non-limiting example with reference to the drawings, in which:
Fig. 1 - shows a schematic view of the step of training the neural networks according to the method provided by the invention, which makes use of both the human and the animal domains (the prior art training step on the human domain is shown in dashed lines);
Fig. 2 - shows a schematic view of the step of creating a database starting from the resultant animal embeddings and the subsequent step of creating a graph structure between the animal embeddings.
Fig. 3 - shows a schematic view of the step of recognising an animal identity by means of creating a graph between animal embeddings having similar features.
Fig. 4 - shows a schematic view of the system for the univocal biometric identification of an animal.
LEGEND:
101 - Human domain image
102 - Convolutional network A
103.0 - Human embeddings
103.1 - Embeddings of images transformed from one domain to the other and located in a latent space homologous to that of the human domain
103.2 - Animal embeddings
103.3 - Selection of animal embeddings similar to the identity to be tested
104 - Convolutional network B
105 - Convolutional network C
106 - Chimeric image transformed from one domain to the other
106.1 - Selection of chimeric images transformed from one domain to the other
107 - Real animal domain images
108 - Convolutional network D
109 - Graph convolutional network E
110 - Graph structure of animal embeddings stored in database
110.1 - Nodes of graph 110
111 - Database
112 - Image of an animal identity to be identified
113 - Graph structure of animal embeddings related to the identity to be tested
113.1 - Nodes of graph 113
1 - System for the univocal biometric identification of an animal
2 - Mobile electronic device
3 - Camera
4 - PC
5 - Server
6 - Network device (modem router/access point/network switch)
7 - Remote server
DETAILED DESCRIPTION OF THE INVENTION
The present invention firstly describes a method, based on deep learning techniques, for the univocal biometric identification of an animal.
The method for the univocal biometric identification of an animal disclosed by the present invention, based on the use of deep learning techniques, comprises the following steps:
a. a training step on a human domain and an animal domain to obtain animal embeddings in a latent space homologous to the human latent space by means of convolutional neural networks;
b. storing the resultant animal embeddings in a database;
c. identifying an animal identity by means of convolutional neural networks.
The present invention solves the technical problem of training neural networks on the animal domain and thus of creating a system of neural networks which function on the animal domain and prove to be robust, reliable and simpler to train; the training step for the neural networks specialised to the animal domain is simplified precisely by starting from the human embeddings.
As is well known, the domain of human faces has been thoroughly explored in the literature, giving access to substantial datasets; the same is not always feasible for the animal domain, where it is often only possible to collect a few images per individual, said images furthermore being obtained from a limited number of subjects (in contrast with the human domain).
This has led to attempts to transform the images between the two domains, in such a manner as to be able to generate new samples for the animal domain, which is the principal subject matter of the present invention.
The method provided by the invention accordingly starts from human embeddings obtained from networks already trained for this task, namely convolutional network A, 102; these already thoroughly trained networks make it possible to obtain embeddings with distances which are considered valid: images of the same person will be a small distance apart, and images of different people will be a large distance apart; this defines a human latent space in which the distances between embeddings are considered valid.
The method provided by the present invention is made up of an innovative training step which uses images from both domains (human and animal) and, by means of different convolutional neural networks, makes it possible to transform the embeddings of the human domain into animal images.
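
The role of convolutional network A can be pictured with a short Python (PyTorch) sketch. The backbone below is an untrained stand-in for a face-embedding network of the FaceNet type cited above; the layer sizes, the 128-dimensional embedding and the dummy images are illustrative assumptions, not details taken from the patent.

    # Stand-in for "convolutional network A": a CNN that maps a face image to an
    # embedding. With a network properly pre-trained on millions of human faces,
    # the pairwise distances printed at the end would be small for images of the
    # same person and large for images of different people.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FaceEmbedder(nn.Module):
        def __init__(self, embedding_dim: int = 128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, embedding_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.fc(self.features(x).flatten(1))
            return F.normalize(z, dim=1)              # unit-norm embeddings

    network_a = FaceEmbedder().eval()                 # in practice: load pre-trained weights
    images = torch.rand(4, 3, 160, 160)               # e.g. 2 photos each of 2 people (dummy data)
    with torch.no_grad():
        embeddings = network_a(images)
    print(torch.cdist(embeddings, embeddings))        # pairwise distances in the human latent space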
The training step a. of the neural networks, shown in Figure 1, comprises the following stages (a compressed code sketch of these stages is given after the list):
- exploiting embeddings 103.0 present in a latent space related to the human domain obtained by means of trained convolutional networks;
- transforming human embeddings 103.0 into chimeric animal images 106 by means of a convolutional network B, 104;
- refining the chimeric images 106 generated by B by means of a convolutional network C, 105, by comparison with real images, 107, from the animal domain by way of "adversarial training";
- carrying out a training step of a convolutional network D, 108, on the selected images 106.1, i.e. on the chimeric images refined by convolutional network C, in order to obtain embeddings 103.1 of the images transformed from one domain to the other, located in a new latent space homologous to that of the human domain;
- obtaining one or more embeddings, 103.2, for each individual related to the animal domain by means of processing real animal images 107 by means of convolutional network D, 108;
- inserting the animal embeddings, 103.2, into the new latent space homologous to that of the human domain.
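
As promised above, the stages just listed can be compressed into one illustrative training iteration. The patent does not specify architectures, loss functions or optimisers, so the small modules, the binary cross-entropy adversarial loss and the mean-squared distance-preservation term below are assumptions made for the sketch; network A is represented only by its (here random, already normalised) output embeddings.

    # One sketch training iteration wiring networks B, C and D (cf. Fig. 1).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    EMB = 128                                          # embedding size (assumed)

    class GeneratorB(nn.Module):                       # human embedding -> chimeric animal image
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(EMB, 64 * 8 * 8)
            self.up = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
            )
        def forward(self, z):
            return self.up(self.fc(z).view(-1, 64, 8, 8))   # 3 x 32 x 32 image

    class DiscriminatorC(nn.Module):                   # real animal image vs. chimeric image
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
            )
        def forward(self, x):
            return self.net(x)

    class EmbedderD(nn.Module):                        # animal image -> animal embedding
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, EMB),
            )
        def forward(self, x):
            return F.normalize(self.net(x), dim=1)

    net_b, net_c, net_d = GeneratorB(), DiscriminatorC(), EmbedderD()
    opt_bd = torch.optim.Adam(list(net_b.parameters()) + list(net_d.parameters()), lr=2e-4)
    opt_c = torch.optim.Adam(net_c.parameters(), lr=2e-4)

    human_emb = F.normalize(torch.randn(8, EMB), dim=1)    # output of frozen network A (dummy)
    real_animal = torch.rand(8, 3, 32, 32) * 2 - 1         # real animal images (dummy)

    # Update discriminator C: separate real animal images from chimeric ones.
    fake = net_b(human_emb).detach()
    loss_c = F.binary_cross_entropy_with_logits(net_c(real_animal), torch.ones(8, 1)) \
           + F.binary_cross_entropy_with_logits(net_c(fake), torch.zeros(8, 1))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Update B and D: fool C, and make D's embeddings of the chimeric images
    # preserve the pairwise distances of the corresponding human embeddings.
    fake = net_b(human_emb)
    adv = F.binary_cross_entropy_with_logits(net_c(fake), torch.ones(8, 1))
    preserve = F.mse_loss(torch.cdist(net_d(fake), net_d(fake)),
                          torch.cdist(human_emb, human_emb))
    opt_bd.zero_grad(); (adv + preserve).backward(); opt_bd.step()
    print(float(loss_c), float(adv), float(preserve))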
The inputs to the system are the identities of the two domains, each made up of multiple images acquired; in the first part of the system, the human and animal identities are aligned in the latent space.
In particular, the human embeddings are transformed by convolutional network B into images from the animal domain, specifically into images from the domain of animal faces; in a preferred embodiment, the invention applies to images of human and animal faces.
Convolutional network B behaves in such a manner as to attempt to deceive the subsequent convolutional network C by supplying animal images which have been transformed from the human embeddings and are maximally similar to reality.
Convolutional network C is supplied both with the animal images transformed from one domain to the other, i.e. the false images, and with the original images from the animal domain; the task of network C, 105, is to identify the false images and to select the best among them, i.e. those which are closest to the real images from the animal domain.
The images transformed and selected by network C, 106.1, are subsequently transformed by convolutional network D 108 into embeddings; these animal embeddings derived from images transformed from one domain to the other maintain the distances between the embeddings of the corresponding images from the human domain; a new animal latent space homologous to the latent space of the human domain is thus obtained.
Once a latent space of the animal domain which is homologous to that of the human domain has been obtained, the real images from the animal domain 107 are subjected to convolutional network D, 108.
Convolutional network D computes the embeddings of the animal domain 103.2 which will be inserted into the latent space homologous to that of the human domain.
The embeddings of the animal domain 103.2 obtained from the real images, 107, will be stored in the database 111.
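
Enrolment of the real animal images into database 111 can be pictured as follows; the untrained stand-in for network D, the in-memory dictionary used as the database and the identity label are illustrative assumptions.

    # Each real animal image is passed through the trained network D and the
    # resulting embedding is stored under that animal's identity.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    network_d = nn.Sequential(                         # untrained stand-in for network D
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
    ).eval()

    database: dict[str, list[torch.Tensor]] = {}       # stands in for database 111

    def enrol(identity: str, images: torch.Tensor) -> None:
        """images: (N, 3, H, W) real photos of one animal."""
        with torch.no_grad():
            embeddings = F.normalize(network_d(images), dim=1)
        database.setdefault(identity, []).extend(embeddings)   # one embedding per image

    enrol("cow_0042", torch.rand(4, 3, 64, 64))        # e.g. four side views of one animal
    print(len(database["cow_0042"]))                   # 4 stored embeddings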
The training step starting from images from the animal domain makes use of at least four images of the animal viewed from the right-hand side and at least four images from the left-hand side and four images from a front view of the animal and/or video of the animal.
In a preferred embodiment, the images will represent views of the animal's head.
In a further embodiment, the images may represent the animal's entire body.
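
The minimum capture set described above (at least four right-side views, four left-side views and four frontal views) can be expressed as a small validation helper; the view labels and the function name are illustrative.

    # Check that an acquisition session covers the minimum set of views.
    from collections import Counter

    MIN_VIEWS = {"right": 4, "left": 4, "front": 4}

    def capture_set_is_complete(views: list[str]) -> bool:
        """views: one label per acquired image, e.g. ['right', 'front', ...]."""
        counts = Counter(views)
        return all(counts[v] >= n for v, n in MIN_VIEWS.items())

    print(capture_set_is_complete(["right"] * 4 + ["left"] * 4 + ["front"] * 4))  # True
    print(capture_set_is_complete(["right"] * 2 + ["front"] * 4))                 # False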
This first training step a. makes use of a cycle-consistency architecture which provides an adversarial network obtained by combining an autoencoder with an MLP (multilayer perceptron) network; the two networks have opposing objectives: while the second has to identify whether an input item of data (an image) originates from the first network or is a real image, the first attempts to mislead the second by supplying images which are maximally similar to the real ones; this assists in reconstructing images which are more accurate in comparison with using a single autoencoder. Neural network training carried out according to the method provided by the invention makes use of two autoencoders and two discriminators with antagonistic aims (adversarial training).
The two autoencoders work on two different domains:
- the first receives images from the domain of human faces and synthesises a latent space in which the distances between embeddings of images of the same person must be small, while those between embeddings of different people must be large;
- the second autoencoder, in contrast, receives images from the domain of animal faces and carries out the same process.
In addition to these autoencoders, two discriminators assist in synthesising reconstructed images which are more realistic, such as images transformed from the human domain into the animal domain.
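
The objectives of one autoencoder/discriminator pair can be written down at the level of its losses; the L1 reconstruction term, the binary cross-entropy adversarial terms and the dummy tensors are illustrative assumptions, and the same composition would be instantiated once for the human-face domain and once for the animal-face domain.

    # Loss composition for one autoencoder/discriminator pair.
    import torch
    import torch.nn.functional as F

    def autoencoder_losses(x, x_rec, d_real_logits, d_fake_logits):
        """x: real images, x_rec: autoencoder reconstructions,
        d_*_logits: discriminator outputs for real and reconstructed images."""
        rec = F.l1_loss(x_rec, x)                      # reconstruct the input ...
        fool = F.binary_cross_entropy_with_logits(     # ... and fool the discriminator
            d_fake_logits, torch.ones_like(d_fake_logits))
        disc = F.binary_cross_entropy_with_logits(     # discriminator: real -> 1, fake -> 0
            d_real_logits, torch.ones_like(d_real_logits)) \
             + F.binary_cross_entropy_with_logits(
            d_fake_logits.detach(), torch.zeros_like(d_fake_logits))
        return rec + fool, disc

    x, x_rec = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    ae_loss, disc_loss = autoencoder_losses(x, x_rec, torch.randn(2, 1), torch.randn(2, 1))
    print(float(ae_loss), float(disc_loss))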
The training step is followed by database creation step b., as shown in Figure 2, made up of the following stages:
- storing the resultant animal embeddings, 103.2, in a database 111;
- creating a graph structure 110 by means of a convolutional network E, 109, which identifies the specific relationship between the animal embeddings obtained from processing the images, in which each node 110.1 is an animal identity;
- defining a threshold for evaluating the metrics between the nodes 110.1 of the graph 110.
This step b. makes use of the trained networks to associate each animal identity with a series of embeddings, each of which corresponds to an image of the animal; this step makes use of the training results to construct a collection of embeddings for the known identities (such as of the animals whose identity is known).
The embeddings of the identities of the domain of interest (animal) are used to train the second part of the system to find a graph representation which is capable of constructing a rich manifold of information on the basis of few examples.
The embeddings of the animal domain are stored in a database; the convolutional network on graphs E 109 creates a graph structure between the stored embeddings, in which an embedding of the animal domain corresponds to each node of the graph.
In particular each node, which is an embedding, corresponds to an image of an animal; a series of embeddings, each corresponding to an animal image, will thus be associated with each animal identity.
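
The resulting graph can be pictured as a k-nearest-neighbour graph over the stored embeddings; the value of k, the dense adjacency matrix and the identity labels are illustrative assumptions, the patent only stating that each node corresponds to one stored embedding.

    # Build a graph whose nodes are the stored animal embeddings and whose edges
    # connect each node to its k closest neighbours in the latent space.
    import torch
    import torch.nn.functional as F

    def knn_graph(embeddings: torch.Tensor, k: int = 3) -> torch.Tensor:
        """embeddings: (N, D) tensor; returns an (N, N) 0/1 adjacency matrix."""
        dist = torch.cdist(embeddings, embeddings)
        dist.fill_diagonal_(float("inf"))              # no self-edges
        neighbours = dist.topk(k, largest=False).indices
        adj = torch.zeros(len(embeddings), len(embeddings))
        adj.scatter_(1, neighbours, 1.0)
        return ((adj + adj.t()) > 0).float()           # make the graph undirected

    stored = F.normalize(torch.randn(10, 128), dim=1)  # 10 enrolled images (dummy)
    node_identity = ["cow_01"] * 4 + ["cow_02"] * 3 + ["horse_07"] * 3  # node -> identity
    adjacency = knn_graph(stored, k=3)
    print(adjacency.shape, int(adjacency.sum()))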
A registration record, located in the same database or in a central, national or international database, which may include additional items of information related to the animal (farm, place of birth, age, diseases, etc.), will then be associated with each animal identity.
The embeddings stored for each identity may be retrieved in a testing step for new images; each embedding of the domain of interest is associated with the corresponding identity.
Graph convolution architecture is used in this initial database creation step. This architecture accepts embeddings of the domain of interest (animal domain) as input. By means of a graph structure, a series of MLP layers learn distance metrics capable of evaluating the reliability with which images match one or more identities. Although cycle-consistency architecture techniques are capable of generating new identities starting from the domain of human faces, the problem remains of having a small number of photos for each identity.
This problem is solved by applying techniques of few-shot learning on graphs which make it possible to compare a few photos of each individual in order to obtain similarity metrics between embeddings.
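
A graph-convolution block of this kind can be sketched as follows; the neighbour-averaging rule, the MLP sizes and the single reliability score per node are illustrative assumptions standing in for the unspecified architecture of network E.

    # Mix node features along the graph edges, then let a small MLP score how
    # reliably each node matches a candidate identity.
    import torch
    import torch.nn as nn

    class GraphConvBlock(nn.Module):
        def __init__(self, dim: int = 128, hidden: int = 64):
            super().__init__()
            self.mix = nn.Linear(dim, hidden)
            self.score = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            """x: (N, dim) node embeddings, adj: (N, N) adjacency matrix."""
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            h = torch.relu(self.mix(adj @ x / deg))    # average neighbour features, project
            return self.score(h)                       # one match-reliability logit per node

    nodes = torch.randn(10, 128)                       # a few enrolled embeddings plus the query
    adj = (torch.rand(10, 10) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()
    adj.fill_diagonal_(1.0)                            # keep each node's own features
    print(GraphConvBlock()(nodes, adj).shape)          # torch.Size([10, 1])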
These training and database creation steps are followed by identity testing, step c., shown in Figure 3, made up of the following steps:
- processing of one or more images of an animal identity, 112, by means of convolutional network D;
- computing one or more embeddings of the animal identity, 103.2;
- selecting the animal embeddings from the database which are most similar to those of the identity, 103.3;
- creating at least one graph structure, 113, of the embeddings related to the identity and of the animal embeddings which are most similar to those of the identity by means of convolutional network E, 109;
- comparing by means of graph convolution in order to obtain the distance between the nodes 113.1 of said graph 113 in relation to the comparison of said identity with those selected from the database 111;
- applying the threshold to the distances between the nodes of the graph.
The system inputs in this step c. are one or more images of an animal to be identified. The embeddings will be extracted from these images and will be compared with those present in the database; at this point, the most similar identities will be identified so as to filter the database. The most similar identities will subsequently be processed by means of graph convolution carried out by convolutional network E; more particularly, the most similar embeddings, once selected, will be compared by means of the graph with the embeddings obtained from the images of the unknown identity.
A graph structure is then created by means of a series of MLP layers to evaluate how reliably the images match one or more identities.
At this point, after application of the established threshold, two conditions arise: if the evaluated distance is less than the threshold value, the new node is associated with a known identity; if it is greater than the threshold value, the new node is not recognised.
In particular, if the embeddings are close enough to those of an identity already present in the database, they may be associated with the closest identity; if they are too far away from the already known identities, there will be no recognition and the identity may be considered unknown and suitable for inclusion in the database as a new identity.
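By way of example only, the decision logic of this testing step can be sketched as follows, under the simplifying assumption that the graph comparison reduces to a plain Euclidean distance between embeddings; the function and parameter names are hypothetical.

```python
# Illustrative sketch of the testing decision (assumption: a plain Euclidean
# distance stands in for the graph-convolution comparison; names are
# hypothetical and not part of the claimed method).
import numpy as np


def identify(query_embeddings, database, threshold, top_k=5):
    """Return the matched identity id, or None if the animal is unknown.

    query_embeddings: vectors computed from images of the unknown animal.
    database: iterable of (identity_id, embedding) pairs from the stored graph.
    """
    # 1. Rank all stored embeddings by their distance to the query embeddings.
    scored = []
    for identity_id, stored in database:
        d = min(np.linalg.norm(q - stored) for q in query_embeddings)
        scored.append((d, identity_id))
    scored.sort(key=lambda item: item[0])
    # 2. Keep only the most similar candidates (the "filtered" database).
    candidates = scored[:top_k]
    # 3. Apply the threshold to the closest candidate.
    best_distance, best_identity = candidates[0]
    if best_distance < threshold:
        return best_identity  # close enough: associated with a known identity
    return None               # too far: not recognised, candidate for enrolment
```

In practice the threshold would be the one defined during the database creation of step b.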
In the representation of the testing process of Figure 3, the computed embedding is compared by using graph convolution, with the aim of obtaining a graph of the distances from the other embeddings recorded in the database.
The main advantage of the above-stated invention is that of obtaining a robust and reliable neural network capable of processing millions of images and hence of supporting a database extending over the entire national and international territory.
Further advantages of the method provided by the invention are set out below:
- generating new samples of the domain of interest (animal) starting from the (wider) domain of human faces;
- ensuring the preservation of some characteristics between the latent spaces, and hence images of the same person can be transformed into images of the same animal;
- obtaining a latent space in the significant domain of interest as a basis for identification.
One embodiment of the method provided by the invention is applied to a specific animal domain, such as the domain of horses and cattle, where recognition of an individual animal identity is necessary.
This is because the method, which is applied generically to the transfer of knowledge from the human domain (an animal species for which a large and valid database is available) to the domain of an animal species other than humans, can in a further embodiment be applied to the transfer of knowledge from the domain of any animal species for which a valid database is available, making it possible to obtain embeddings originating from already trained neural networks, to the domain of any other animal species for which trained neural networks are not yet available; in this case, training step a. applies to the domains of at least two animal species by using the embeddings of the domain of one of the two animal species which have been obtained from trained convolutional networks.
The method may also be used for identifying individuals of any animal species, including reptiles, and may even be extended to the recognition of species, applied for example to insects.
The present invention also comprises a system 1 for the univocal biometric identification of an animal which makes use of the previously described method; said system, shown in Figure 4, comprises:
- at least one mobile electronic device 2 or at least one camera 3 connectable to at least one PC 4, desktop or laptop, for acquiring images 112 related to each animal subject to be identified and displaying the related registration records, on which resides at least one software for operationally interacting with,
- at least one server 5 on which resides at least one software for training convolutional neural networks and for processing images 112 acquired by means of the mobile electronic device and/or a camera and for managing the database;
- at least one network device 6 for connecting said server 5 to an external and/or internal network.
The system hence provides a device for acquiring an image of an animal subject, which may be a smartphone or a simple camera; one or more servers will handle the network training, image processing and database management functions required for recognising identities.
The system 1 also comprises a remote server 7 on which resides a software for operationally interacting with said at least one server 5, with said at least one mobile electronic device 2 or at least one camera 3 connectable to at least one PC 4, desktop or laptop, for training the convolutional neural networks and for processing the images acquired by means of the mobile electronic device and/or a camera and for managing a central database.
The system provides several possible server organisation architectures:
- a plurality of local servers, equivalent to one another, each capable of performing the same training, image processing and local database creation functions, each containing a subset of the database and interacting with the others by sharing information; each server will manage an individual farm or an individual zone and, if recognition is unsuccessful, will interrogate its counterparts;
- a plurality of local servers, equivalent to one another, each capable of performing the same functions and of backing one another up; each server will manage an individual farm or an individual zone but will have no need to interrogate the other counterpart servers because they will all be synchronised in real time;
- one or more servers capable of performing different functions; one for training the neural networks, one for processing the images, one for managing the database and one for checking identities;
- a higher level server which is the remote server 7, for managing a national and/or international database and one or more local servers 5 also located on the farm which manage a local database.
In the event of a user taking a picture to identify an animal and querying a local server, for example on the farm, there will be no need for an internet connection, since an access point or LAN link will allow the acquisition devices or the PC to connect to the server.
In the event of the acquisition tools being connected to a non-local server, an internet connection by means of a modem router or WiFi connection will be necessary.
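A hypothetical sketch of this "local first, then central" query flow is given below; the endpoint paths, payload format and server addresses are illustrative assumptions and are not defined by the invention.

```python
# Hypothetical sketch of the "local first, then remote" query flow described
# above; the endpoint paths and payload format are assumptions, not part of
# the patent.
import requests  # third-party HTTP client, assumed available


def identify_over_network(image_bytes: bytes,
                          local_server: str = "http://farm-server.local:8000",
                          remote_server: str = "https://central.example.org") -> dict:
    files = {"image": image_bytes}
    # 1. Query the local (farm) server over the LAN; no internet connection needed.
    reply = requests.post(f"{local_server}/identify", files=files, timeout=10)
    result = reply.json()
    if result.get("identity") is not None:
        return result
    # 2. If the local database yields no match, fall back to the central server.
    reply = requests.post(f"{remote_server}/identify", files=files, timeout=30)
    return reply.json()
```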
The network device 6 is a modem router and/or an access point and/or a network switch.
In a further embodiment, this organisation comprising a plurality of servers will make it possible to process specific subsets of animals, such as for example, animals of a specific age, or animals belonging to a specific zone or animals suitable for specific animal husbandry purposes or animals which participate in specific activities such as competitions, shows, etc.
The server may comprise at least one processor and at least one graphics card for parallel computing.
The servers will carry out the training process of the neural networks, B, C, D, and E as represented in the method of the invention.
The system, initially based on the biometric recognition of individuals from photos, will subsequently be made usable for real-time recognition on video streams from which embeddings will still be extracted for database storage and for the analysis thereof as previously described.
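As an illustrative sketch of this video extension, frames could be sampled from the stream and passed to the same embedding network; OpenCV is assumed here for frame capture, and embed() is a hypothetical stand-in for convolutional network D.

```python
# Sketch of extracting embeddings from a video stream (assumptions: OpenCV
# for frame capture, an `embed(frame)` callable standing in for convolutional
# network D, and sampling roughly one frame per second).
import cv2  # opencv-python


def embeddings_from_stream(source=0, seconds=10, embed=None):
    cap = cv2.VideoCapture(source)          # camera index or video file path
    fps = cap.get(cv2.CAP_PROP_FPS) or 25   # fall back if the stream reports 0
    embeddings, frame_idx = [], 0
    while cap.isOpened() and frame_idx < seconds * fps:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % int(fps) == 0 and embed is not None:
            embeddings.append(embed(frame))  # one embedding per sampled frame
        frame_idx += 1
    cap.release()
    return embeddings
```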
This new system will initially complement and then replace the systems currently used for identifying animals such as, by way of example, cattle and equines; the systems currently in use are ear tags, subcutaneous microchips and RFID boluses; these systems are very limited because they are easily tampered with, require close contact with the animal for fitting and then for reading in the re-identification step, and generate considerable costs: although unit costs are relatively low, the overall costs mount up, in particular when large numbers of animals are involved.
The system provided by the invention, in contrast, requires no close contact with the animals, because only a few projections (views) of the animal are required; it has zero fitting cost for the end user, who only needs to acquire photos or a video using a smartphone; it has a low checking cost, because only a smartphone is required for carrying out the acquisition; and it provides advantages for the operators in terms of time, both during the identification step and during the step of applying the various identification systems.
The main advantages of the method and system provided by the present invention over conventional identification methods are set out below:
- time and cost savings for registering and identifying individual animals;
- unambiguous, contactless identification of the animals: a photograph of the animal is taken or a video acquired (even from a distance of some metres) which the system compares with each of the images present in the database on the server and/or in the cloud until an unambiguous identification is made;
- standardisation of recognition procedures;
- identification of an unknown animal in a restricted context: the system provides for the creation of local databases, for example for farms, in which the images of the animals are present locally and not in the cloud and are limited solely to the farm's animals;
- provision of an "electronic passport": the actual identity of an animal in a restricted group of known animals is checked; identification is very rapid and requires little computing power and only involves accessing images of an individual animal in the database;
- identification of an animal by georeferencing: the system allows the image database to be interrogated on the basis of the geographic position of the animal, that is, on the basis of the location in which the photo of the animal is taken; if the photo is taken in a specific geographic area, the system will, in order to accelerate recognition, initially check the images in databases related to animals which should be present in that geographic area; if there is no match, the system will extend the range of analysis (for example from regional to national, European, international, etc.), as illustrated in the sketch after this list;
- checking of livestock thefts, cheating in shows, exchanging of animals in sporting competitions;
- species identification: for example of animals as disease vectors or reservoirs.
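A sketch of the staged, georeferenced search mentioned in the list above is given here; the scope names and helper functions are hypothetical and merely illustrate the progressive widening of the search range.

```python
# Illustrative sketch of the staged, georeferenced search (assumption: the
# database can be filtered by administrative area; scope names are examples).
def georeferenced_identify(query_embeddings, database_for_scope, identify, threshold):
    """Try progressively wider geographic scopes until a match is found.

    database_for_scope(scope) -> iterable of (identity_id, embedding) pairs
    identify(query_embeddings, database, threshold) -> identity id or None
    """
    for scope in ("regional", "national", "european", "international"):
        match = identify(query_embeddings, database_for_scope(scope), threshold)
        if match is not None:
            return match, scope
    return None, None
```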
The subject matter of the invention can be modified and varied in many ways, all of which fall within the inventive concept set out in the attached claims.
Any details can be replaced with other technically equivalent components as required without extending beyond the scope of protection of the present invention.
Although the subject matter has been described with particular reference to the attached figures, the reference numerals used in the description and claims are used to facilitate understanding of the invention and do not in any way limit the claimed scope of protection.

Claims

1. A method, based on the use of deep learning techniques, for the univocal biometric identification of an animal, characterised by the following steps:
a. training on a human domain and an animal domain to obtain animal embeddings in a latent space homologous to the human latent space by means of convolutional neural networks;
b. storing the resultant animal embeddings in a database;
c. identifying an animal identity by means of convolutional neural networks.
2. A method, based on the use of deep learning techniques, for the univocal biometric identification of an animal according to claim 1, in which step a. is characterised by the following further steps:
- exploiting embeddings (103.0) present in a latent space related to the human domain, obtained by means of trained convolutional networks;
- transforming human embeddings (103.0) into chimeric animal images (106) by means of a convolutional network B (104);
- refining the chimeric images (106) generated by convolutional network B (104) by means of a convolutional network C (105), by comparison with real images of the animal domain (107), by means of adversarial training;
- carrying out a training step of a convolutional network D (108) on the chimeric images (106.1) refined by convolutional network C, in order to obtain embeddings (103.1) of the images transformed from one domain to the other, located in a new latent space homologous to that of the human domain;
- obtaining one or more animal embeddings (103.2) for each individual related to the animal domain by means of processing real animal images (107) by means of convolutional network D (108);
- including the animal embeddings (103.2) into the new latent space homologous to that of the human domain.
3. A method, based on the use of deep learning techniques, for the univocal biometric identification of an animal according to claim 1, in which step b. is characterised by the following further steps:
- storing the resultant animal embeddings (103.2) in a database (111);
- creating a graph structure (110) by means of a convolutional network E (109) which identifies the specific relationship between the animal embeddings (103.2) obtained from processing the animal images (107), in which each node (110.1) of the graph (110) is an animal identity;
- defining a threshold for evaluating the distances between the nodes (110.1) of the graph (110).
4. A method, based on the use of deep learning techniques, for the univocal biometric identification of an animal according to claim 3, characterised in that:
- each node (110.1) of the graph (110) is associated with an embedding that represents an image of the animal;
- each image is associated with a registration record of the animal.
5. A method, based on the use of deep learning techniques, for the univocal biometric identification of an animal according to claim 1, in which step c. is characterised by the following further steps:
- processing of one or more images of an animal identity (112) to be identified, by means of convolutional network D (108);
- computing one or more embeddings of the identity (103.2);
- selecting animal embeddings from the database which are more similar to those of the identity (103.3);
- creating at least one graph structure (113) of the embeddings related to the identity (103.2) and of the animal embeddings which are most similar to those of the identity, by means of convolutional network E (109);
- comparing by means of graph convolution in order to obtain the distance between the nodes (113.1) of said graph (113) in relation to the comparison of said identity with those selected from the database (111);
- applying the threshold to the distances between the nodes (113.1) of the graph (113).
6. A method, based on the use of deep learning techniques, for the univocal biometric identification of an animal according to any one of the preceding claims, characterised in that, if the evaluated distance is less than the threshold value, the new node is associated with a known identity, whereas, if it is greater than the threshold value, the new node is not recognised.
7. A method, based on the use of deep learning techniques, for the univocal biometric identification of an animal according to claims 1 and 2, characterised in that training step a. applies to the domains of at least two animal species by using the embeddings of the domain of one of the two animal species which have been obtained from trained convolutional networks.
8. A system (1) for the univocal biometric identification of an animal which uses the method according to claims 1 to 7, characterised in that it comprises:
- at least one mobile electronic device (2) or at least one camera (3) connectable to at least one PC (4), desktop or laptop, for acquiring images (112) related to each animal subject to be identified and for displaying the related registration records, on which resides at least one software for operationally interacting with,
- at least one server (5) on which resides at least one software for training convolutional neural networks and for processing images (112) acquired by means of the mobile electronic device (2) and/or a camera (3) and for managing the database (111);
- at least one network device (6) for connecting said server (5) to an external and/or internal network.
9. A system (1) for the univocal biometric identification of an animal according to claim 8, characterised in that said network device (6) is a modem router and/or an access point and/or a network switch.
10. A system (1) for the univocal biometric identification of an animal according to claims 8 and 9, characterised in that it comprises a remote server (7) on which resides a software for operationally interacting with said at least one server (5), with said at least one mobile electronic device (2) or with at least one camera (3) connectable to at least one PC (4), desktop or laptop, for training the convolutional neural networks, for processing the images (112) acquired by means of the mobile electronic device (2) and/or a camera (3) and for managing a central database.
11. A system (1) for the univocal biometric identification of an animal according to any one of the preceding claims, characterised in that the convolutional neural networks trained by the software of said at least one server (5; 7) are B (104), C (105), D (108) and E (109).
12. A system (1) for the univocal biometric identification of an animal according to any one of the preceding claims, characterised in that said mobile electronic device (2) is a notebook, tablet or smartphone.
13. A system (1) for the univocal biometric identification of an animal according to any one of the preceding claims, characterised in that said at least one server (5; 7) comprises at least one processor and at least one graphics card for parallel computing.
PCT/IB2019/050146 2018-01-09 2019-01-09 Method and system, based on the use of deep learning techniques, for the univocal biometric identification of an animal WO2019138329A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19707867.8A EP3738071A1 (en) 2018-01-10 2019-01-09 Method and system, based on the use of deep learning techniques, for the univocal biometric identification of an animal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102018000000640 2018-01-09
IT201800000640A IT201800000640A1 (en) 2018-01-10 2018-01-10 METHOD AND SYSTEM FOR THE UNIQUE BIOMETRIC RECOGNITION OF AN ANIMAL, BASED ON THE USE OF DEEP LEARNING TECHNIQUES

Publications (1)

Publication Number Publication Date
WO2019138329A1 true WO2019138329A1 (en) 2019-07-18

Family

ID=62089843

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/050146 WO2019138329A1 (en) 2018-01-09 2019-01-09 Method and system, based on the use of deep learning techniques, for the univocal biometric identification of an animal

Country Status (3)

Country Link
EP (1) EP3738071A1 (en)
IT (1) IT201800000640A1 (en)
WO (1) WO2019138329A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778902A (en) * 2017-01-03 2017-05-31 河北工业大学 Milk cow individual discrimination method based on depth convolutional neural networks
CN107292298A (en) * 2017-08-09 2017-10-24 北方民族大学 Ox face recognition method based on convolutional neural networks and sorter model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANDREW WILLIAM ET AL: "Visual Localisation and Individual Identification of Holstein Friesian Cattle via Deep Learning", 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), IEEE, 22 October 2017 (2017-10-22), pages 2850 - 2859, XP033303763, DOI: 10.1109/ICCVW.2017.336 *
ARAM TER-SARKISOV ET AL: "Bootstrapping Labelled Dataset Construction for Cow Tracking and Behavior Analysis", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 30 March 2017 (2017-03-30), XP080753267, DOI: 10.1109/CRV.2017.25 *
MAHEEN RASHID ET AL: "Interspecies Knowledge Transfer for Facial Keypoint Detection", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 13 April 2017 (2017-04-13), XP080762748, DOI: 10.1109/CVPR.2017.174 *
PETER SKVARENINA: "Detecting facial features using Deep Learning", 2 August 2017 (2017-08-02), XP002784342, Retrieved from the Internet <URL:https://towardsdatascience.com/detecting-facial-features-using-deep-learning-2e23c8660a7a> [retrieved on 20180831] *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10757914B1 (en) * 2019-04-17 2020-09-01 National Taiwan University Feeding analysis system
CN111768792A (en) * 2020-05-15 2020-10-13 宁波大学 Audio steganalysis method based on convolutional neural network and domain confrontation learning
CN111768792B (en) * 2020-05-15 2024-02-09 天翼安全科技有限公司 Audio steganalysis method based on convolutional neural network and domain countermeasure learning
CN111753697A (en) * 2020-06-17 2020-10-09 新疆爱华盈通信息技术有限公司 Intelligent pet management system and management method thereof
CN112069877A (en) * 2020-07-21 2020-12-11 北京大学 Face information identification method based on edge information and attention mechanism
CN112069877B (en) * 2020-07-21 2022-05-03 北京大学 Face information identification method based on edge information and attention mechanism
US11910784B2 (en) 2020-10-14 2024-02-27 One Cup Productions Ltd. Animal visual identification, tracking, monitoring and assessment systems and methods thereof

Also Published As

Publication number Publication date
IT201800000640A1 (en) 2019-07-10
EP3738071A1 (en) 2020-11-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19707867; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019707867; Country of ref document: EP; Effective date: 20200810)