US11847849B2 - System and method for companion animal identification based on artificial intelligence - Google Patents
System and method for companion animal identification based on artificial intelligence
Info
- Publication number
- US11847849B2 (Application US 17/388,013)
- Authority
- US
- United States
- Prior art keywords
- companion animal
- face
- target
- image
- patch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/22—Social work or social welfare, e.g. community support activities or counselling services
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/243—Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/469—Contour-based spatial representations, e.g. vector-coding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Definitions
- the view correction unit may be further configured to recognize an animal species of the target companion animal in the preview image of the target companion animal, and recognize physical features of the target companion animal in the preview image of the target companion animal.
- the physical features may include dolichocephalic information or brachycephalic information when the corresponding animal is a dog.
- the view correction unit may include a third model to which the preview image of the target companion animal is applied.
- the third model is a machine learning model configured to extract third features from an input image and detect a face region and a feature point in the preview image of the target companion animal based on the extracted features.
- the third model may be trained using a third training dataset including a plurality of third training samples.
- Each of the plurality of third training samples includes at least one of a face image and face region information or feature point information of the companion animal, and the face region information includes a location or size of the face region, and the feature point information includes a face component corresponding to the feature point and a location of the feature point.
- the feature point may include at least one of a first feature point corresponding to a left eye, a second feature point corresponding to a right eye, or a third feature point corresponding to a nose.
- the view correction unit may be further configured to check if a first alignment condition indicating alignment of the face of the target companion animal with respect to a yaw axis is satisfied, check if a second alignment condition indicating alignment of the face of the target companion animal with respect to a pitch axis is satisfied, and align the preview image of the target companion animal to align the face of the target companion animal with respect to a roll axis when the first alignment condition and the second alignment condition are satisfied.
- a face view direction is the roll axis
- an up or down direction is the pitch axis
- a lateral direction is the yaw axis.
- the first alignment condition may include at least one of locations of left and right eyes disposed at a central part of an image frame or a location of a nose disposed at a center between the left eye and the right eye.
- the second alignment condition may include alignment of the face such that an area of the face region in the image is equal or similar to a front view area of the face.
- the view correction unit may be further configured to provide a user with a guide for an unsatisfied alignment condition when the face of the target companion animal fails to satisfy at least one of the first alignment condition or the second alignment condition.
- the view correction unit may be further configured to enable the image acquisition unit to capture the facial images of the target companion animal in response to the face of the target companion animal satisfying the first alignment condition and the second alignment condition.
- the view correction unit may be configured to calculate a connection vector including first and second feature points corresponding to both eyes, calculate a rotation matrix (T) of the connection vector, and align the connection vector into a non-rotated state based on the calculated rotation matrix (T) of the connection vector.
- the identification unit may be configured to identify the target companion animal by applying the entire facial image and the at least one sub-patch to a fourth model.
- the fourth model may include a feature extraction part to extract features from the entire facial image and the at least one sub-patch, a matching part to classify the entire facial image and each patch into a class of an identifier that matches the target companion animal based on the features extracted from the corresponding patch, and a determination part to determine a final identifier that matches the target companion animal based on the extracted features for the entire facial image and each patch or the matching results for the entire facial image and each patch.
- the determination part may determine the final identifier of the target companion animal by voting, bagging or boosting the acquired matching results for each patch and the entire facial image, when the matching results indicating the identifier of the class that matches the target companion animal for each patch and the entire facial image are acquired from the matching part.
- the determination part may calculate matching scores of the target companion animal for the class for each patch and the entire facial image, the matching scores indicating an extent to which the target companion animal matches the class for each input, combine the matching scores of the target companion animal for the class for each input, and determine the final identifier that matches the target companion animal based on the combination of the matching scores of the target companion animal for the class.
- a rule for combining the matching scores may include at least one of product rule, sum rule, minimax rule, median rule or weighted sum rule.
- a computer-readable recording medium stores program instructions which are readable and executable by a computing device.
- the program instructions are executed by a processor of the computing device to acquire a preview image for capturing a face of a target companion animal, check if the face of the target companion animal is aligned according to a preset criterion, capture the face of the target companion animal when it is determined that the face of the target companion animal is aligned, and identify the target companion animal by extracting features from a face image of the target companion animal having an aligned face view.
- the companion animal identification system may provide a guide to users while the users are making attempts to acquire images of target companion animals to identify the companion animals.
- the guide is associated with a condition for face views of the target companion animals with improved companion animal identification performance.
- the companion animal identification system may automatically acquire images. Accordingly, it is possible to avoid the inconvenience of users who must perform activities to attract the companion animals' attention in order to capture front view images of the companion animals, or who have to continue their attempts until suitable images are acquired.
- the companion animal identification system identifies companion animals based on artificial intelligence. Accordingly, since it does not cause direct harm to companion animals, it is possible to reduce reluctance that the companion animals and owners may feel, save the time and cost required for registration and simplify the procedure, thereby reducing consumers' burden and increasing convenience, and further, promoting the registration of the companion animals. As a result, it is possible to reduce the social costs associated with abandoned/lost pets and provide opportunities for activation of related services, for example, pet insurance products, and profit improvement of the existing related industry.
- FIG. 2 is a conceptual diagram of the operation of the companion animal identification system of FIG. 1 .
- FIG. 5 is a diagram illustrating a process of checking a second alignment condition according to an embodiment of the present disclosure.
- FIG. 6 is a diagram illustrating the provision of alignment condition checking results according to an embodiment of the present disclosure.
- FIGS. 7 A and 7 B are diagrams showing a guide of a first alignment condition according to an embodiment of the present disclosure.
- FIGS. 8 A and 8 B are diagrams showing a guide of a second alignment condition according to an embodiment of the present disclosure.
- FIG. 9 is a diagram illustrating roll-wise alignment according to an embodiment of the present disclosure.
- FIG. 10 is a flowchart of a companion animal identification method according to an embodiment of the present disclosure.
- a companion animal refers to a variety of animals including pets and livestock.
- the companion animal is not limited to dogs, and may include cats, hamsters, parrots, lizards, cows, horses, sheep, pigs, etc.
- FIG. 1 is a schematic block diagram of a companion animal identification system based on artificial intelligence according to an embodiment of the present disclosure
- FIG. 2 is a conceptual diagram of the operation of the companion animal identification system of FIG. 1 .
- the companion animal identification system based on artificial intelligence may include an image acquisition unit 100 , a display unit 200 , a view correction unit 300 and an identification unit 500 .
- the companion animal identification system 1 may have aspects of entirely hardware, entirely software, or partly hardware and partly software.
- the system may refer collectively to hardware capable of processing data and software that manages the hardware.
- the term “unit”, “module”, “device” or “system” as used herein is intended to refer to a combination of hardware and software that runs by the corresponding hardware.
- the hardware may be a data processing device including a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) or other processor.
- the software may refer to a process being executed, an object, an executable, a thread of execution and a program.
- the image acquisition unit 100 is configured to acquire a face image of a companion animal by capturing the face of the companion animal.
- the face image of the companion animal includes the entire face of the companion animal or a part of the face.
- the image acquisition unit 100 is configured to capture the entire facial image of the companion animal. The entire facial image will be described in more detail below.
- the image acquisition unit 100 may include a camera module of a smartphone, but is not limited thereto, and may include a variety of devices capable of capturing an object and generating and transmitting image data, for example, digital cameras.
- the image acquisition unit 100 may acquire at least one face image for one target companion animal.
- the image acquisition unit 100 may capture a plurality of frames for one target companion animal.
- the image acquisition unit 100 acquires the face image of the target companion animal, and provides face image data of the target companion animal to the view correction unit 300 or the identification unit 500 .
- the display unit 200 displays and outputs information processed by the companion animal identification system 1 .
- the display unit 200 may display a screen including a preview image of the image acquisition unit 100 .
- the companion animal identification system 1 displays the screen including the preview image corresponding to the input image of the image acquisition unit 100 on the entire screen of the display unit 200 or a part of the screen. A user may see an image of the target companion animal through the preview screen before the image is acquired.
- the display unit 200 may display a guide for a suitable image having a face view satisfying a preset alignment condition as described below, or an identification result.
- the display unit 200 may provide the preview image to the view correction unit 300 .
- the view correction unit 300 may provide a guide to capture the facial images of the target companion animal by the image acquisition unit 100 . To this end, the view correction unit 300 may check if the face of the target companion animal is currently aligned to satisfy the preset alignment condition based on the preview image received from the display unit 200 .
- the view correction unit 300 may correct the face image into an image suitable for recognition of the companion animal. For example, since there is no need to provide the guide, the view correction unit 300 may directly perform the roll-axis alignment operation described below on the acquired image.
- the image suitable for identification is acquired by the view correction operation of the view correction unit 300 such that the face of the target companion animal is aligned for a view with high recognition performance.
- the view correction operation will be described in more detail with reference to FIGS. 3 to 9 .
- the view correction unit 300 is configured to extract features from the face in the original image of the target companion animal.
- the view correction unit 300 is further configured to recognize an animal species of the companion animal based on the extracted features.
- the view correction unit 300 may include an animal species recognition model (or referred to as a first model).
- the animal species recognition model is a pre-trained machine learning model to extract the features from the input image and recognize the animal species of the companion animal in the input image based on the extracted features.
- the features extracted by the first model are features for recognizing the animal species of the companion animal and may be referred to as first features.
- the animal species recognition model may have a variety of neural network structures.
- the animal species recognition model may have a Convolutional Neural Network (CNN) structure, but is not limited thereto.
- the animal species recognition model is trained using a first training dataset.
- the first training dataset may be split into subsets for each animal species. Additionally, the subset for each animal species may be, in turn, split into subsets for each subspecies.
- the first training dataset includes a plurality of first training samples.
- Each first training sample may include a face image of an animal. Additionally, each first training sample may further include a species of the corresponding animal and/or a subspecies of the corresponding animal.
- the first training dataset may include a plurality of first training samples associated with a dog.
- the plurality of first training samples may be split into subsets for each dog breed.
- Each of the plurality of first training samples may include a face image of a dog, species information indicating the dog and subspecies information indicating the dog breed. Then, in the above example, when the face image of the target companion animal (a dog) is input to the trained animal species recognition model, the dog breed is recognized, thereby yielding dog breed information.
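The patent states only that the first model may have a CNN structure; as a rough illustration, a minimal breed classifier of that kind (assuming PyTorch, with illustrative layer sizes and a hypothetical breed count) could look like this:

```python
import torch
import torch.nn as nn

class SpeciesRecognitionModel(nn.Module):
    """First model: recognizes the animal species/subspecies (e.g. dog breed)."""
    def __init__(self, num_breeds: int = 120):  # hypothetical breed count
        super().__init__()
        # Feature extraction ("first features")
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_breeds)  # breed (subspecies) scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SpeciesRecognitionModel()
logits = model(torch.randn(1, 3, 224, 224))   # one RGB face image
breed_id = int(logits.argmax(dim=1))          # recognized breed index
```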
- the view correction unit 300 is further configured to recognize physical features of the target companion animal.
- the physical features affect the recognition of the image, and in the case of a dog, the physical features may include a dolichocephalic or brachycephalic head shape.
- the view correction unit 300 may include a physical feature recognition model (or referred to as a second model).
- the physical feature recognition model is a pre-trained machine learning model to extract the features from the input image, and recognize the physical features of the companion animal in the input image based on the extracted features.
- the features extracted by the second model are features for recognizing the physical features of the companion animal and may be referred to as second features.
- the physical feature recognition model may have a variety of neural network structures.
- the physical feature recognition model may have a CNN structure, but is not limited thereto.
- the physical feature recognition model is trained using a second training dataset.
- the second training dataset may be split into subsets for each object.
- the second training dataset may share the same image with the first training dataset.
- the second training dataset includes a plurality of second training samples.
- Each second training sample includes a face image of an animal and physical features of the corresponding animal. The physical features may be different for each animal species.
- each of the plurality of second training samples may further include an animal species associated with the physical features.
- the second training dataset may include a plurality of second training samples associated with a dog.
- Each of the plurality of second training samples includes a face image of the dog, and physical feature information of the dog.
- the physical feature information may include information indicating whether the corresponding dog has a dolichocephalic or a brachycephalic head (for example, dolichocephalic information or brachycephalic information). Then, in the above example, when the face image of the target companion animal (a dog) is input to the trained physical feature recognition model, a dolichocephalic or brachycephalic head is recognized, thereby yielding dolichocephalic information or brachycephalic information.
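The storage format of the second training samples is not specified; a minimal sketch of one record as the text describes it, with assumed field names, is:

```python
from dataclasses import dataclass

@dataclass
class PhysicalFeatureSample:
    face_image_path: str   # face image of the animal
    species: str           # animal species associated with the physical features, e.g. "dog"
    head_shape: str        # "dolichocephalic" or "brachycephalic"

sample = PhysicalFeatureSample("dog_0001.jpg", "dog", "brachycephalic")
```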
- the view correction unit 300 may recognize the animal species or physical features using any of the plurality of face images.
- the first one of the plurality of received face images may be used.
- the present disclosure is not limited thereto.
- the view correction unit 300 is further configured to determine a face region and a feature point in the face image of the target companion animal.
- the view correction unit 300 may include a face analysis model (or referred to as a third model).
- the view correction unit 300 may extract the face region and the feature point in the original input image using the face analysis model.
- the face analysis model is configured to extract the features from the input image, and determine the face region including the face part of the object in the input image based on the extracted features. Additionally, the face analysis model is further configured to extract the features from the input image, and determine the feature point disposed at the face part of the object in the input image based on the extracted features.
- the features extracted by the third model are features for determining the feature point of the face and may be referred to as third features.
- the feature point may include a first feature point corresponding to the left eye, a second feature point corresponding to the right eye and a third feature point corresponding to the nose.
- the feature point is not limited thereto, and may include any identifiable point in the face region that is required for recognition using at least part of the face of the animal.
- the face analysis model may be a trained machine learning model using a third training dataset.
- the face analysis model may have a variety of neural network structures.
- the face analysis model may have a CNN structure, but is not limited thereto.
- the face analysis model is trained using the third training dataset including a plurality of third training samples.
- Each of the plurality of third training samples includes a face image of an object, face region information of the object and feature point information.
- the face region information may include the location, region size and boundary of the face region in the image.
- the feature point information may include the location of the feature point, the size of the feature point and boundary in the image.
- the third training dataset may include a plurality of third training samples associated with a dog.
- Each of the plurality of third training samples may include a face image of the dog, and a face region and a feature point of the dog.
- FIG. 3 is a diagram showing face region/feature point extraction results according to an embodiment of the present disclosure.
- when the face image of the target companion animal (a dog) is input to the trained face analysis model, the face region including the face part and the feature points are determined as shown in FIG. 3 , and finally, the face region and the feature points may be extracted.
- the feature point may be formed with a smaller area than the face region.
- the location of the feature point may be a location of a point (for example, a center point) in a region having the corresponding area.
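The outputs of the third model are a face region and the feature points; a sketch of how such a result might be represented (the field names and coordinate convention are assumptions, not taken from the patent) is:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FaceAnalysisResult:
    face_box: Tuple[int, int, int, int]  # face region as (x, y, width, height)
    left_eye: Tuple[float, float]        # first feature point (e.g. center of its small region)
    right_eye: Tuple[float, float]       # second feature point
    nose: Tuple[float, float]            # third feature point

# Example output for one detected dog face (values are illustrative).
result = FaceAnalysisResult(face_box=(80, 60, 200, 220),
                            left_eye=(140.0, 150.0),
                            right_eye=(220.0, 148.0),
                            nose=(180.0, 210.0))
```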
- the view correction unit 300 is further configured to check if the face image of the target companion animal satisfies a first alignment condition.
- the first alignment condition is a condition for determining if the face of the companion animal is aligned with respect to the yaw axis.
- a face view direction is the roll axis
- an up/down direction is the pitch axis
- a lateral direction is the yaw axis.
- the first alignment condition includes the locations of the left/right eyes disposed at the central part of the image frame and/or the location of the nose disposed at the center between the left eye and the right eye.
- the view correction unit 300 may check if the face in the face image of the target companion animal satisfies the first alignment condition based on at least one of the first to third feature points.
- the view correction unit 300 may determine if the locations of the left/right eyes are disposed at the central part of the image frame by calculating if the locations of the first and second feature points corresponding to the left/right eyes are disposed at the central part of the image frame.
- the view correction unit 300 may determine if the location of the nose is disposed at the center between the left eye and the right eye using a connecting line between the first and second feature points corresponding to the left/right eyes and the third feature point corresponding to the nose.
- whether the location of the nose is disposed at the center between the left eye and the right eye is determined by projecting the third feature point onto the connecting line. For example, when the distance between the projection point and the first feature point of the left eye and the distance between the projection point and the second feature point of the right eye are equal or similar (within a predetermined tolerance interval), it is determined that the location of the nose is disposed at the center.
- FIG. 4 is a diagram illustrating a process of checking the first alignment condition according to an embodiment of the present disclosure.
- the first to third feature points are extracted from the face image of the dog. Subsequently, a connecting line between the first feature point A and the second feature point B is formed, and when a projected location of the third feature point C onto the connecting line is included at the center of the connecting line, it is determined that the location of the nose is disposed at the center between the left eye and the right eye in the dog's face.
- a strong pose alignment criterion which determines that the first alignment condition is satisfied when the projected location of the third feature point C onto the connecting line is disposed at the center of the connecting line between the first feature point A and the second feature point B may be used.
- a moderate pose alignment criterion which determines that the first alignment condition is satisfied when the third feature point C is disposed on the connecting line between the first feature point A and the second feature point B may be used.
- a threshold for how much yaw axial pose change is allowed as the moderate criterion condition may be set and used.
- the threshold refers to an allowable range of points in which the projected location of the third feature point onto the connecting line is allowed to be disposed on the connecting line between the first feature point A and the second feature point B. For example, when the threshold is set to 20%, it means that the projected location of the third feature point C should be disposed within the range of 20% from the center of the connecting line to each of the first feature point A and the second feature point B.
- the threshold may be determined according to breed or an individual's snout shape.
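A minimal sketch of the first alignment check described above, assuming numpy and one reading of the 20% threshold example (the nose's projection onto the eye-to-eye line must lie near its midpoint):

```python
import numpy as np

def satisfies_first_alignment(left_eye, right_eye, nose, threshold=0.20):
    """Yaw check: the nose's projection onto the eye-to-eye line must lie near its center."""
    a, b, c = map(np.asarray, (left_eye, right_eye, nose))
    ab = b - a
    # Parametric position of the projected nose: t = 0 at the left eye,
    # t = 1 at the right eye, t = 0.5 at the center of the connecting line.
    t = np.dot(c - a, ab) / np.dot(ab, ab)
    # One reading of the 20% example: the projection may deviate from the
    # center by up to 20% of the half-length toward either eye.
    return abs(t - 0.5) <= threshold * 0.5

print(satisfies_first_alignment((100, 120), (180, 118), (141, 160)))  # True: nose near center
```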
- the view correction unit 300 checks if the face image of the target companion animal satisfies a second alignment condition.
- the second alignment condition is a condition for determining if the face of the companion animal is aligned with respect to the pitch axis.
- the second alignment condition may include the face aligned to match the area of the face region in the image with the front view area of the face.
- alignment to match the front view area of the face indicates a situation in which the face points toward the image acquisition unit 100 so that it is possible to capture the entire front part, or almost the entire front part, of the face.
- otherwise, the face is captured with a smaller area than the front view area.
- the view correction unit 300 may check if the face in the face image of the target companion animal satisfies the second alignment condition based on at least one of the first to third feature points.
- the view correction unit 300 may check if the face in the face image of the target companion animal satisfies the second alignment condition by comparing an angle between a first connecting line connecting the first feature point to the second feature point and a second connecting line connecting any one of the first feature point and the second feature point to the third feature point with a preset reference angle.
- the reference angle may be set according to the physical features.
- the view correction unit 300 checks if the second alignment condition is satisfied using the preset reference angle for the recognized physical features.
- the reference angle may include a first reference angle for the dolichocephalic and a second reference angle for the brachycephalic.
- the second reference angle is larger than the first reference angle.
- the first reference angle may range from 40° to 50° (for example, about 45°).
- the second reference angle may range from 50° to 70° (for example, about 60°).
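A minimal sketch of the second alignment check, assuming numpy; the reference angles follow the example values above (about 45° for a dolichocephalic head and about 60° for a brachycephalic head) and the tolerance is an assumption:

```python
import numpy as np

REFERENCE_ANGLE_DEG = {"dolichocephalic": 45.0, "brachycephalic": 60.0}

def satisfies_second_alignment(left_eye, right_eye, nose, head_shape, tol_deg=5.0):
    """Pitch check: compare the eye-line / eye-nose-line angle with the reference angle."""
    a, b, c = map(np.asarray, (left_eye, right_eye, nose))
    eye_line = b - a          # first connecting line (left eye -> right eye)
    eye_nose = c - a          # second connecting line (left eye -> nose)
    cos_angle = np.dot(eye_line, eye_nose) / (np.linalg.norm(eye_line) * np.linalg.norm(eye_nose))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return abs(angle - REFERENCE_ANGLE_DEG[head_shape]) <= tol_deg

print(satisfies_second_alignment((140, 150), (220, 148), (180, 210), "brachycephalic"))  # True
```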
- FIG. 5 is a diagram illustrating a process of checking the second alignment condition according to an embodiment of the present disclosure.
- the face image of the dog of FIG. 5 may be acquired by the image acquisition unit 100 .
- the face image of the dog may be acquired as a preview image.
- the view correction unit 300 may recognize that the dog breed of the dog of FIG. 5 is Pomeranian based on the face image (for example, the preview image) of the dog of FIG. 5 .
- the view correction unit 300 may recognize that the physical features of the dog of FIG. 5 indicate a brachycephalic head.
- the view correction unit 300 may check if the face of the Pomeranian of FIG. 5 is aligned to satisfy the second alignment condition. Since the face image of FIG. 5 shows a brachycephalic dog, the second reference angle for the brachycephalic is used to check if the second alignment condition is satisfied. When the angle of FIG. 5 matches the second reference angle (for example, 45°), the view correction unit 300 determines that the face of the target dog is aligned to satisfy the second alignment condition.
- the view correction unit 300 provides the user with a result of checking if the face of the target companion animal is aligned to satisfy the first alignment condition and the second alignment condition.
- FIG. 6 is a diagram illustrating the provision of the alignment condition checking results according to an embodiment of the present disclosure.
- when the face of the target companion animal is aligned to satisfy the first alignment condition and the second alignment condition, the companion animal identification system 1 provides positive feedback. On the contrary, when the face of the target companion animal does not satisfy at least one of the first alignment condition or the second alignment condition, the companion animal identification system 1 provides negative feedback for the unsatisfied alignment condition to the user.
- the positive or negative feedback may be represented in different colors and provided to the user.
- the positive feedback may be represented in green and the negative feedback may be represented in red as shown in FIG. 6 .
- FIGS. 7 A and 7 B are diagrams showing the guide of the first alignment condition according to an embodiment of the present disclosure.
- the guide includes a first guide to aligning the face of the target companion animal with respect to the yaw axis.
- the first guide may be a guide to placing the nose between both eyes of the target companion animal.
- the first guide may be provided when the nose of the target companion animal is not projected onto the center of the connecting line between both eyes in the process of checking the first alignment condition.
- the view correction unit 300 provides the first guide by guiding a direction for placing the nose between both eyes to the user.
- the direction may be represented as a camera movement direction as shown in FIG. 7 A or a head movement direction as shown in FIG. 7 B .
- FIGS. 8 A and 8 B are diagrams showing the guide of the second alignment condition according to an embodiment of the present disclosure.
- when the face of the target companion animal satisfies the first alignment condition and the second alignment condition, the view correction unit 300 enables the image acquisition unit 100 to capture the target companion animal to acquire an image having a suitable face view, in response to the conditions being satisfied.
- the companion animal identification system 1 automatically captures an image and registers it in a database to provide the user with registration convenience.
- the alignment or mis-alignment with respect to the pitch/yaw axis may be determined in the preview image or the image, and the alignment with respect to the roll axis may be performed in the image acquired for recognition.
- the image alignment with respect to the roll axis does not cause image distortion such as warping.
- the view correction unit 300 may rotate the image at a predetermined angle for roll-wise alignment.
- the view correction unit 300 may calculate a connection vector including the first and second feature points corresponding to both eyes, calculate a rotation matrix T of the connection vector, and align the connection vector into a non-rotated state based on the calculated rotation matrix T of the connection vector.
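A minimal sketch of the roll-wise alignment, assuming OpenCV: the rotation is derived from the vector joining both eyes and the image is rotated back so that vector becomes horizontal.

```python
import math
import cv2
import numpy as np

def align_roll(image: np.ndarray, left_eye, right_eye) -> np.ndarray:
    """Rotate the image so that the vector joining both eyes becomes horizontal."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = math.degrees(math.atan2(ry - ly, rx - lx))   # roll angle of the eye-to-eye vector
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)          # rotate about the mid-eye point
    T = cv2.getRotationMatrix2D(center, angle, 1.0)      # 2x3 rotation matrix (the "T" above)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, T, (w, h))
```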
- the identification unit 500 is configured to extract the features from the image including the entire face region of the target companion animal and having the aligned face view, and identify the target companion animal based on the extracted features.
- the identification unit 500 may extract at least one sub-patch, including a first sub-patch, a second sub-patch and/or a third sub-patch, from the entire facial image.
- the first sub-patch includes a first sub-region including the first and second feature points corresponding to both eyes.
- the second sub-patch includes a second sub-region including the third feature point corresponding to the nose.
- the third sub-patch includes a third sub-region including at least one of the first feature point or the second feature point and the third feature point.
- the identification unit 500 may extract the sub-region and generate the sub-patch using the feature point information determined by the view correction unit 300 .
- the identification unit 500 may generate the first sub-patch including both eyes, the second sub-patch including the nose and the third sub-patch including both eyes and the nose from the entire facial image.
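A minimal sketch of sub-patch extraction from the aligned facial image, assuming numpy arrays for images; the margin around the feature points is an assumed value:

```python
import numpy as np

def crop_around(image: np.ndarray, points, margin: int = 30) -> np.ndarray:
    """Crop a rectangle that covers the given feature points plus a margin."""
    h, w = image.shape[:2]
    xs = [int(p[0]) for p in points]
    ys = [int(p[1]) for p in points]
    x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin, w)
    y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin, h)
    return image[y0:y1, x0:x1]

def extract_sub_patches(image, left_eye, right_eye, nose):
    return {
        "eyes": crop_around(image, [left_eye, right_eye]),              # first sub-patch
        "nose": crop_around(image, [nose]),                             # second sub-patch
        "eyes_nose": crop_around(image, [left_eye, right_eye, nose]),   # third sub-patch
    }
```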
- the third identifier ID: 3 is acquired using the matching results of the entire facial image, the first sub-patch and the third sub-patch, and the first identifier ID: 1 is acquired using the matching results of the second sub-patch.
- when the voting-based ensemble technique is used, the most frequently matched identifier is determined as the final identifier.
- the identification unit 500 identifies the target companion animal using the final identifier.
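A minimal sketch of the voting-based determination, reproducing the example above in which three inputs match identifier 3 and one matches identifier 1:

```python
from collections import Counter

def vote_final_identifier(matches_per_input):
    """matches_per_input maps each input (entire image or sub-patch) to its matched identifier."""
    return Counter(matches_per_input.values()).most_common(1)[0][0]

# Entire facial image, first and third sub-patches match ID 3; second sub-patch matches ID 1.
print(vote_final_identifier({"full": 3, "eyes": 3, "nose": 1, "eyes_nose": 3}))  # -> 3
```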
- the matching scores indicate the matching extent between the target companion animal and the identifier of the class.
- the matching scores may be calculated through a variety of geometric similarities (for example, cosine similarity, Euclidean similarity) or a classifier of a machine learning model (for example, a CNN classifier).
- the determination part is configured to calculate the final score of the target companion animal for a specific class based on the matching scores calculated for each patch.
- the weight in the weighted sum rule may be set based on the relative information amount included in the patch.
- for example, the patches from which the features are extracted are the entire facial image, the first sub-patch including both eyes, the second sub-patch including the nose and the third sub-patch including the eyes and the nose. Among these, the entire facial image includes all three feature points and has the widest patch region, and thus has the largest information amount.
- the second sub-patch has one feature point and the narrowest patch region, and thus has the smallest information amount.
- the determination part determines the final identifier of the target companion animal based on the calculated final score (or the set of final scores). For example, the identifier of the target companion animal is determined as an identifier of a class associated with the highest matching scores.
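A minimal sketch of score-level fusion with the weighted sum rule, assuming numpy; cosine similarity is one of the geometric similarities mentioned above, and the weights merely illustrate the relative-information-amount idea (they are not values from the patent):

```python
import numpy as np

# Assumed weights reflecting the relative information amount of each input.
WEIGHTS = {"full": 0.4, "eyes_nose": 0.3, "eyes": 0.2, "nose": 0.1}

def cosine_score(feature: np.ndarray, enrolled: np.ndarray) -> float:
    """Matching score between a probe feature vector and an enrolled class feature vector."""
    return float(np.dot(feature, enrolled) /
                 (np.linalg.norm(feature) * np.linalg.norm(enrolled)))

def final_score(probe_features: dict, enrolled_features: dict) -> float:
    """Weighted-sum fusion of per-input matching scores for one enrolled class."""
    return sum(WEIGHTS[name] * cosine_score(probe_features[name], enrolled_features[name])
               for name in WEIGHTS)
```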
- the companion animal identification system 1 may be implemented as a server and a client device disposed at a remote location.
- the client device includes the image acquisition unit 100
- the server includes the identification unit 500
- the view correction unit 300 may be included in the server or the client device.
- the client device may include a variety of computing devices including a camera module and a processor, for example, smart phones, tablets, etc.
- the companion animal identification method includes acquiring a preview image for capturing the face of a target companion animal (S 100 ).
- the preview image includes the face of the target companion animal.
- the step S 300 includes recognizing a species and/or physical features of the target companion animal (S 305 ).
- the step S 305 may be performed based on any of a plurality of images when the plurality of images is acquired for the same object. For example, when a plurality of image frames is captured in the preview image of the dog shown in FIG. 2 , dog breed and/or dolichocephalic/brachycephalic may be recognized based on any one of the plurality of image frames (S 305 ).
- the step S 300 further includes determining if it is the initially acquired image among the plurality of images for the same object (S 301 ). Then, the step S 305 is performed on the initially acquired image.
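A high-level sketch of the overall capture-and-identify flow of FIG. 10; each step is passed in as a callable so the sketch stays self-contained, the step numbers in the comments come from the description above, and everything else is an assumption:

```python
def identify_companion_animal(get_preview, analyze, is_aligned, show_guide, capture, identify):
    preview = get_preview()                 # S100: acquire a preview image of the face
    keypoints = analyze(preview)            # recognize species/head shape (S305, on the first frame per S301) and detect feature points
    if not is_aligned(preview, keypoints):  # first and second alignment conditions
        show_guide(keypoints)               # guide for the unsatisfied condition
        return None
    image = capture()                       # automatic capture once aligned
    return identify(image, keypoints)       # roll alignment, sub-patches, matching, final identifier
```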
- the operation by the system 1 and method for companion animal identification according to the embodiments as described above may be, at least in part, implemented in a computer program and recorded in a computer-readable recording medium.
- it may be implemented with a program product on the computer-readable medium including program code, and may be executed by the processor for performing any or all of the above-described steps, operations or processes.
- the computer may be a computing device such as a desktop computer, a laptop computer, a notebook computer, a smart phone, or the like, and may be any integrated device.
- the computer is a device having at least one processor (general-purpose or specialized), memory, storage and networking components (either wireless or wired).
- the computer may run, for example, Microsoft Windows-compatible operating systems (OSs), and OSs such as Apple OS X or iOS, Linux distribution, or Google's Android OS.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Tourism & Hospitality (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Primary Health Care (AREA)
- Child & Adolescent Psychology (AREA)
- General Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Economics (AREA)
- Marketing (AREA)
- Mathematical Physics (AREA)
- Human Resources & Organizations (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (19)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020200096258A KR102497805B1 (en) | 2020-07-31 | 2020-07-31 | System and method for companion animal identification based on artificial intelligence |
| KR10-2020-0096258 | 2020-07-31 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220036054A1 US20220036054A1 (en) | 2022-02-03 |
| US11847849B2 true US11847849B2 (en) | 2023-12-19 |
Family
ID=80004453
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/388,013 Active 2042-02-03 US11847849B2 (en) | 2020-07-31 | 2021-07-29 | System and method for companion animal identification based on artificial intelligence |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US11847849B2 (en) |
| KR (1) | KR102497805B1 (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP4322120A4 (en) * | 2021-06-28 | 2025-03-19 | Petnow Inc. | METHOD FOR PHOTOGRAPHING AN OBJECT TO IDENTIFY A PET, AND ELECTRONIC DEVICE |
| CN114639141B (en) * | 2022-02-23 | 2024-10-22 | 支付宝(杭州)信息技术有限公司 | Nose print image acquisition method, device, storage medium and electronic device |
| CN115546845B (en) * | 2022-11-24 | 2023-06-06 | 中国平安财产保险股份有限公司 | Multi-view cow face recognition method and device, computer equipment and storage medium |
Citations (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100795360B1 (en) | 2006-01-23 | 2008-01-17 | (주)씨아이피시스템 | How to judge yourself by face recognition |
| US20110317874A1 (en) * | 2009-02-19 | 2011-12-29 | Sony Computer Entertainment Inc. | Information Processing Device And Information Processing Method |
| US8396262B2 (en) * | 2007-04-04 | 2013-03-12 | Sony Corporation | Apparatus and method for face recognition and computer program |
| US8538044B2 (en) * | 2008-09-26 | 2013-09-17 | Panasonic Corporation | Line-of-sight direction determination device and line-of-sight direction determination method |
| KR20140138103A (en) | 2014-10-31 | 2014-12-03 | 주식회사 아이싸이랩 | Apparatus of Animal Recognition System using nose patterns |
| US20150131868A1 (en) * | 2013-11-14 | 2015-05-14 | VISAGE The Global Pet Recognition Company Inc. | System and method for matching an animal to existing animal profiles |
| US9762791B2 (en) * | 2014-11-07 | 2017-09-12 | Intel Corporation | Production of face images having preferred perspective angles |
| KR101788272B1 (en) | 2016-11-01 | 2017-10-19 | 오승호 | Animal Identification and Registration Method based on Biometrics |
| US20180082304A1 (en) * | 2016-09-21 | 2018-03-22 | PINN Technologies | System for user identification and authentication |
| KR20180070057A (en) | 2016-12-16 | 2018-06-26 | 주식회사 아렌네 | Method and system for matching lost companion animal and found companion animal and computer-readable recording medium having a program |
| US20180365858A1 (en) * | 2017-06-14 | 2018-12-20 | Hyundai Mobis Co., Ltd. | Calibration method and apparatus |
| KR102088206B1 (en) | 2019-08-05 | 2020-03-13 | 주식회사 블록펫 | Method for recognizing object based on image and apparatus for performing the method |
| CN110909618A (en) * | 2019-10-29 | 2020-03-24 | 泰康保险集团股份有限公司 | Pet identity recognition method and device |
| US20200104568A1 (en) | 2017-06-30 | 2020-04-02 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and Device for Face Image Processing, Storage Medium, and Electronic Device |
| KR20200041296A (en) | 2018-10-11 | 2020-04-21 | 주식회사 핏펫 | Computer program and theminal for providing individual animal information based on the facial and nose pattern imanges of the animal |
| KR20200043268A (en) | 2018-10-17 | 2020-04-27 | 김효식 | Method and apparatus for biometric identification of animals |
| KR20200068351A (en) | 2018-12-05 | 2020-06-15 | 주식회사 호윤 | System for identification of companion animals using biometric and identification method using the same |
| KR20200072853A (en) | 2018-12-13 | 2020-06-23 | 대한민국(농촌진흥청장) | An acquiring apparatus for biotelemetry of dogs |
| KR20200073888A (en) | 2018-12-15 | 2020-06-24 | 박지혜 | An artificial intelligence based image recognition for searching lost animal's notice |
| KR20200095356A (en) | 2019-01-31 | 2020-08-10 | 주식회사 스트라드비젼 | Method for recognizing face using multiple patch combination based on deep neural network with fault tolerance and fluctuation robustness in extreme situation |
| US20210049355A1 (en) * | 2019-08-16 | 2021-02-18 | Stephanie Sujin CHOI | Method for clustering and identifying animals based on the shapes, relative positions and other features of body parts |
| US20210232817A1 (en) * | 2018-10-12 | 2021-07-29 | Huawei Technologies Co., Ltd. | Image recognition method, apparatus, and system, and computing device |
2020
- 2020-07-31 KR KR1020200096258A patent/KR102497805B1/en active Active
2021
- 2021-07-29 US US17/388,013 patent/US11847849B2/en active Active
Patent Citations (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100795360B1 (en) | 2006-01-23 | 2008-01-17 | (주)씨아이피시스템 | How to judge yourself by face recognition |
| US8396262B2 (en) * | 2007-04-04 | 2013-03-12 | Sony Corporation | Apparatus and method for face recognition and computer program |
| US8538044B2 (en) * | 2008-09-26 | 2013-09-17 | Panasonic Corporation | Line-of-sight direction determination device and line-of-sight direction determination method |
| US20110317874A1 (en) * | 2009-02-19 | 2011-12-29 | Sony Computer Entertainment Inc. | Information Processing Device And Information Processing Method |
| US20150131868A1 (en) * | 2013-11-14 | 2015-05-14 | VISAGE The Global Pet Recognition Company Inc. | System and method for matching an animal to existing animal profiles |
| KR20140138103A (en) | 2014-10-31 | 2014-12-03 | 주식회사 아이싸이랩 | Apparatus of Animal Recognition System using nose patterns |
| US9762791B2 (en) * | 2014-11-07 | 2017-09-12 | Intel Corporation | Production of face images having preferred perspective angles |
| US20180082304A1 (en) * | 2016-09-21 | 2018-03-22 | PINN Technologies | System for user identification and authentication |
| KR101788272B1 (en) | 2016-11-01 | 2017-10-19 | 오승호 | Animal Identification and Registration Method based on Biometrics |
| KR20180070057A (en) | 2016-12-16 | 2018-06-26 | 주식회사 아렌네 | Method and system for matching lost companion animal and found companion animal and computer-readable recording medium having a program |
| US20180365858A1 (en) * | 2017-06-14 | 2018-12-20 | Hyundai Mobis Co., Ltd. | Calibration method and apparatus |
| US20200104568A1 (en) | 2017-06-30 | 2020-04-02 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and Device for Face Image Processing, Storage Medium, and Electronic Device |
| KR20200041296A (en) | 2018-10-11 | 2020-04-21 | 주식회사 핏펫 | Computer program and theminal for providing individual animal information based on the facial and nose pattern imanges of the animal |
| US20210232817A1 (en) * | 2018-10-12 | 2021-07-29 | Huawei Technologies Co., Ltd. | Image recognition method, apparatus, and system, and computing device |
| KR20200043268A (en) | 2018-10-17 | 2020-04-27 | 김효식 | Method and apparatus for biometric identification of animals |
| KR20200068351A (en) | 2018-12-05 | 2020-06-15 | 주식회사 호윤 | System for identification of companion animals using biometric and identification method using the same |
| KR20200072853A (en) | 2018-12-13 | 2020-06-23 | 대한민국(농촌진흥청장) | An acquiring apparatus for biotelemetry of dogs |
| KR20200073888A (en) | 2018-12-15 | 2020-06-24 | 박지혜 | An artificial intelligence based image recognition for searching lost animal's notice |
| KR20200095356A (en) | 2019-01-31 | 2020-08-10 | 주식회사 스트라드비젼 | Method for recognizing face using multiple patch combination based on deep neural network with fault tolerance and fluctuation robustness in extreme situation |
| KR102088206B1 (en) | 2019-08-05 | 2020-03-13 | 주식회사 블록펫 | Method for recognizing object based on image and apparatus for performing the method |
| US20210049355A1 (en) * | 2019-08-16 | 2021-02-18 | Stephanie Sujin CHOI | Method for clustering and identifying animals based on the shapes, relative positions and other features of body parts |
| CN110909618A (en) * | 2019-10-29 | 2020-03-24 | 泰康保险集团股份有限公司 | Pet identity recognition method and device |
Non-Patent Citations (5)
| Title |
|---|
| Christian Szegedy et al., "Going Deeper with Convolutions," 2015 IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1-9. |
| Jie Hu et al., "Squeeze-and-Excitation Networks," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141. |
| Kaiming He et al., "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778. |
| Karen Simonyan et al., "Very deep convolutional networks for large-scale image recognition," International Conference on Learning Representations, 2015, 14 pages. |
| Muneeb Ul Hassan, "VGG16—Convolutional Network for Classification and Detection," Neurohive, Nov. 2018, 6 pages (https://neurohive.io/en/popular-networks/vgg16/). |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220036054A1 (en) | 2022-02-03 |
| KR102497805B1 (en) | 2023-02-10 |
| KR20220015774A (en) | 2022-02-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11847849B2 (en) | System and method for companion animal identification based on artificial intelligence | |
| Kumar et al. | Real-time recognition of cattle using animal biometrics | |
| CN111753697B (en) | Intelligent pet management system and management method thereof | |
| US10445562B2 (en) | AU feature recognition method and device, and storage medium | |
| US12511944B2 (en) | Companion animal life management system and method therefor | |
| JP6692045B2 (en) | Authentication device, authentication system, authentication method, and program | |
| WO2019109526A1 (en) | Method and device for age recognition of face image, storage medium | |
| CN111191567A (en) | Identity data processing method and device, computer equipment and storage medium | |
| US12424016B2 (en) | System for identifying companion animal and method therefor | |
| Lu et al. | Algorithm for cattle identification based on locating key area | |
| US11126827B2 (en) | Method and system for image identification | |
| US12236701B2 (en) | Animal object identification apparatus based on image and method thereof | |
| JP2022048464A (en) | Nose print collation device, method and program | |
| Kawagoe et al. | Individual identification of cow using image processing techniques | |
| CN114698399A (en) | Face recognition method and device and readable storage medium | |
| CN111144378A (en) | Target object identification method and device | |
| CN110929555A (en) | Face recognition method and electronic device using same | |
| CN113239804A (en) | Image recognition method, readable storage medium, and image recognition system | |
| US20230337630A1 (en) | Systems and methods of individual animal identification | |
| KR20230104969A (en) | System and method for nose-based companion animal identification | |
| CN113780052A (en) | Lactation identification method, related device, equipment and medium | |
| CN114998575B (en) | Method and device for training and using target detection model | |
| JP7541609B1 (en) | Information processing system, program, and information processing method | |
| CN117894037A (en) | Method and computing device for identity authentication of cattle | |
| CN119671746A (en) | Livestock insurance claims method, electronic device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, IG JAE;HONG, YU-JIN;PARK, HYEONJUNG;AND OTHERS;REEL/FRAME:057015/0397 Effective date: 20210716 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |