WO2023095174A1 - A process for identification of a snouted animal - Google Patents

A process for identification of a snouted animal

Info

Publication number
WO2023095174A1
WO2023095174A1 (PCT/IN2022/051036)
Authority
WO
WIPO (PCT)
Prior art keywords
animal
image
snouted
server
video
Prior art date
Application number
PCT/IN2022/051036
Other languages
French (fr)
Inventor
Prasad Krishna DESAI
Sujit A. HUKKERIKAR
Original Assignee
Adis Technologies Pvt. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Adis Technologies Pvt. Ltd.
Publication of WO2023095174A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components


Abstract

The present disclosure envisages a process for identification of a snouted animal. The process comprises capturing, via an image capturing unit of a smart device, a video of a snout portion of the snouted animal; uploading, via the smart device, the video to a server; performing image processing of the video on the server; extracting a set of image frames from the video; detecting, at the server subsequent to the image processing, the identity of the snouted animal; and notifying, by the server to the smart device, the identity of the snouted animal.

Description

A PROCESS FOR IDENTIFICATION OF A SNOUTED ANIMAL
TECHNICAL FIELD
[0001] The present subject matter relates to the field of identification using image processing. In particular, the present subject matter relates to a process for identifying a snouted animal, i.e., an animal having a muzzle, such as, for example, cows, horses, donkeys, dogs, sheep, goats, and the like.
BACKGROUND
[0002] Cattle identification is an important requirement for animal husbandry. In today’s age of digitization, there is still a requirement for a foolproof and cost-effective animal tagging system that is reliable and easy to operate. Conventional animal tagging systems employ tags that are GPS-enabled and RFID-compatible. It is to be noted that one disadvantageous aspect of such conventional animal tagging systems is that they are very expensive. Another disadvantageous aspect is that the tags attached to the animal’s body may be easily lost. Yet another disadvantageous aspect of the conventional animal tagging systems is the requirement of specialized equipment that is compatible for operation with the aforementioned tags. Tagging the animals may also give rise to other problems such as duplicate tagging or tag exchanges, which is undesirable.
[0003] There is, therefore, felt a need for a process for identifying animals, specifically snouted or muzzle-faced animals, which does not require any special equipment. There is also felt a need for such a process that is cost-effective. There is also felt a need for such a process that takes into account a biometric criterion for identification of the animal instead of using tags, so that a unique identity associated with each animal is generated.
[0004] Therefore, there exists a previously unappreciated need for a new process for identifying animals that facilitates the functionalities mentioned above and addresses the shortcomings of the prior art. It is to these ends that the present invention has been developed.
SUMMARY
[0005] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0006] Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
[0007] The present disclosure envisages a process for identification of a snouted animal. The process comprises capturing, via an image capturing unit of a smart device, at least one image of a snout portion of the snouted animal; uploading, via the smart device, the at least one image to a server; performing image processing of the at least one image on the server; detecting, at the server subsequent to the image processing, the identity of the snouted animal; and notifying, by the server to the smart device, the identity of the snouted animal.
[0008] In one embodiment, the step of performing image processing further comprises identifying the snout portion in the at least one image; cropping the snout portion of the at least one image; deleting, from cropped images of the at least one image, blurred images of the snout portion; performing edge detection of the remaining cropped images of the at least one image; and detecting, via a machine learning module of the server, the animal based on the edge detection of the remaining cropped images of the at least one image.
[0009] In another embodiment, the edge detection is performed by the server in accordance with the Canny edge detection method.
[0010] In yet another embodiment, the smart device is a smart phone or a tablet.
[0011] These and other features and advantages of the present subject matter will become more readily apparent from the attached drawings and the detailed description of the preferred embodiments, which follow.
BRIEF DESCRIPTION OF DRAWING
[0012] The present subject matter is hereinafter described with reference to the non-limiting accompanying drawings, in which:
[0013] FIG. 1 illustrates a block diagram depicting the process of collecting cattle identity data, in accordance with an embodiment of the present disclosure.
[0014] FIG. 2 illustrates a block diagram depicting a process for identifying cattle, in accordance with an embodiment of the present disclosure.
[0015] Fig. 3 illustrates an exemplary muzzle pattern, in accordance with an embodiment of the present disclosure.
[0016] FIG. 4 illustrates that the smaller the kernel, the less visible the blur, in accordance with an embodiment of the present disclosure.
[0017] FIG. 5 illustrates edge detection of the images, in accordance with an embodiment of the present disclosure.
[0018] Fig. 6 shows an image depicting that some pixels seem to be brighter than others, in accordance with an embodiment of the present disclosure.
[0019] Fig. 7 illustrates an image with only two pixel intensity values, in accordance with an embodiment of the present disclosure.
[0020] FIG. 8 illustrates the results of the hysteresis process, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0021] The present disclosure envisages a process of identifying any kind of snouted animal. The process envisaged in the present disclosure does not require expensive and specially designed hardware, and can be performed using commonly used smart devices, such as a smart phone, a tablet, or a laptop.
[0022] Referring to FIG. 1, a process 100 for collecting cattle identity data, in accordance with the present disclosure, is illustrated. The process 100 comprises capturing one or more images of a cow, which is a snouted animal. It should be noted that the process 100 may be used to collect identity data of any kind of snouted animal, including but not limited to cows, buffaloes, pigs, goats, dogs, sheep, camels, and the like. The process 100 includes capturing 102 the image of the snouted animal. In one embodiment, the step of capturing 102 may include capturing the image of the snout portion of the animal or cattle. In another embodiment, the step of capturing 102 may include capturing the image of the entire body of the animal or cattle. In one example, the image of the snout portion of the snouted animal may be used to collect identity data of the snouted animal, while the images of the entire body of the snouted animal or cattle may be used in identifying the breed of the snouted animal.
[0023] The process 100 further comprises uploading 104 the captured images or video to the server 106. The server 106 may be a remote server, and the uploading may be performed by the user using the same smart device that was used for the capturing. In one embodiment, the uploading may be done using a smartphone. In another embodiment, the uploading may be done using a tablet. The process of uploading may be performed over the internet.
[0024] The process 100 further comprises the steps associated with image processing of the video captured in the step of capturing 102. The processing of the captured video is performed by the server 106 after the video has been uploaded to the server 106. In one embodiment, image frames are extracted from the video at step 107, and a set of extracted image frames is processed thereafter. In one embodiment, the step of image processing comprises identifying or detecting 108 the snout portion in the at least one image. More specifically, the snout portion of the animal has a muzzle pattern formed thereon that is unique to each animal. An exemplary muzzle pattern 302 is depicted in FIG. 3.
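By way of a minimal sketch of the frame-extraction step 107 (assuming OpenCV in Python; the function name extract_frames and the frame cap are illustrative, the 30-to-50-frame range being suggested by claim 2):

    import cv2

    def extract_frames(video_path, max_frames=50):
        # Pull up to max_frames image frames from the uploaded video (step 107)
        cap = cv2.VideoCapture(video_path)
        frames = []
        while len(frames) < max_frames:
            ok, frame = cap.read()
            if not ok:  # end of the video reached
                break
            frames.append(frame)
        cap.release()
        return frames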
[0025] In one embodiment, the step of image processing further comprises cropping 110 the snout portion of the at least one image. The operation of cropping may be performed by the server 106, in accordance with one embodiment of the present subject matter. More specifically, the server 106 may be trained to crop the captured images to obtain a focused view of the snout portion of the animal, as shown exemplarily in FIG. 3. Furthermore, cropping 110 is performed on all the captured images, in accordance with one embodiment, to obtain numerous focused views of the snout portion of the animal.
[0026] In one embodiment, the step of image processing further comprises deletion 112 of the blurred images from the cropped images obtained in the above step. In this step, all the blurred images that cannot be used for identification purposes are deleted by the server. More specifically, all the images in which a clear muzzle pattern of the snouted animal cannot be observed are deleted, in accordance with one embodiment of the present subject matter.
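The disclosure does not specify how blur is measured; one common heuristic, shown here purely as an assumed sketch, is to threshold the variance of the Laplacian (the function name and threshold value are illustrative):

    import cv2

    def is_blurred(image, threshold=100.0):
        # Assumed heuristic, not the disclosed method: a low Laplacian
        # variance indicates few sharp edges, i.e. a likely blurred image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold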
[0027] In one embodiment, the step of image processing further comprises the step of edge detection 114 of the remaining images, wherein the term remaining images refers to the cropped images of the snout portion of the animal being identified. The term edge detection as used herein refers to building an algorithm that can sketch the edges of any object present in a picture, using the Canny edge detection algorithm. The steps employed by the Canny edge detection are explained below:
[0028] → test_image = np.array(test_image)
♦ An image’s x1, x2, y1, y2 values (tuple data type) are converted into an array. This gives the array value.
♦ x1, x2, y1, y2 are the crop points found by our muzzle detection model. This value is of tuple data type; it is converted into an array using the numpy.array() method, and that array value is passed to the cv2.Sobel() method.
[0029] → sobalx = cv2.Sobel(test_image, cv2.CV_64F, 1, 0)
♦ Using the cv2.Sobel() function we can find image gradients, edges, etc. It takes four attributes:
• test_image: the image converted into an array by the numpy.array() method
• cv2.CV_64F: the output datatype; to detect edges reliably, it is better to keep the output data type in a higher form, such as cv2.CV_16S or cv2.CV_64F
• 1 and 0 are the derivative orders in the X direction and Y direction, respectively
• This function gives its output as float array values (e.g. [1.0, 20.2, 33.3])
[0030] → sobalx = np.uint8(np.absolute(sobalx))
♦ This converts all values to their absolute values and casts them to unsigned integers:
• np.uint8 is a numpy data type, an 8-bit unsigned integer
• np.absolute() gives the absolute value of any floating-point value (e.g. -2.2 becomes 2.2; the subsequent np.uint8 cast truncates it to 2)
[0031] → kernel = np.ones((5,5), np.float32)/25
♦ For each pixel, a 5x5 window is centered on that pixel; all pixels falling within this window are summed up, and the result is then divided by 25. This equates to computing the average of the pixel values inside that window. This operation is performed for all the pixels in the image to produce the output filtered image.
[0032] → dst = cv2.filter2D(sobalx, -1, kernel)
♦ dst gives us a softened image; -1 is the depth, meaning the output keeps the same depth as the input. It blurs the image.
[0033] → new_array = cv2.resize(dst, (IMG_SIZE, IMG_SIZE))
♦ The cv2.resize() method resizes the image
♦ IMG_SIZE is the size of the image, that is, its height and width
[0034] → edges = cv2.Canny(new_array, 100, 200)
♦ The first argument is our input image. The second and third arguments are our minVal and maxVal respectively.
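Pieced together, the fragments above correspond to a pipeline along the following lines (a minimal sketch assuming OpenCV and NumPy; the IMG_SIZE value, the grayscale conversion, and the function name muzzle_edges are illustrative assumptions):

    import cv2
    import numpy as np

    IMG_SIZE = 224  # illustrative; the disclosure does not fix a value

    def muzzle_edges(image, crop_points):
        # Crop the muzzle region found by the detection model ([0028]);
        # the grayscale conversion assumes a BGR input image
        x1, x2, y1, y2 = np.array(crop_points)
        muzzle = cv2.cvtColor(image[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
        # Horizontal Sobel gradient in a high-precision dtype ([0029], [0030])
        sobalx = cv2.Sobel(muzzle, cv2.CV_64F, 1, 0)
        sobalx = np.uint8(np.absolute(sobalx))
        # 5x5 averaging filter; depth -1 keeps the source depth ([0031], [0032])
        kernel = np.ones((5, 5), np.float32) / 25
        dst = cv2.filter2D(sobalx, -1, kernel)
        # Resize and run Canny with minVal=100, maxVal=200 ([0033], [0034])
        new_array = cv2.resize(dst, (IMG_SIZE, IMG_SIZE))
        return cv2.Canny(new_array, 100, 200)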
[0035] Some advantageous aspects of using the Canny edge detection algorithm are listed below:
> Good detection: the optimal detector must minimize the probability of false positives as well as false negatives, which is achieved using the Canny edge detection algorithm.
> Good localization: the edges detected are as close as possible to the true edges.
> Single response constraint: the detector returns one point only for each edge point.
> Canny found a linear, continuous filter.
> There is no closed form for the optimal filter.
> However, it looks very similar to the derivative of a Gaussian.
> Practical issues for edge detection:
■ Thinning and linking
■ Choosing a magnitude threshold
[0036] The steps are explained in more detail herein.
[0037] The first step is noise reduction, which includes:
• Smoothing the image with a Gaussian filter.
• One way to get rid of the noise in the image is to apply a Gaussian blur to smooth it. To do so, the image convolution technique is applied with a Gaussian kernel (3x3, 5x5, 7x7, etc.). The kernel size depends on the expected blurring effect: basically, the smaller the kernel, the less visible the blur. In our example, we will use a 5 by 5 Gaussian kernel.
• The equation for a Gaussian filter kernel of size (2k+1)×(2k+1) is given by:

  H_{ij} = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{(i-(k+1))^2 + (j-(k+1))^2}{2\sigma^2}\right), \quad 1 \le i, j \le (2k+1)
• As shown in FIG. 4, the smaller the kernel, the less visible the blur.
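A minimal sketch of such a kernel, assuming NumPy (the helper name gaussian_kernel is illustrative):

    import numpy as np

    def gaussian_kernel(size=5, sigma=1.0):
        # Build a (2k+1)x(2k+1) kernel per the equation above
        k = size // 2
        i, j = np.mgrid[1:size + 1, 1:size + 1]
        return (1.0 / (2.0 * np.pi * sigma ** 2)) * np.exp(
            -((i - (k + 1)) ** 2 + (j - (k + 1)) ** 2) / (2.0 * sigma ** 2))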
[0038] The second step is gradient calculation, which includes:
a. Edges correspond to a change of pixels’ intensity. To detect it, the easiest way is to apply filters that highlight this intensity change in both directions: horizontal (x) and vertical (y).
b. When the image is smoothed, the derivatives I_x and I_y w.r.t. x and y are calculated. This can be implemented by convolving the smoothed image I with Sobel kernels K_x and K_y, respectively:

  K_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad K_y = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}

c. These are the Sobel filters for both directions (horizontal and vertical).
d. Then, the magnitude G and the slope \theta of the gradient are calculated as follows:

  |G| = \sqrt{I_x^2 + I_y^2}, \quad \theta = \arctan\left(\frac{I_y}{I_x}\right)

e. As shown in FIG. 5, the result is almost the expected one, but we can see that some of the edges are thick and others are thin. The non-max suppression step will help us mitigate the thick ones.
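A minimal sketch of this step, assuming NumPy and SciPy (the helper name gradients is illustrative):

    import numpy as np
    from scipy import ndimage

    def gradients(smoothed):
        # Convolve with the Sobel kernels Kx and Ky shown above
        Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
        Ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], np.float32)
        img = smoothed.astype(np.float32)
        Ix = ndimage.convolve(img, Kx)
        Iy = ndimage.convolve(img, Ky)
        G = np.hypot(Ix, Iy)           # gradient magnitude |G|
        G = G / G.max() * 255          # normalise to 0..255
        theta = np.arctan2(Iy, Ix)     # gradient direction
        return G, theta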
[0039] The third step includes non-maximum suppression, which includes:
• Ideally, the final image should have thin edges. Thus, we must perform non-maximum suppression to thin out the edges.
• The principle is simple: the algorithm goes through all the points on the gradient intensity matrix and finds the pixels with the maximum value in the edge directions.
• The result is the same image with thinner edges. We can, however, still notice some variation regarding the intensity: some pixels seem to be brighter than others, and we will try to cover this shortcoming with the two final steps, as shown in FIG. 6.
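A minimal sketch of non-maximum suppression, assuming NumPy (the sector boundaries follow the usual 0/45/90/135-degree quantization; the helper name is illustrative):

    import numpy as np

    def non_max_suppression(G, theta):
        # Keep only pixels that are local maxima along the gradient direction
        M, N = G.shape
        out = np.zeros((M, N), dtype=np.float32)
        angle = np.rad2deg(theta) % 180
        for i in range(1, M - 1):
            for j in range(1, N - 1):
                if angle[i, j] < 22.5 or angle[i, j] >= 157.5:   # ~0 degrees
                    q, r = G[i, j + 1], G[i, j - 1]
                elif angle[i, j] < 67.5:                         # ~45 degrees
                    q, r = G[i + 1, j - 1], G[i - 1, j + 1]
                elif angle[i, j] < 112.5:                        # ~90 degrees
                    q, r = G[i + 1, j], G[i - 1, j]
                else:                                            # ~135 degrees
                    q, r = G[i - 1, j - 1], G[i + 1, j + 1]
                if G[i, j] >= q and G[i, j] >= r:
                    out[i, j] = G[i, j]
        return out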
[0040] The fourth step includes the double threshold. The double threshold step aims at identifying three kinds of pixels: strong, weak, and non-relevant:
• Strong pixels are pixels that have an intensity so high that we are sure they contribute to the final edge.
• Weak pixels are pixels that have an intensity value that is not enough to be considered strong, but not small enough to be considered non-relevant for the edge detection.
• Other pixels are considered non-relevant for the edge.
• Now you can see what the double thresholds are for:
• The high threshold is used to identify the strong pixels (intensity higher than the high threshold).
• The low threshold is used to identify the non-relevant pixels (intensity lower than the low threshold).
• All pixels having intensity between both thresholds are flagged as weak, and the hysteresis mechanism (next step) will help us identify the ones that can be considered strong and the ones that are considered non-relevant.
• The result of this step is an image with only two pixel intensity values (strong and weak), as shown in FIG. 7.
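A minimal sketch of the double threshold, assuming NumPy (the ratio values and the strong/weak marker intensities are illustrative; the disclosure does not fix them):

    import numpy as np

    STRONG, WEAK = 255, 25  # illustrative marker intensities

    def double_threshold(img, low_ratio=0.05, high_ratio=0.09):
        # Classify pixels as strong, weak, or non-relevant (zero)
        high = img.max() * high_ratio
        low = high * low_ratio
        out = np.zeros_like(img, dtype=np.uint8)
        out[img >= high] = STRONG                  # strong pixels
        out[(img >= low) & (img < high)] = WEAK    # weak pixels
        return out                                 # the rest stay non-relevant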
[0041] The fifth step includes edge tracking by hysteresis. Based on the threshold results, the hysteresis consists of transforming weak pixels into strong ones if and only if at least one of the pixels around the one being processed is a strong one. FIG. 8 illustrates the results of the hysteresis process.
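A minimal sketch of this hysteresis step, assuming NumPy and the strong/weak markers from the previous sketch:

    import numpy as np

    def hysteresis(img, weak=25, strong=255):
        # Promote a weak pixel if any of its 8 neighbours is strong;
        # otherwise suppress it
        M, N = img.shape
        for i in range(1, M - 1):
            for j in range(1, N - 1):
                if img[i, j] == weak:
                    if (img[i - 1:i + 2, j - 1:j + 2] == strong).any():
                        img[i, j] = strong
                    else:
                        img[i, j] = 0
        return img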
[0042] Referring back to FIG. 1, the step of image processing further comprises training the server 106 on the images on which the unique muzzle patterns have been identified. More specifically, in this step, unique identity tokens are assigned to the images on which the unique muzzle pattern has been identified, subsequent to which the images are then detected based on the identity tokens in step 118.
[0043] The process 100 explained above is the process for assigning identity to a snouted animal based on the unique identity patterns. The present disclosure further envisages a process 200 for identifying the snouted animal. The process 200 may be brought into operation once a database of the uniquely identified snouted animals has been generated and stored at the server 106. The process 100 and process 200 are largely similar and employ similar steps. As such, like reference numerals are used to denote like process steps for the sake of simplicity and easy readability.
[0044] Referring to FIG. 2, a process 200 for identifying a snouted animal, in accordance with the present disclosure, is illustrated. The process 200 comprises capturing 102 a video of a cow, which is a snouted animal. It should be noted that the process 200 may be used to identify any kind of snouted animal, including but not limited to cows, buffaloes, pigs, goats, dogs, sheep, camels, and the like. The process 200 includes capturing 102 the video of the snouted animal. In one embodiment, the step of capturing 102 may include capturing the video of the snout portion of the animal or cattle. In another embodiment, the step of capturing 102 may include capturing the video of the entire body of the animal or cattle. In one example, the video of the snout portion of the snouted animal is used to identify the snouted animal, while the video of the entire body of the snouted animal or cattle may be used in identifying the breed of the snouted animal.
[0045] The process 200 further comprises uploading 104 the captured video to the server 106. The server 106 may be a remote server, and the uploading of the captured video to the server 106 may be performed by the user using the same smart device that was used to capture the video. In one embodiment, the uploading may be done using a smartphone. In another embodiment, the uploading may be done using a tablet. The process of uploading may be performed over the internet.
[0046] The process 200 further comprises the steps associated with image processing of the video captured in the step of capturing 102. The processing of the captured video is performed by the server 106 after the video has been uploaded to the server 106. In one embodiment, image frames are extracted from the video at step 107, and a set of extracted image frames is processed thereafter. In one embodiment, the step of image processing comprises identifying or detecting 108 the snout portion or muzzle in the at least one image. More specifically, the snout portion of the animal has a muzzle pattern formed thereon that is unique to each animal. An exemplary muzzle pattern 302 is depicted in FIG. 3.
[0047] In one embodiment, the step of image processing further comprises cropping 110 the snout portion of the at least one image. The operation of cropping may be performed by the server 106, in accordance with one embodiment of the present subject matter. More specifically, the server 106 may be trained to crop the captured images to obtain a focused view of the snout portion of the animal, as shown exemplarily in FIG. 3. Furthermore, cropping 110 is performed on all the captured images, in accordance with one embodiment, to obtain numerous focused views of the snout portion of the animal.
[0048] In one embodiment, the step of image processing further comprises checking 202 for the presence of blurred images among the cropped images obtained in the above step. If blurred images are present, the user is prompted to recapture clearer images of the snouted animal.
[0049] In one embodiment, the step of image processing further comprises the step of edge detection 114 of the cropped images. The term edge detection as used herein refers to building an algorithm that can sketch the edges of any object present in a picture, using the Canny edge detection algorithm. The implementation of the Canny edge detection algorithm has been described above and is not repeated here for the sake of brevity.
[0050] Subsequent to the edge detection 114, the process 200 includes the step of checking 204 the processed images for the purpose of identification of the snouted animal. More specifically, the checking 204 includes matching the processed image against the images present in the database at the server 106. If a match is found, the identity information is displayed on the smart device that the user is using. If a match is not found, the user may be prompted to assign a new identity token to the processed image, since the absence of a match may indicate that the animal is new and unregistered at the server 106.
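The disclosure leaves the matching mechanism to the server’s machine learning module; purely as an illustrative sketch, one simple realization scores a query edge map against the stored edge maps (the function name, the database layout, and the 0.8 cut-off are assumptions, not the disclosed method):

    import cv2

    def best_match(edge_map, database):
        # database: {identity_token: stored_edge_map}, all maps of equal size
        best_token, best_score = None, 0.0
        for token, stored in database.items():
            score = cv2.matchTemplate(edge_map, stored,
                                      cv2.TM_CCOEFF_NORMED).max()
            if score > best_score:
                best_token, best_score = token, score
        return best_token if best_score > 0.8 else None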
[0051] In the present specification the word "comprise", or variations thereof, such as "comprises" or "comprising", imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
[0052] Further, the use of the expression "at least" or "at least one" suggests the use of one or more elements or ingredients or quantities, as the use can be in the embodiment of the present subject matter to achieve one or more of the desired objects or results.
[0053] Different characteristics and beneficial particulars are unfolded fully with reference to the embodiments/aspects which are exemplified in the accompanying drawing and detailed in the preceding description. Descriptions of techniques, methods, components, and equipment that a person skilled in the art is well aware of, or that form common general knowledge in the field pertaining to the present subject matter, are not described and/or introduced, for the purpose of focusing on the present subject matter and so as not to obscure the present subject matter and its advantageous features. At the same time, the present subject matter and its features that are explained herein in the detailed description and the specific examples are given by way of illustration only, and not by way of limitation. It is to be understood that a person skilled in the art may and can think of various alternative substitutions, modifications, additions, and/or rearrangements which are considered to be within the spirit and/or scope of the underlying inventive concept.

Claims

We claim:
1. A process for identification of a snouted animal, the process comprising:
capturing a video 102 of a snout portion of the snouted animal using a video capturing unit of a smart device;
uploading 104, via the smart device, the video to a server;
performing 106 image processing of the video on the server;
extracting 107 at least a set of image frames from the captured video of the snout portion of the snouted animal;
detecting 108, at the server subsequent to the image processing, the identity of the snouted animal; and
notifying, by the server to the smart device, the identity of the snouted animal.
2. The process as claimed in claim 1, wherein the set of image frames may include a range of 30 to 50 image frames.
3. The process as claimed in claim 1, wherein the step of performing image processing further comprises: extracting 107 a set of image frames from the video; detecting 108 the snout portion in at least one image of the set; cropping 110 the snout portion of the at least one image; deleting 112, from cropped images of the at least one image, blurred images of the snout portion; performing 114 edge detection of the remaining cropped images of the at least one image; and detecting 116, via a machine learning module of the server, the animal based on the edge detection of the remaining cropped images of the at least one image.
4. The process as claimed in claim 3, wherein the edge detection is performed by the server in accordance with the CANNY edge detection method.
PCT/IN2022/051036 2021-11-28 2022-11-28 A process for identification of a snouted animal WO2023095174A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202141055002 2021-11-28
IN202141055002 2021-11-28

Publications (1)

Publication Number Publication Date
WO2023095174A1 (en) 2023-06-01

Family

ID=86539016

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2022/051036 WO2023095174A1 (en) 2021-11-28 2022-11-28 A process for identification of a snouted animal

Country Status (1)

Country Link
WO (1) WO2023095174A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3029603A2 (en) * 2013-05-22 2016-06-08 Iscilab Corporation Device and method for recognizing animal's identity by using animal nose prints
US20210089763A1 (en) * 2019-09-25 2021-03-25 Pal Universe, Inc. Animal identification based on unique nose patterns


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22898135

Country of ref document: EP

Kind code of ref document: A1