CN115546830A - Missing dog searching method, device and equipment - Google Patents

Missing dog searching method, device and equipment

Info

Publication number
CN115546830A
Authority
CN
China
Prior art keywords
dog
identified
lost
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211211425.8A
Other languages
Chinese (zh)
Inventor
宋程
刘保国
胡金有
吴浩
梁开岩
郭玮鹏
李海
巩京京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingchong Kingdom Beijing Technology Co ltd
Original Assignee
Xingchong Kingdom Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingchong Kingdom Beijing Technology Co ltd filed Critical Xingchong Kingdom Beijing Technology Co ltd
Priority to CN202211211425.8A
Publication of CN115546830A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of dog management, in particular to a management method for a lost dog, and specifically to a method, a device and equipment for finding a lost dog. According to the embodiments of the application, a search strategy is configured so that the region where the lost dog is most likely to appear can be obtained, reducing the cost of the search. A configured first detection model obtains the breed information of the dogs to be identified, and a second detection model obtains the features of the several dogs sharing that breed; the similarity between each dog to be identified and the lost dog is obtained by comparing these features, and the relationship between them is determined from a preset similarity threshold and the best similarity, so that the lost dog is determined among the plurality of dogs to be identified. The detection models are constructed as convolutional neural networks, which improves recognition accuracy and speed, making the result more accurate and obtained faster.

Description

Missing dog searching method, device and equipment
Technical Field
The application relates to the technical field of dog management, in particular to a management method for a lost dog, and specifically to a method, a device and equipment for finding a lost dog.
Background
With the development of China's economy and the improvement of material living standards, people's spiritual and cultural needs have grown rapidly, and keeping a pet has become one expression of that pursuit. Survey data show that, both at home and abroad, the most commonly kept pet is the dog. Dogs have a natural affinity with humans: they bring people happiness and can protect their owners at critical moments. Yet reports of lost dogs and appeals for help finding them are heard constantly in daily life. Losing a dog is a heavy blow to its owner, who may spend a great deal of time and energy searching with only a small chance of success; moreover, a lost dog is likely to become a roadside stray, threatening the daily life, traffic, sanitation and personal safety of citizens.
Therefore, lost dogs need to be managed intelligently, and the search should target the lost dog specifically. In the prior art, the main search method is to fit the dog with an ID identification device containing a GPS positioning chip and to locate the dog from the GPS signal. In practice, however, few dogs wear such an ID device, and a lost dog that is not equipped with one cannot be identified.
To solve this problem, it is necessary to provide a lost dog finding method that does not require configuring peripheral equipment on the dog.
Disclosure of Invention
In order to solve the above technical problems, the application provides a lost dog finding method, device and equipment, which compare the image information of the lost dog with images of a plurality of dogs to be identified in an outdoor environment by computer technology, and find the lost dog based on the comparison result.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
in a first aspect, a method for finding a lost dog comprises the following steps: acquiring basic information of a lost dog; configuring a corresponding search strategy based on the basic information, wherein the search strategy is used for determining the search range of the lost dog; determining the image data of the lost dog as a target image based on the basic information; and acquiring a plurality of pieces of dog image data within the search range as a plurality of dog images to be identified, and comparing the plurality of dog images to be identified based on a preset dog detection model to obtain a comparison result. The dog detection model comprises a first detection model and a second detection model; the first detection model is used for identifying the breeds of the dogs in the plurality of dog images to be identified, and the second detection model is used for identifying the features of the dogs in those images. Acquiring the image data of a plurality of dogs within the search range comprises the following steps: acquiring a plurality of targets to be detected within the search range based on video frames arranged in time order; acquiring the pixel coordinates of the video frames corresponding to the plurality of targets to be detected; converting the pixel coordinates into world coordinates of the targets to be detected based on the internal and external parameters of the image acquisition device; and comparing the world coordinates with the basic information of the lost dog, and screening target frames in the video frames corresponding to the plurality of targets to be detected based on a comparison threshold, wherein a target frame is the image data of a dog, and the basic information of the dog comprises the dog's length and height. Comparing the plurality of dog images to be identified based on the preset dog detection model to obtain a comparison result specifically comprises the following steps: identifying the plurality of dog images to be identified based on the first detection model to obtain the breed information of the dogs in those images, and comparing the breed information with the preset breed information in the basic information of the lost dog to obtain a comparison result; and determining a second dog image to be identified based on that comparison result, and comparing it with the lost dog based on the second detection model to obtain the final comparison result.
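The pixel-to-world conversion and size screening described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the camera intrinsics (fx, fy, cx, cy), the availability of a per-target depth, and the 30% size tolerance are all assumptions introduced for the example.

```python
def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, t):
    """Back-project a pixel (u, v) at a known depth into world coordinates.

    fx, fy, cx, cy are camera intrinsics; R (3x3 nested list) and t
    (3-list) are the extrinsics mapping camera coordinates to world
    coordinates. All of these are assumed to come from calibration.
    """
    # Pixel -> normalized camera coordinates, scaled by depth.
    xc = (u - cx) / fx * depth
    yc = (v - cy) / fy * depth
    cam = (xc, yc, depth)
    # Camera -> world: X_w = R @ X_c + t.
    return tuple(sum(R[i][j] * cam[j] for j in range(3)) + t[i] for i in range(3))

def plausible_dog(box_w_m, box_h_m, dog_len_m, dog_h_m, tol=0.3):
    """Screen a target frame: keep it only if its metric width and height
    are within a fractional tolerance of the lost dog's recorded length
    and height (the comparison threshold of the claim)."""
    return (abs(box_w_m - dog_len_m) <= tol * dog_len_m
            and abs(box_h_m - dog_h_m) <= tol * dog_h_m)
```

A camera looking straight at the scene (identity rotation, zero translation) maps the principal point at depth 2 m to world point (0, 0, 2), which gives a quick sanity check of the conversion.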
In a first implementation manner of the first aspect, determining a second dog image to be identified based on the comparison result, and comparing it with the lost dog image based on the second detection model to obtain the comparison result, comprises the following steps: performing feature extraction on the second dog image to be identified based on the second detection model to obtain features to be identified; comparing the features to be identified with the features of the lost dog to obtain a similarity; and comparing the similarity with a preset similarity threshold to obtain the comparison result, specifically: when the similarity is not smaller than the similarity threshold, determining that the second image to be identified is a target image.
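The feature comparison step can be illustrated with cosine similarity, a common proximity measure in feature spaces. The patent does not name the measure or the threshold value; both the cosine choice and the 0.85 default are assumptions made for this sketch.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors (assumed non-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def compare(feat_candidate, feat_lost, threshold=0.85):
    """Return (similarity, passed) where `passed` means the candidate's
    similarity to the lost dog is not smaller than the threshold."""
    sim = cosine_similarity(feat_candidate, feat_lost)
    return sim, sim >= threshold
```

Identical feature vectors give a similarity of 1.0; orthogonal ones give 0.0 and fail the threshold.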
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, determining that the second image to be identified is a target image when the similarity is not smaller than the similarity threshold comprises the following steps: when a plurality of similarities are not smaller than the similarity threshold, determining the maximum among them as the final similarity, and determining the corresponding second image to be identified as the target image based on that final similarity.
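The threshold-then-argmax selection just described is a small piece of logic that can be sketched directly; the data shapes here (id/similarity pairs) are illustrative, not from the patent.

```python
def pick_target(candidates, threshold):
    """candidates: iterable of (image_id, similarity). Keep the ones that
    meet the threshold and return the id with the maximum (final)
    similarity, or None when no candidate qualifies."""
    hits = [(sim, img) for img, sim in candidates if sim >= threshold]
    if not hits:
        return None
    best_sim, best_img = max(hits)
    return best_img
```

With candidates at 0.7, 0.92 and 0.9 against a 0.85 threshold, only the last two qualify and the 0.92 image is chosen as the target.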
In a third implementation manner of the first aspect, acquiring basic information of a lost dog, and configuring a corresponding search strategy based on the basic information includes the following steps: acquiring time information and corresponding position information of the lost dog in the basic information; and determining a first searching range of the lost dog based on the time information and the corresponding position information.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner, the method further includes: when the information of the dog to be identified in the first searching range cannot be matched with the information of the lost dog, extending the first searching range by taking the position information of the lost dog as a circle center to obtain a second searching range; the extension of the first search range comprises the following method: obtaining historical movement data of the lost dog, and obtaining a movement range based on the historical movement data and time information in basic information, wherein the difference value between the movement range and the first searching range is the second searching range.
In a fifth implementation manner of the first aspect, identifying the plurality of dog images to be identified based on the first detection model to obtain the breed information of the dogs comprises the following steps: extracting a plurality of first features from the dog images to be identified based on the first detection model, and comparing the first features with a preset breed feature database to obtain the breed information of the dogs to be identified, which specifically comprises: acquiring face image information of a dog to be identified; acquiring a target feature map based on the face image information; acquiring a plurality of target detection points in the target feature map; and acquiring the coordinate parameters of the plurality of target detection points and, from them, the relative distances of the detection points. Comparing the relative distances of the plurality of target detection points with the relative distances of the corresponding detection points in a preset dog breed database to obtain the breed information of the dog to be identified specifically comprises: comparing the relative distance of each target detection point with the relative distance of the corresponding detection point in the preset dog breed database to obtain a similarity for each relative distance; fusing the similarities of the relative distances of the plurality of target detection points to obtain a final similarity; and determining the breed information of the dog to be identified based on the final similarity.
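The keypoint-distance matching above can be sketched as follows. The patent specifies neither the normalization nor the fusion rule; normalizing by the largest pairwise distance (for scale invariance) and fusing by averaging are assumptions of this sketch, and the keypoints are assumed distinct.

```python
import math

def relative_distances(points):
    """Pairwise distances between facial keypoints, normalized by the
    largest distance so the measure is scale-invariant."""
    d = [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    m = max(d)
    return [x / m for x in d]

def fused_similarity(dists, ref_dists):
    """Per-distance similarity in [0, 1], fused here by simple averaging."""
    sims = [1.0 - abs(a - b) / max(a, b) for a, b in zip(dists, ref_dists)]
    return sum(sims) / len(sims)

def identify_breed(points, breed_db):
    """breed_db maps breed name -> reference keypoint list; return the
    breed whose reference distances give the highest fused similarity."""
    d = relative_distances(points)
    return max(breed_db,
               key=lambda b: fused_similarity(d, relative_distances(breed_db[b])))
```

Because the distances are normalized, a candidate whose keypoints are a scaled copy of a reference shape matches that breed with a fused similarity of 1.0.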
In a sixth implementation manner of the first aspect, the first detection model and the second detection model are convolutional neural network structures, each comprising a feature extraction network and a classification network; the feature extraction network comprises a data input layer, convolutional layers and pooling layers, and the classification network comprises a fully connected layer and an output layer.
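The layer types named above (convolution, pooling, fully connected) can be shown as minimal pure-Python building blocks. This is a pedagogical sketch of the operations, not the patent's network; a real implementation would use a deep learning framework, learned weights and many channels.

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)] for i in range(h)]

def max_pool(img, size=2):
    """Non-overlapping max pooling, the downsampling step of the pooling layer."""
    return [[max(img[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(img[0]) - size + 1, size)]
            for i in range(0, len(img) - size + 1, size)]

def relu(img):
    """Elementwise rectified linear activation."""
    return [[max(0.0, v) for v in row] for row in img]

def fully_connected(features, weights, bias):
    """One dense layer: each output is a weighted sum of all inputs plus a bias."""
    return [sum(w * f for w, f in zip(ws, features)) + b
            for ws, b in zip(weights, bias)]
```

Chaining `conv2d -> relu -> max_pool` over an image and feeding the flattened result to `fully_connected` reproduces, in miniature, the feature-extraction-then-classification structure the claim describes.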
In a second aspect, a lost dog finding device comprises: the basic information acquisition module is used for acquiring the basic information of the lost dog; a searching strategy determining module, configured to configure a corresponding searching strategy based on the basic information; the target image determining module is used for determining the image data of the lost dog as a target image based on the basic information; and the comparison result determining module is used for comparing the plurality of to-be-identified dog images based on a preset dog detection model to obtain a comparison result.
In a third aspect, a terminal device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method as described in any one of the above when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the method of any of the above.
According to the above technical solution, a search strategy is configured so that the region where the lost dog is most likely to appear can be obtained, reducing the cost of the search. The configured first detection model obtains the breed information of the dogs to be identified, and the second detection model obtains the features of the several dogs sharing that breed; the similarity between each dog to be identified and the lost dog is obtained by comparing these features, and the relationship between them is determined from the preset similarity threshold and the best similarity, so that the lost dog is determined among the plurality of dogs to be identified. The first detection model and the second detection model in this embodiment are constructed as convolutional neural networks, which improves recognition accuracy and speed, making the result more accurate and obtained faster.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
The methods, systems and/or processes of the figures are further described in accordance with the exemplary embodiments. These exemplary embodiments will be described in detail with reference to the drawings. They are non-limiting, and like reference numerals represent similar structures throughout the several views of the drawings.
Fig. 1 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Fig. 2 is a flow chart of a lost dog finding method according to some embodiments of the present application.
Fig. 3 is a block diagram of an apparatus provided according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions, the technical solutions of the present application are described in detail below with reference to the drawings and specific embodiments, and it should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions of the present application, and are not limitations of the technical solutions of the present application, and the technical features in the embodiments and examples of the present application may be combined with each other without conflict.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant guidance. It will be apparent, however, to one skilled in the art that the present application may be practiced without these specific details. In other instances, well-known methods, procedures, systems, compositions, and/or circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present application.
Flowcharts are used herein to illustrate the operations performed by systems according to embodiments of the present application. It should be expressly understood that the operations of a flowchart need not be performed in the order shown; they may instead be performed in reverse order or simultaneously. In addition, at least one other operation may be added to a flowchart, and one or more operations may be removed from it.
Before the embodiments of the present application are described in further detail, the terms and expressions referred to in the embodiments are explained; the following explanations apply to them.
(1) In response to: indicates the condition or state on which a performed operation depends. When that condition or state is satisfied, the one or more operations performed may take place in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which they are executed.
(2) Based on: indicates the condition or state on which a performed operation depends. When that condition or state is satisfied, the one or more operations performed may take place in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which they are executed.
(3) Convolutional neural network: a mathematical or computational model that mimics the structure and function of biological neural networks (the central nervous system of animals, particularly the brain) and is used to estimate or approximate functions.
The main application scenario of this technical solution is the search for lost dogs. At present, lost dogs are mainly searched for manually, while smart-management search relies on a GPS signal device fitted to the dog's body to acquire its position. That approach requires the device to be fitted to the dog in advance and the dog to wear it outdoors. In practice, because dogs are not supervised, they frequently bite and damage the GPS device they wear, so even a fitted device may fail through breakage. Furthermore, the GPS device may malfunction outdoors owing to environmental factors such as water immersion and collisions. The prior art therefore suffers from high search cost and low search accuracy when looking for a lost dog.
In view of this situation, the method provided by this embodiment determines the target lost dog based on image recognition technology. Image data of the lost dog and image data of dogs to be identified, that is, stray dogs in the outdoor environment, are acquired; the image data of the dogs to be identified are compared with the image data of the lost dog based on a preset detection model; the similarity of each of the plurality of candidate images is determined from the comparison result; and the dog to be identified whose image data best matches that of the lost dog is determined as the target dog based on the similarity. The target dog is then searched for based on its position information.
Based on the above technical background, an embodiment of the present application provides a terminal device 100, which comprises a memory 110, a processor 120 and a computer program stored in the memory and executable on the processor. The processor executes a lost dog finding method that compares the image data of a lost dog with the image data of a plurality of dogs to be identified to obtain the target dog. In this embodiment, the terminal device communicates with a user side and issues the acquired target dog information to the corresponding user side, implementing the delivery of that information in hardware. The information is sent over a network; before the terminal device is used, an association must be established between the user terminal and the terminal device, which can be achieved through registration. The terminal device may serve several user terminals or a single one, and a user terminal communicates with the terminal device using passwords or other encryption. In this embodiment, when the image information of a lost dog is entered, not only the identity information of the dog but also the information of its owner must be entered, and a user side must be configured for the owner, so that the owner can receive information promptly through it.
In this embodiment, the terminal may be a server whose physical structure comprises a memory, a processor and a communication unit. These components are electrically connected to one another, directly or indirectly, to enable data transfer or interaction; for example, they may be connected via one or more communication buses or signal lines. The memory stores specific information and programs, and the communication unit sends the processed information to the corresponding user side.
In this embodiment, the storage module is divided into two storage areas: a program storage unit and a data storage unit. The program storage unit is equivalent to a firmware area; its read-write permission is set to read-only, and the data stored there cannot be erased or changed. The data in the data storage unit can be erased or rewritten, and when the data storage area is full, newly written data overwrites the earliest historical data.
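The overwrite-the-oldest behavior of the data storage unit is exactly a ring buffer, which can be sketched with a bounded deque; the class and its method names are illustrative, not from the patent.

```python
from collections import deque

class DataStore:
    """Sketch of the data storage unit: fixed capacity, and once it is
    full, each newly written record overwrites the earliest history."""

    def __init__(self, capacity):
        self.records = deque(maxlen=capacity)  # drops from the left when full

    def write(self, record):
        self.records.append(record)

    def read_all(self):
        return list(self.records)
```

Writing four records into a three-slot store leaves only the three most recent, demonstrating the overwrite of the earliest data.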
The Memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor may be an integrated circuit chip having signal processing capabilities. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Referring to fig. 2, the working logic of the lost dog finding method in this embodiment is as follows: obtain the image information of the lost dog; obtain the image information of a plurality of dogs to be identified within the search range of the lost dog; compare the features of those images with the features of the lost dog's image based on the detection model; take the dogs to be identified that meet the preset similarity threshold in the comparison result; and determine the one with the maximum similarity among them as the target dog.
Based on the working logic above, this embodiment provides a lost dog finding method which specifically comprises the following steps:
Step S210: acquiring the basic information of the lost dog.
In this embodiment, this is the first step of the method. The basic information of the lost dog is obtained from information provided by the user; in this embodiment, it comprises the image information of the lost dog, the time at which the dog was lost and the corresponding position where it was lost, all configured by the user. The time and position information allow the computer to obtain a corresponding search strategy; the search range of the lost dog is determined from that strategy, and the range in turn determines which images of dogs to be identified are considered.
The position information in the basic information may be set as first position information and second position information, where the first and second position information are inferred. Providing at least two positions enlarges the subsequent search range so that more images of dogs to be identified can be acquired, improving the speed and accuracy of subsequent processing.
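The basic information collected in this step can be modeled as a simple record; every field name here is an assumption chosen for illustration, since the patent only lists the kinds of information (image, lost time, positions, body size) without a schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LostDogInfo:
    """Hypothetical record for the lost dog's basic information."""
    image_path: str                       # image information of the lost dog
    lost_time: str                        # e.g. an ISO timestamp
    first_position: Tuple[float, float]   # (lat, lon) where the dog was lost
    second_position: Optional[Tuple[float, float]] = None  # inferred sighting
    length_m: Optional[float] = None      # body length, used for size screening
    height_m: Optional[float] = None      # body height, used for size screening
```

A record with only the mandatory fields leaves the inferred second position unset until more information arrives.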
Step S220: configuring a corresponding search strategy based on the basic information.
In this embodiment, the search strategy is used to determine the search range of the lost dog; the search range was described in step S210, and it determines the image information of the corresponding dogs to be identified.
The search for the lost dog cannot proceed in a disordered, unfocused way; otherwise the number of candidate dog images would be large, and a large number of such images contain much noisy image data. The configuration performed in this step therefore reduces the time cost of image processing and speeds up processing.
Specifically, this step comprises the following methods:
and acquiring the time information and the corresponding position information of the lost dog in the basic information.
And determining a first searching range of the lost dog based on the time information and the corresponding position information.
In this embodiment, the first search range is not the final search range; it may be extended and expanded in light of subsequent search results, specifically as follows:
When no dog to be identified within the first search range can be matched with the lost dog's information, the first search range is extended, with the lost dog's position as the centre of a circle, to obtain a second search range.
The extension of the first search range includes the following method:
Historical movement data of the lost dog are obtained, and a movement range is derived from the historical movement data and the time information in the basic information; the second search range is the difference between this movement range and the first search range.
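The two-stage range logic above can be sketched as follows. This is a minimal illustration under stated assumptions: the average roaming speed and the circular/annular geometry are not given in the embodiment and are chosen here only to make the step concrete.

```python
def first_search_range(lost_time_s, report_time_s, speed_kmh=5.0):
    """Radius of the first (circular) search range around the lost
    position: it grows with the time elapsed since the dog was lost.
    The 5 km/h average roaming speed is an illustrative assumption."""
    hours = max(0.0, (report_time_s - lost_time_s) / 3600.0)
    return speed_kmh * hours  # radius in km

def second_search_range(first_radius_km, movement_radius_km):
    """The second search range is the difference between the dog's
    historical movement range and the already-searched first range,
    modelled here as an annulus around the lost position."""
    outer = max(movement_radius_km, first_radius_km)
    return (first_radius_km, outer)  # inner and outer radii in km
```

For example, a dog lost two hours before being reported would yield a first radius of 10 km, and a historical movement radius of 25 km would make the second range the 10-25 km annulus.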
And step S230, determining the image data of the lost dog as a target image based on the basic information.
And S240, acquiring a plurality of dog image data in the searching range as a plurality of dog images to be identified, and comparing the plurality of dog images to be identified based on a preset dog detection model to obtain a comparison result.
In this embodiment, the dog is found by means of image acquisition devices, i.e. video acquisition devices arranged in public places: video frames containing dog information are extracted from real-time video data and compared against the dog detection model to obtain the association between each video frame and the lost dog, thereby identifying whether the lost dog has appeared.
For this method, in order to improve identification efficiency, dog information is first extracted from the acquired video frames arranged in time order, i.e. classification detection is performed to separately identify people, objects, dogs and cats in the video. One classification detection approach is deep learning: a convolutional neural network is trained on a large amount of sample data to extract class-specific features from video frames. Another approach is based on a support vector machine. The accuracy of both approaches depends on the size of the training set, and both can recognise many kinds of dogs. However, neither is suitable for this embodiment: because the identification range is wide, every animal with dog characteristics is identified, which increases the complexity and cost of subsequent processing. For example, although the lost dog is a Labrador, a large breed, a classification model would identify every dog from small to large, so the amount of dog-containing image data to process would be large, which is disadvantageous for subsequent steps. Therefore, for the initial acquisition of dog images, not all dog image data need to be acquired; only images with high similarity to the lost dog are required. To this end, a method of acquiring dog image data is provided whose logic is to find dogs similar to the lost dog using the lost dog's height and length information. In order to improve recognition efficiency and broaden the model's usage scenarios, the method is implemented in this embodiment by means of machine vision, specifically:
and acquiring a plurality of targets to be detected in the searching range based on the video frames arranged according to the time sequence.
In this embodiment, for the historical, routine imagery from an image acquisition device, the process obtains the dynamic objects in an acquired video frame by configuring an image template. The image template can be understood as a background image: removing the background image from the acquired video frame leaves the real-time dynamic image. The video frames of the real-time dynamic image are then segmented by conventional image segmentation: binarization may be used, grey-level processing is applied to the corresponding video frame, and the segmented image is obtained after noise reduction and dilation processing.
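The background-subtraction and segmentation step described above can be sketched in a few lines of NumPy. This is a minimal sketch: the difference threshold and the 3x3 structuring element are illustrative assumptions, and the frames are assumed to be 2D greyscale arrays.

```python
import numpy as np

def segment_moving_objects(frame, background, diff_thresh=30):
    """Background subtraction, binarization, simple noise reduction
    (erosion) and dilation -- a minimal sketch of the segmentation step.
    `frame` and `background` are 2D greyscale arrays."""
    gray = frame.astype(float)
    bg = background.astype(float)
    mask = (np.abs(gray - bg) > diff_thresh).astype(np.uint8)

    def neighbour_count(m):
        # number of set pixels in each 3x3 neighbourhood (incl. centre)
        p = np.pad(m, 1)
        return sum(p[i:i + m.shape[0], j:j + m.shape[1]]
                   for i in range(3) for j in range(3))

    eroded = (neighbour_count(mask) >= 5).astype(np.uint8)    # noise reduction
    dilated = (neighbour_count(eroded) >= 1).astype(np.uint8)  # dilation
    return dilated
```

In practice a library morphology routine (e.g. from OpenCV or scikit-image) would replace the hand-rolled neighbourhood counting; it is written out here only to make the erode/dilate sequence explicit.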
And acquiring pixel coordinates of the video frames corresponding to the targets to be detected.
The obtaining of the pixel coordinates can be obtained based on a conventional processing method in machine vision, and is not described in this embodiment.
And converting the pixel coordinates into world coordinates corresponding to the target to be detected based on the internal reference and the external reference of the image acquisition device.
This process mainly converts the pixel coordinates of the object to be detected in a video frame into its real-world coordinates based on the calibration parameters of the image acquisition device. To improve the accuracy of the world coordinates, the distance between the object to be detected and the image acquisition device can be introduced. Specifically, the world coordinates are estimated through the focal length of the device, where the focal length is a relative focal length obtained from empirical parameters: several focal-length reference points are arranged in the image template, the relative focal length is determined from the relative position of the object to be detected with respect to those reference points, and the target world coordinates are obtained by multiplying the relative focal length by the computed world coordinates.
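The core pixel-to-world conversion can be sketched with the standard pinhole-camera model. This is a sketch under assumptions: the embodiment's relative-focal-length correction from reference points in the image template is not reproduced, and the known distance is taken as depth along the optical axis.

```python
import numpy as np

def pixel_to_world(u, v, distance, K, R, t):
    """Back-project pixel (u, v), at a known distance from the camera
    along the optical axis, into world coordinates using the intrinsic
    matrix K and the extrinsic pose (R, t), where a world point X maps
    to the camera frame as R @ X + t."""
    pixel_h = np.array([u, v, 1.0])
    cam = distance * (np.linalg.inv(K) @ pixel_h)  # point in camera frame
    return R.T @ (cam - t)                          # camera -> world frame
```

For instance, with focal length 800 px and principal point (320, 240), the principal-point pixel at distance 5 back-projects straight ahead to (0, 0, 5) when the camera pose is the identity.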
The target world coordinates are compared with the basic information of the lost dog, and target frames are screened from the video frames corresponding to the objects to be detected based on a comparison threshold; the target frames are dog image data, and the basic information of the dog includes the dog's length and height.
Because viewing angle causes distortion of world coordinates in an image acquisition device, the threshold is derived from the angle: the body-size information of the lost dog is converted, together with the angle, into a range valid for that angle, and this range is compared with the target world coordinates of each object to be detected. Objects whose target world coordinates fall within the range are taken to be dogs, and their video frames are taken as to-be-identified dog images.
Since there are multiple such video frames, the frame with the highest definition is selected as the best to-be-identified dog image. The selection is based on the contrast of the grey-processed frame: the to-be-identified dog image with the higher contrast is the clearer one.
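The contrast-based frame selection can be sketched as below. Using the standard deviation of grey intensities as the contrast measure is an assumption; the embodiment only states that the higher-contrast grey image is selected as the clearer one.

```python
import numpy as np

def sharpest_frame_index(gray_frames):
    """Return the index of the clearest to-be-identified dog image:
    the frame whose grey-level contrast (here, std-dev of intensities)
    is highest."""
    contrasts = [float(np.std(f)) for f in gray_frames]
    return int(np.argmax(contrasts))
```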
In this embodiment, the dog detection model includes a first detection model and a second detection model. The first detection model is used to identify the breeds of the dogs in the plurality of to-be-identified dog images, and the second detection model is used to identify the features of the dogs in those images.
And aiming at comparing a plurality of to-be-identified dog images based on a preset dog detection model to obtain a comparison result, the method specifically comprises the following steps:
and identifying a plurality of to-be-identified dog images based on the first detection model to obtain dog variety information in the to-be-identified dog images, and comparing the variety information with preset basic variety information in the lost dog basic information to obtain a comparison result.
And determining a second to-be-identified dog image based on the comparison result, and comparing the second to-be-identified dog image with the lost dog based on the second detection model to obtain the comparison result.
In this embodiment, the first detection model first determines which dogs to be identified are of the same breed as the lost dog; this step reduces the comparison work spent on large amounts of useless data. After the breed is determined, the similarity between the several dogs to be identified of that breed and the lost dog is computed, and the target dog is determined based on that similarity.
In this embodiment, the breed of the lost dog is determined from the information the user provides in the basic data, and the images of the corresponding breed and their image features are obtained by traversing a breed database with that information. A corresponding dog-breed feature database therefore needs to be configured in advance for this method.
In this embodiment, identifying only the dog species of the dog to be identified with respect to the first detection model includes the following steps:
the method comprises the steps of obtaining face image information of a dog to be recognized, obtaining a target feature map based on the face image information, obtaining a plurality of target detection points in the target feature map, obtaining coordinate parameters of the plurality of target detection points, and obtaining relative distances of the plurality of target detection points based on the plurality of coordinate parameters.
The method comprises the following steps of comparing relative distances of a plurality of target detection points with relative distances of a plurality of target detection points in a dog variety database to obtain the category information of the dog to be identified, and specifically comprises the following steps: comparing the similarity of the relative distance of any one target detection point with the relative distance of the corresponding target detection point in a preset canine variety database to obtain the similarity of the relative distances of a plurality of target detection points: the similarity of the relative distances of the target detection points is fused to obtain final similarity; and determining the category information of the corresponding dog to be identified based on the final similarity.
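The distance-comparison-and-fusion logic above can be sketched as follows. This is a minimal sketch: the per-distance similarity formula and the averaging fusion are assumptions, and the scale normalisation is added so the measure does not depend on how large the dog appears in the frame.

```python
import math

def relative_distances(points):
    """Pairwise distances between facial detection points (eyes, nose,
    ears, ...), normalised by the largest distance so the measure is
    scale-invariant (the normalisation is an added assumption)."""
    d = [math.dist(points[i], points[j])
         for i in range(len(points)) for j in range(i + 1, len(points))]
    m = max(d) or 1.0
    return [x / m for x in d]

def fused_similarity(query_points, breed_points):
    """Similarity of each relative distance against the breed-database
    entry, fused here by simple averaging into one final similarity."""
    q = relative_distances(query_points)
    b = relative_distances(breed_points)
    sims = [1.0 - abs(x - y) / max(x, y, 1e-9) for x, y in zip(q, b)]
    return sum(sims) / len(sims)
```

Because of the normalisation, a face and a uniformly scaled copy of it fuse to a similarity of 1.0.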
In the present embodiment, the breed information of the dog to be identified is determined mainly from the distance positions between the target detection points, which are in turn determined from the feature points of the dog's face. These feature points include, but are not limited to, the eyes, nose and ears, and breed identification is realised through the distances between them. The first detection model is trained by constructing a neural-network model: a number of sample data sets are obtained, classified and labelled, and then used for training. For example, for the sample data set corresponding to one dog breed, a sufficient number of target detection points in each dog image are labelled to obtain a detection sub-model for that breed, and several detection sub-models are fused to obtain the final first detection model; this neural-network training applies the related art to the task of classifying dog breeds.
In an actual usage scenario of this process, the coordinate values between the two eyes, between the eyes and the nose, between the eyes and the mouth, and between the nose and the mouth are obtained; based on these coordinate values, similarities are computed against the corresponding values in the preset dog-breed database to obtain the similarity of the relative distance of each target detection point; the similarities of the relative distances are fused into a final similarity; and the breed information of the dog to be identified is determined from that final similarity.
In this embodiment, the image data of the dog to be recognized may be acquired based on an image capturing device configured in a public space, or may be acquired based on a manual method for capturing a picture.
In this embodiment, a secondary feature comparison is performed on the dogs found to be of the same breed; its purpose is to find, among several dogs of the same breed, the dog to be identified whose features are most similar to those of the lost dog.
This process includes the following method: the target feature maps of the several dogs to be identified are obtained; key target detection points are extracted from each target feature map; the to-be-identified features of those key detection points are extracted; the to-be-identified features are compared with the features of the lost dog to obtain similarities; a similarity threshold is configured to judge the similarities; and screening based on the judged similarities yields the final similarity.
In this embodiment, the key target detection points are the detection points in the dog-face image, such as the eyes and nose, that can represent the dog's specific features. The second detection model used for feature extraction is a convolutional neural network structure comprising a feature extraction network and a classification network; the feature extraction network includes a data input layer, convolutional layers and pooling layers, and the feature mapping (classification) part includes a fully connected layer and an output layer.
In this embodiment, features of the several dogs to be identified are extracted and compared with the features of the lost dog to obtain similarities, and each similarity is compared with the similarity threshold to obtain a comparison result, specifically as follows:
When a similarity is not smaller than the similarity threshold, the corresponding second to-be-identified image is determined to be a target image.
When several similarities are not smaller than the similarity threshold, the largest of them is determined to be the final similarity, and the second to-be-identified image corresponding to that final similarity is determined to be the target image.
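The threshold-and-maximum selection rule above can be sketched as a small helper. The return convention (index of the target image, `None` when nothing passes) is an assumption added for illustration.

```python
def select_target_image(similarities, threshold):
    """Apply the comparison principle: keep candidates whose similarity
    is not smaller than the threshold; if several pass, the largest
    similarity is the final similarity and its image is the target.
    Returns the index of the target image, or None if no candidate
    reaches the threshold."""
    passing = [(s, i) for i, s in enumerate(similarities) if s >= threshold]
    if not passing:
        return None
    return max(passing)[1]
```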
In this embodiment, step S240 extracts the features of the dogs to be identified and compares them with the features of the lost dog to obtain similarities; whether each of the several dogs to be identified satisfies the comparison principle is judged against the similarity-based comparison principle, and the dog that satisfies it is determined to be the target dog of the search. The features of the lost dog may be obtained with a feature extraction model, which may be built from a convolutional neural network and is not described again in this embodiment.
Referring to fig. 3, the embodiment further provides a lost dog finding apparatus 300, including: and a basic information obtaining module 310, configured to obtain basic information of the lost dog. And a search policy determining module 320, configured to configure a corresponding search policy based on the basic information. And the target image determining module 330 is configured to determine the image data of the lost dog as a target image based on the basic information. The comparison result determining module 340 is configured to compare the plurality of to-be-identified dog images based on a preset dog detection model, so as to obtain a comparison result.
In this embodiment, the search strategy is used to determine the search range of the lost dog.
According to the technical solution of this application, a search strategy is configured, the region where the lost dog is most likely to appear is obtained from that strategy, and the cost of the search is thereby reduced. The configured first and second detection models respectively obtain the breed information of the dogs to be identified and the features of the several dogs sharing that breed; the similarity between each dog to be identified and the lost dog is obtained by comparing features; and the relationship between them is determined from a preset similarity threshold and the best similarity, so that the lost dog is determined among the several dogs to be identified. Because the first and second detection models in this embodiment are built as convolutional neural networks, identification accuracy and speed are improved, and the results obtained are more accurate and obtained more quickly.
It should be understood that technical terms not expressly defined in the above contents are not limited beyond the meanings that a person skilled in the art can clearly determine from the above disclosure.
A person skilled in the art can determine without doubt, from the above disclosure, certain preset, reference, predetermined, set and preferred labels of technical features/terms, such as threshold, threshold interval and threshold range. For technical feature terms that are not explained, a person skilled in the art can implement the technical solution clearly and completely by reasonable and unambiguous derivation based on the logical relations of the preceding and following paragraphs. Prefixes of unexplained technical feature terms, such as "first", "second", "example" and "target", can be unambiguously derived and determined from the context, as can suffixes such as "set" and "list".
The above disclosure of the embodiments of this application will be apparent to those skilled in the art. It should be understood that the process by which a skilled person derives and analyses unexplained technical terms from the above disclosure is based on the content described in this application, so the above is not an inventive judgment of the overall solution.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the broad application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific terminology to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of at least one embodiment of the present application may be combined as appropriate.
In addition, those skilled in the art will recognize that the various aspects of the present application may be illustrated and described in terms of any number of patentable categories or situations, including any new and useful combination of procedures, machines, products, or materials, or any new and useful modifications thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "unit", "component", or "system". Furthermore, aspects of the present application may be embodied as a computer product, located in at least one computer readable medium, which includes computer readable program code.
A computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable signal medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of aspects of the present application may be written in any combination of one or more programming languages, including object-oriented languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural languages such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order of the process elements and sequences described herein, the use of numerical letters, or other designations are not intended to limit the order of the processes and methods unless otherwise indicated in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it should be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware means, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
It should also be appreciated that in the foregoing description of embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of at least one embodiment. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, an embodiment may have fewer than all of the features of a single embodiment disclosed above.

Claims (10)

1. A lost dog searching method is characterized by comprising the following steps:
acquiring basic information of a lost dog;
configuring a corresponding searching strategy based on the basic information, wherein the searching strategy is used for determining the searching range of the lost dog;
determining the image data of the lost dog as a target image based on the basic information;
acquiring a plurality of dog image data in the searching range as a plurality of dog images to be identified, and comparing the plurality of dog images to be identified based on a preset dog detection model to obtain a comparison result;
the dog detection model comprises a first detection model and a second detection model, the first detection model is used for identifying the types of dogs in the plurality of dog images to be identified, and the second detection model is used for identifying the characteristics of dogs in the plurality of dog images to be identified;
acquiring image data of a plurality of dogs in the searching range, wherein the method comprises the following steps:
acquiring a plurality of targets to be detected in the searching range based on the video frames arranged according to the time sequence;
acquiring pixel coordinates of video frames corresponding to a plurality of targets to be detected;
converting the pixel coordinates into world coordinates corresponding to the target to be detected based on internal parameters and external parameters of an image acquisition device;
comparing the world coordinates with basic information of a lost dog, and screening target frames in video frames corresponding to a plurality of targets to be detected based on a comparison threshold, wherein the target frames are image data of the dog, and the basic information of the dog comprises the length and the height of the dog;
comparing the plurality of to-be-identified dog images based on a preset dog detection model to obtain a comparison result, wherein the method specifically comprises the following steps:
identifying a plurality of to-be-identified dog images based on the first detection model to obtain dog variety information in the to-be-identified dog images, and comparing the variety information with preset basic variety information in the lost dog basic information to obtain a comparison result;
and determining a second to-be-identified dog image based on the comparison result, and comparing the second to-be-identified dog image with the lost dog based on the second detection model to obtain the comparison result.
2. The method for searching the lost dog according to claim 1, wherein a second image of the dog to be identified is determined based on the comparison result, and the second image of the dog to be identified is compared with the lost dog based on the second detection model to obtain the comparison result, comprising the following steps:
performing feature extraction on the second dog image to be recognized based on the second detection model to obtain features to be recognized;
comparing the features to be identified with the features of the lost dog to obtain similarity;
comparing the similarity with a similarity threshold value of a preset value to obtain a comparison result, specifically comprising the following steps:
and when the similarity is not smaller than the similarity threshold value, determining that the second image to be recognized is a target image.
3. The method for finding the lost dog according to claim 2, wherein the second image to be recognized is determined as a target image when the similarity is not less than the similarity threshold, and the method comprises the following steps:
when several similarities are not smaller than the similarity threshold, determining the largest of the several similarities to be a final similarity, and determining the corresponding second to-be-identified image to be the target image based on the final similarity.
4. The method for searching the lost dog according to claim 1, wherein the method for obtaining the basic information of the lost dog and configuring the corresponding searching strategy based on the basic information comprises the following steps:
acquiring time information and corresponding position information of the lost dog in the basic information;
and determining a first searching range of the lost dog based on the time information and the corresponding position information.
5. The method for finding lost dogs according to claim 4, further comprising the steps of:
when the information of the to-be-identified dog and the lost dog in the first searching range cannot be matched, extending the first searching range by taking the position information of the lost dog as a circle center to obtain a second searching range;
the extension of the first search range comprises the following methods:
obtaining historical movement data of the lost dog, and obtaining a movement range based on the historical movement data and time information in basic information, wherein the difference value between the movement range and the first searching range is the second searching range.
6. The lost dog finding method according to claim 1, wherein identifying the plurality of images of dogs to be identified based on the first detection model to obtain breed information of the dog in each image to be identified comprises:
extracting a plurality of first features from the images of dogs to be identified based on the first detection model, and comparing the first features with a preset breed feature database to obtain the breed information of the dog to be identified, which specifically comprises:
acquiring face image information of the dog to be identified;
acquiring a target feature map based on the face image information;
acquiring a plurality of target detection points in the target feature map;
acquiring coordinate parameters of the target detection points, and obtaining relative distances between the target detection points based on the coordinate parameters;
comparing the relative distances of the target detection points with the relative distances of corresponding target detection points in a preset dog breed database to obtain the breed information of the dog to be identified, which specifically comprises:
comparing the relative distance of each target detection point with the relative distance of the corresponding target detection point in the preset dog breed database to obtain a similarity for each target detection point;
fusing the similarities of the relative distances of the target detection points to obtain a final similarity;
and determining the breed information of the dog to be identified based on the final similarity.
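The relative-distance comparison and similarity fusion in claim 6 can be sketched as below. The detection-point names, the normalisation by eye spacing, the `1/(1+|Δ|)` per-point similarity, and averaging as the fusion step are all assumptions for illustration; the claim does not specify which points are used or how the similarities are fused.

```python
import math

# Hypothetical facial detection points (names are illustrative, not from the patent).
POINTS = ["left_eye", "right_eye", "nose", "left_ear", "right_ear"]

def relative_distances(coords):
    """Pairwise distances between detection points, normalised by the eye
    spacing so the measure is independent of image scale (an assumption)."""
    scale = math.dist(coords["left_eye"], coords["right_eye"])
    return {(a, b): math.dist(coords[a], coords[b]) / scale
            for i, a in enumerate(POINTS) for b in POINTS[i + 1:]}

def similarity(d1, d2):
    """Per-pair similarity in (0, 1], fused here by simple averaging."""
    sims = [1.0 / (1.0 + abs(d1[k] - d2[k])) for k in d1]
    return sum(sims) / len(sims)  # fusion step: mean of per-point similarities

def identify_breed(query_coords, breed_db):
    """Return the breed whose stored relative distances best match the query."""
    q = relative_distances(query_coords)
    return max(breed_db, key=lambda breed: similarity(q, breed_db[breed]))

# Toy example: the query geometry matches the stored "corgi" entry exactly.
query = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 70),
         "left_ear": (15, 10), "right_ear": (85, 10)}
breed_db = {
    "corgi": relative_distances(query),
    "husky": relative_distances({"left_eye": (35, 40), "right_eye": (65, 40),
                                 "nose": (50, 95), "left_ear": (5, 0),
                                 "right_ear": (95, 0)}),
}
best = identify_breed(query, breed_db)
```

Because the distances are normalised before comparison, the same dog photographed at different sizes produces the same relative-distance signature, which is presumably why the claim compares distances rather than raw coordinates.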
7. The lost dog finding method according to claim 1, wherein the first detection model and the second detection model are convolutional neural network structures, each comprising a feature extraction network and a classification network; the feature extraction network comprises a data input layer, convolutional layers and pooling layers, and the classification network comprises a fully connected layer and an output layer.
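A minimal forward pass through the structure named in claim 7 (input → convolution → pooling → fully connected → output) can be sketched in plain NumPy. The filter count, layer sizes, ReLU activation, and softmax output are all assumptions; the claim specifies only the layer types, not their dimensions or activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution over a single channel (the convolutional layer),
    followed by a ReLU activation (an assumed choice)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling (the pooling layer)."""
    h, w = x.shape[0] - x.shape[0] % size, x.shape[1] - x.shape[1] % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def forward(img, kernel, fc_w, fc_b):
    """Data input -> feature extraction network -> classification network."""
    feat = max_pool(conv2d(img, kernel))   # feature extraction: conv + pool
    logits = feat.ravel() @ fc_w + fc_b    # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # output layer: softmax over classes

img = rng.random((12, 12))             # toy 12x12 grayscale dog image
kernel = rng.standard_normal((3, 3))   # one 3x3 filter (weights untrained)
fc_w = rng.standard_normal((25, 4))    # 5x5 pooled map -> 4 hypothetical breeds
fc_b = np.zeros(4)
probs = forward(img, kernel, fc_w, fc_b)
```

A practical implementation would use many filters, several conv/pool stages, and trained weights; this sketch only shows how the named layers compose.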
8. A lost dog finding device, comprising:
a basic information acquisition module, configured to acquire basic information of the lost dog;
a search strategy determination module, configured to configure a corresponding search strategy based on the basic information;
a target image determination module, configured to determine image data of the lost dog as a target image based on the basic information;
and a comparison result determination module, configured to compare the plurality of images of dogs to be identified based on a preset dog detection model to obtain a comparison result.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202211211425.8A 2022-09-30 2022-09-30 Missing dog searching method, device and equipment Pending CN115546830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211211425.8A CN115546830A (en) 2022-09-30 2022-09-30 Missing dog searching method, device and equipment

Publications (1)

Publication Number Publication Date
CN115546830A true CN115546830A (en) 2022-12-30

Family

ID=84731344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211211425.8A Pending CN115546830A (en) 2022-09-30 2022-09-30 Missing dog searching method, device and equipment

Country Status (1)

Country Link
CN (1) CN115546830A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016057998A (en) * 2014-09-12 2016-04-21 株式会社日立国際電気 Object identification method
CN107370989A (en) * 2017-07-31 2017-11-21 上海与德科技有限公司 Target seeking method and server
CN108073577A (en) * 2016-11-08 2018-05-25 中国电信股份有限公司 A kind of alarm method and system based on recognition of face
US20200026949A1 (en) * 2018-07-17 2020-01-23 Avigilon Corporation Hash-based appearance search
CN110751022A (en) * 2019-09-03 2020-02-04 平安科技(深圳)有限公司 Urban pet activity track monitoring method based on image recognition and related equipment
CN110929770A (en) * 2019-11-15 2020-03-27 云从科技集团股份有限公司 Intelligent tracking method, system and equipment based on image processing and readable medium
CN110991465A (en) * 2019-11-15 2020-04-10 泰康保险集团股份有限公司 Object identification method and device, computing equipment and storage medium
CN111666837A (en) * 2020-05-24 2020-09-15 哈尔滨理工大学 Method, terminal, upper computer and system for acquiring information of wandering animals
CN111770310A (en) * 2020-07-02 2020-10-13 广州博冠智能科技有限公司 Lost child identification and positioning method and device
CN114677627A (en) * 2022-03-23 2022-06-28 重庆紫光华山智安科技有限公司 Target clue finding method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN110751022B (en) Urban pet activity track monitoring method based on image recognition and related equipment
CN107690659B (en) Image recognition system and image recognition method
CN110135231B (en) Animal face recognition method and device, computer equipment and storage medium
WO2019033525A1 (en) Au feature recognition method, device and storage medium
US20210089763A1 (en) Animal identification based on unique nose patterns
US20230177796A1 (en) Methods and systems for video processing
KR102325259B1 (en) companion animal life management system and method therefor
CN111539317A (en) Vehicle illegal driving detection method and device, computer equipment and storage medium
CN109242000B (en) Image processing method, device, equipment and computer readable storage medium
Kim et al. Thermal sensor-based multiple object tracking for intelligent livestock breeding
Xue et al. Open set sheep face recognition based on Euclidean space metric
CN113792603A (en) Livestock body identification system based on artificial intelligence and use method
CN114120090A (en) Image processing method, device, equipment and storage medium
Mar et al. Cow detection and tracking system utilizing multi-feature tracking algorithm
Ahmad et al. AI-Driven livestock identification and insurance management system
Kaur et al. Cattle identification with muzzle pattern using computer vision technology: a critical review and prospective
CN115546830A (en) Missing dog searching method, device and equipment
CN117079339A (en) Animal iris recognition method, prediction model training method, electronic equipment and medium
CN116783632A (en) System and method for identifying pets based on nose
CN115830078A (en) Live pig multi-target tracking and behavior recognition method, computer equipment and storage medium
US11847849B2 (en) System and method for companion animal identification based on artificial intelligence
CN113762089A (en) Artificial intelligence-based livestock left face identification system and use method
CN115587896B (en) Method, device and equipment for processing canine medical insurance data
CN115424211B (en) Civilized dog raising terminal operation method and device based on big data and terminal
US20230337630A1 (en) Systems and methods of individual animal identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination