CN110751675B - Urban pet activity track monitoring method based on image recognition and related equipment - Google Patents

Urban pet activity track monitoring method based on image recognition and related equipment Download PDF

Info

Publication number
CN110751675B
CN110751675B (application CN201910829499.XA)
Authority
CN
China
Prior art keywords
pet
probability
image
category
acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910829499.XA
Other languages
Chinese (zh)
Other versions
CN110751675A (en)
Inventor
金晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910829499.XA priority Critical patent/CN110751675B/en
Publication of CN110751675A publication Critical patent/CN110751675A/en
Priority to PCT/CN2020/111880 priority patent/WO2021043074A1/en
Application granted granted Critical
Publication of CN110751675B publication Critical patent/CN110751675B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image recognition-based urban pet activity track monitoring method, which comprises the following steps: initializing the category probability of the pet category; acquiring a pet image and the geographic position, the equipment identification number and the acquisition time of image acquisition equipment; identifying the identification information of the pet and storing the identification information in association with the pet image; when the geographic position, the equipment identification number and the acquisition time are the same, correcting the category probability by adopting a first correction model; when the geographic position and the equipment identification number are the same but the acquisition time is different, correcting the category probability by adopting a second correction model; when the acquisition time is the same but the geographic position and the equipment identification number are different, correcting the category probability by adopting a third correction model; and determining the activity track of the pet based on the corrected category probability. The invention also provides an urban pet activity track monitoring device based on image recognition, a terminal and a storage medium. The method and the system can monitor the activity track of the pets in the city based on the probability.

Description

Urban pet activity track monitoring method based on image recognition and related equipment
Technical Field
The invention relates to the technical field of video monitoring, in particular to an image recognition-based urban pet activity track monitoring method, device, terminal and storage medium.
Background
In recent years, with the improvement of living standards, more and more urban residents keep pets. While enjoying the material and emotional comfort that pets bring, people should also treat pets well; this promotes harmony between people and pets and accords with the concept of building a smart city.
In the prior art, the movement track of an urban pet is tracked mainly by analyzing video surveillance to identify the moving target, so that the movement of the target is recorded for later tracking and analysis. However, most pets are cats and dogs, which are agile and move quickly, and analyzing the data acquired by multiple cameras through video surveillance yields only static pictures without temporal continuity. Each camera stores the video data it has monitored so far, and as the monitored target moves, its track appears within the monitoring range of different cameras, so the data of the target's movement track are scattered across different camera files. This makes tracking and analyzing the target very difficult and hinders later tracking and analysis of the pet's movement track.
Therefore, there is a need to provide a new solution for monitoring the activity area of urban pets.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a method, an apparatus, a terminal and a storage medium for monitoring the activity track of a pet in a city based on image recognition, which can monitor the activity track of the pet in the city based on probability.
The first aspect of the invention provides an image recognition-based urban pet activity track monitoring method, which comprises the following steps:
initializing class probabilities of each pet class;
acquiring a pet image and acquisition information acquired by image acquisition equipment, wherein the acquisition information comprises the geographic position, the equipment identification number and the acquisition time of the image acquisition equipment;
identifying the identification information of the pets in the pet image and storing the identification information and the pet image in association with the acquisition information;
judging whether the acquired information of any two pet images is the same or not;
when the geographic position, the equipment identification number and the acquisition time are the same, correcting the category probability by adopting a first correction model to obtain a first category probability;
when the geographic position and the equipment identification number are the same but the acquisition time is different, correcting the category probability by adopting a second correction model to obtain a second category probability;
When the acquisition time is the same but the geographic position and the equipment identification number are different, correcting the category probability by adopting a third correction model to obtain a third category probability;
and determining the activity track of the pet based on the corrected category probability.
In an optional embodiment, the correcting the class probability with the first correction model to obtain a first class probability includes:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a first category probability;
where γ is a correction factor coefficient, ρ_{a1∈p} and ρ_{a2∈p} are respectively the initial category probabilities of the two pets acquired by the same image acquisition device at the same time, and ρ′_{a1∈p} is the first category probability.
In an alternative embodiment, the correcting the class probability using the second correction model obtains a second class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected class probability by adopting the following formula to obtain a second class probability;
where β_t is the penalty factor, γ is the correction factor coefficient, t is the time interval, ρ_{a1∈p} and ρ_{a2∈p} are respectively the initial category probabilities of the two pets acquired by the same image acquisition device at different moments, and ρ′_{a1∈p} is the second category probability.
In an alternative embodiment, the correcting the class probability using the third correction model obtains a third class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a third category probability;
where β_l is the penalty factor, γ is the correction factor coefficient, l is the distance, ρ_{a1∈p} and ρ_{a2∈p} are respectively the initial category probabilities of the two pets collected by one image acquisition device, ρ_{a3∈p} and ρ_{a4∈p} are respectively the initial category probabilities of the two pets collected by the other image acquisition device at the same time, and ρ′_{a1∈p} is the third category probability.
In an alternative embodiment, the identifying the pet in the pet image and storing the identification information in association with the pet image and the collection information includes:
inputting the pet image into a pre-trained pet identification model;
acquiring the recognition result of the pet identification recognition model;
and determining the identification information of the pet according to the identification result.
In an alternative embodiment, said inputting said pet image into a pre-trained pet identification recognition model comprises:
and detecting a target area in the pet image.
Clipping the target area in the pet image;
and inputting the cut target area as an input image into a pre-trained gesture recognition model.
In an alternative embodiment, the determining the activity trajectory of the pet based on the modified class probability includes:
acquiring all corrected category probabilities corresponding to each pet image;
screening out the maximum category probability from all the corrected category probabilities to be used as the target category probability of the pet image;
acquiring acquisition information corresponding to pet images with the same target class probability;
and determining the activity track of the pet according to the acquired information.
A second aspect of the present invention provides an image recognition-based urban pet activity trail monitoring device, the device comprising:
the probability initialization module is used for initializing the category probability of each pet category;
the information acquisition module is used for acquiring the pet image and acquisition information acquired by the image acquisition equipment, wherein the acquisition information comprises the geographic position, the equipment identification number and the acquisition time of the image acquisition equipment;
The identification module is used for identifying the identification information of the pets in the pet image and storing the identification information, the pet image and the acquisition information in an associated mode;
the information judging module is used for judging whether the acquired information of any two pet images is the same or not;
the first correction module is used for correcting the category probability by adopting a first correction model to obtain a first category probability when the geographic position, the equipment identification number and the acquisition time are the same;
the second correction module is used for correcting the category probability by adopting a second correction model to obtain a second category probability when the geographic position and the equipment identification number are the same but the acquisition time is different;
the third correction module is used for correcting the class probability by adopting a third correction model to obtain a third class probability when the acquisition time is the same but the geographic position and the equipment identification number are different;
and the track determining module is used for determining the activity track of the pet based on the corrected category probability.
A third aspect of the present invention provides a terminal comprising a processor for implementing the image recognition-based urban pet activity trail monitoring method when executing a computer program stored in a memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image recognition-based urban pet activity trail monitoring method.
In summary, according to the urban pet activity track monitoring method, device, terminal and storage medium based on image recognition provided by the invention, the initialized category probabilities are corrected using the multiple parameters in the acquisition information, so that the category assigned to a pet in a pet image comes ever closer to its real category; in particular, the category probabilities of pets captured by different image acquisition devices are corrected. Finally, the pet images, identification information and acquisition information are associated based on the corrected category probabilities, and the activity track of the pet is determined from the associated information. The specific category of the pet never needs to be identified explicitly in the whole process, which avoids the inaccuracy of feature vectors extracted from pet images by traditional algorithms.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for monitoring a moving track of an urban pet based on image recognition according to an embodiment of the present invention.
Fig. 2 is a block diagram of an urban pet activity trail monitoring device based on image recognition according to a second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a terminal according to a third embodiment of the present invention.
The invention will be further described in the following detailed description in conjunction with the above-described figures.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, and the described embodiments are merely some, rather than all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Example 1
Fig. 1 is a flowchart of a method for monitoring a moving track of an urban pet based on image recognition according to an embodiment of the present invention.
In this embodiment, for a terminal that needs to perform image recognition-based urban pet activity track monitoring, the monitoring function provided by the method of the present invention may be directly integrated on the terminal, or may run on the terminal in the form of a software development kit (Software Development Kit, SDK).
As shown in fig. 1, the method for monitoring the activity track of the urban pet based on image recognition specifically includes the following steps, the sequence of the steps in the flowchart may be changed according to different requirements, and some may be omitted.
S11, initializing the category probability of each pet category.
In this embodiment, the class probability refers to the probability that a pet belongs to a certain class, the class probability is initialized first, the same initial value is given to the class probabilities of all the pet classes, and it is assumed that the initial class probability that a pet belongs to each class is the same.
The categories of pets that may occur in the city may be enumerated and then the category probabilities are initialized based on the enumerated categories such that each category probability is the same and the sum is 1.
By way of example, assume that the pets that may occur in a city are: Golden Retriever, Samoyed, Husky, German Shepherd, Malinois and the like. Five categories can be set correspondingly, and the category probability of each category is 1/5. The category probabilities can be initialized or modified according to actual requirements.
After initializing the category probability of each category, storing the category probability and the identification information of each category.
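The initialization of step S11 can be sketched as follows; this is a minimal illustration, and the breed names used are assumptions taken from the example above, not part of any fixed category list.

```python
# Hedged sketch of S11: give every enumerated pet category the same
# initial probability so that the probabilities sum to 1.
def init_category_probabilities(categories):
    """Uniform initial category probabilities over the given categories."""
    n = len(categories)
    return {c: 1.0 / n for c in categories}

categories = ["Golden Retriever", "Samoyed", "Husky",
              "German Shepherd", "Malinois"]
probs = init_category_probabilities(categories)
print(probs["Husky"])  # 0.2 — each of the 5 categories starts at 1/5
```

The resulting mapping can then be stored together with the identification information of each category, as described above.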
S12, acquiring a pet image and acquisition information acquired by image acquisition equipment, wherein the acquisition information comprises the geographic position, the equipment identification number and the acquisition time of the image acquisition equipment.
In this embodiment, a plurality of high-definition digital image capturing devices may be preset according to relevant policy specifications or actual scene requirements, so as to capture images of pets.
Presetting the plurality of image acquisition devices includes presetting both their positions and their mounting heights. For example, if a park prohibits pets from entering, an image acquisition device may be installed at the entrance or in an open area of the park. Once the installation position is determined, the installation height is chosen so that the pet images acquired are free of occlusion, which helps improve the recognition accuracy on the pet images.
In this embodiment, a unique device identification number may be set for each high-definition digital image capturing device, so as to indicate the identity of the high-definition digital image capturing device.
The acquisition information refers to the information recorded when the image acquisition device captures the pet image, and may include: the geographic position of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image is captured (hereinafter simply referred to as the acquisition time). The geographic position may be represented by latitude and longitude coordinates, the device identification number may be represented by the letter "C" followed by a number, and the acquisition time may be represented in year-month-day hour:minute:second format.
S13, identifying the identification information of the pets in the pet image and storing the identification information, the pet image and the acquisition information in a correlated mode.
Different pets have different identification information, i.e., the identification information has a one-to-one correspondence with the pet; for example, a Golden Retriever corresponds to identification information a1, a Samoyed to a2, and a Husky to a3.
After the identification information corresponding to the pet in the pet image is identified, the identification information can be associated with the geographical positions of the pet image and the image acquisition equipment, the equipment identification number of the image acquisition equipment and the time when the pet image is acquired and stored in a preset database.
For example, assuming that a Husky is captured by the image acquisition device C (camera) located at a certain geographic location L at a certain time T, the identification information of the Husky is determined to be a3 through steps S11-S13, and a record (a3, T, L, C) may be formed and stored in association. Any one parameter can then be used to retrieve the other associated parameters. For example, based on the device identification number, one may retrieve the pet images with the same device identification number together with their identification information, the geographic position of the image acquisition device, and the time at which each pet image was captured.
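The association storage and one-parameter lookup just described can be sketched as follows; the record layout and field values are illustrative assumptions, not a prescribed schema.

```python
# Hedged sketch of S13: store (identification, time, location, camera)
# records and index them by device identification number so that any one
# parameter can retrieve the other associated parameters.
from collections import defaultdict

records = []                   # all (pet_id, time, location, camera) tuples
by_camera = defaultdict(list)  # index keyed by device identification number

def store_record(pet_id, acq_time, location, camera):
    rec = (pet_id, acq_time, location, camera)
    records.append(rec)
    by_camera[camera].append(rec)
    return rec

store_record("a3", "2019-09-03 10:15:00", (22.54, 114.05), "C1")
store_record("a1", "2019-09-03 10:15:00", (22.54, 114.05), "C1")

# Query by device identification number alone:
print(len(by_camera["C1"]))  # 2
```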
In an alternative embodiment, the identifying the pet in the pet image and storing the identification information in association with the pet image and the collection information includes:
inputting the pet image into a pre-trained pet identification model;
acquiring the recognition result of the pet identification recognition model;
and determining the identification information of the pet according to the identification result.
In this embodiment, the pet identification recognition model is trained in advance, and the training process may include: a plurality of pet images are acquired in advance; dividing a plurality of pet images and identification information into a training set with a first proportion and a test set with a second proportion, wherein the first proportion is far greater than the second proportion; inputting the training set into a preset deep neural network to perform supervised learning and training to obtain a pet identification recognition model; inputting the test set into the pet identification model for testing to obtain a test passing rate; and when the test passing rate is greater than or equal to a preset passing rate threshold, ending training of the pet identification recognition model, and when the test passing rate is less than the preset passing rate threshold, re-dividing the training set and the testing set, learning and training the pet identification recognition model based on the new training set, and testing the passing rate of the newly trained pet identification recognition model based on the new testing set. Since the pet identification recognition model is not the focus of the present invention, the specific process of training the pet identification recognition model is not described in detail herein.
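The train/test partition described above can be sketched as follows; the 80/20 proportions, the seed, and the stub samples are assumptions for illustration — the actual deep-network training is, as the text notes, outside the focus of the invention.

```python
# Hedged sketch of the dataset split used to train and test the pet
# identification recognition model: a first (much larger) proportion for
# training and a second proportion for testing.
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Shuffle and split: train_ratio is assumed far greater than 1 - train_ratio."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(list(range(100)))
print(len(train), len(test))  # 80 20
```

On a failed pass-rate check, the same function can simply be called again with a different seed to re-divide the training and test sets, matching the retraining loop described above.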
In an alternative embodiment, said inputting said pet image into a pre-trained pet identification recognition model comprises:
and detecting a target area in the pet image.
Clipping the target area in the pet image;
and inputting the cut target area as an input image into a pre-trained gesture recognition model.
In this embodiment, a YOLO target detection algorithm may be used to frame the region where the pet is located in the pet image; the framed region is the target region. Because the number of pixels in the target region is far smaller than that of the whole pet image, and the target region contains almost only the target object (the pet) and no other non-target objects, cropping out the target region and using it as the input image of the pet identification recognition model not only improves the efficiency with which the model identifies the pet's identification information, but also, since no non-target objects interfere within the target region, improves the accuracy of that identification.
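The cropping step can be sketched as below. The detection itself is stubbed out here — a real system would obtain the bounding box from a YOLO-style detector — so the bounding box coordinates are assumptions.

```python
# Hedged sketch: crop the detector's bounding box out of the frame so that
# only the target region (almost exclusively the pet) is passed on to the
# identification model.
import numpy as np

def crop_target_region(image, bbox):
    """bbox = (x, y, w, h) in pixel coordinates, as a detector would return."""
    x, y, w, h = bbox
    return image[y:y + h, x:x + w]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # a full-HD frame
patch = crop_target_region(frame, (600, 400, 224, 224))
print(patch.shape)  # (224, 224, 3) — far fewer pixels than the full frame
```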
S14, judging whether the acquired information of any two pet images is the same or not.
In this embodiment, any two pet images may be obtained from a preset database, and whether the pets in the two pet images are in the same category may be determined based on the identification information and the collection information associated with the two pet images, and the initialized category probability may be corrected according to the identification information and the collection information. The probability that a certain pet belongs to a certain category is high, and the probability that the pet belongs to other categories is low. The activity track and the activity area of the pets of different categories can be analyzed based on the corrected category probabilities.
And S15, when the geographic position, the equipment identification number and the acquisition time are the same, correcting the class probability by adopting a first correction model to obtain a first class probability.
In this embodiment, the acquired acquisition information corresponding to any two pet images is identical, that is, the geographic position, the device identification number and the acquisition time are identical, which indicates that the two pet images are acquired by the same image acquisition device at the same time.
It is assumed that the image acquisition device is denoted by c, the geographic position by l, the population by p, and the pet identification by a; a belonging to the population p is written a∈p, and the probability of a∈p is ρ_{a∈p}.
A camera c collects two pets i1 and i2 at a certain moment t; then there are a1∈p and a2∈p, and the corresponding category probabilities are ρ_{a1∈p} and ρ_{a2∈p}.
In an optional embodiment, the correcting the class probability with the first correction model to obtain a first class probability includes:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a first category probability;
wherein, gamma is a correction factor coefficient, and />The initial category probabilities of two pets acquired by the same image acquisition device at the same time are respectively +.>For the first class probability.
The above embodiment is a category probability correction algorithm based on a single image acquisition device at the same moment: for pets appearing in the same scene, the same-population factor is given a weight γ.
S16, when the geographic position and the equipment identification number are the same but the acquisition time is different, correcting the category probability by adopting a second correction model to obtain a second category probability.
In this embodiment, the geographic positions and the device identification numbers corresponding to any two obtained pet images are the same, and the acquisition times are different, which indicates that the two pet images are acquired by the same image acquisition device at different times.
A camera c collects two different pets i1 and i2 at different moments t1 and t2, so that there are a1∈p and a2∈p, with the corresponding category probabilities ρ_{a1∈p} and ρ_{a2∈p}.
In an alternative embodiment, the correcting the class probability using the second correction model obtains a second class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected class probability by adopting the following formula to obtain a second class probability;
where β_t is the penalty factor, γ is the correction factor coefficient, t is the time interval, ρ_{a1∈p} and ρ_{a2∈p} are respectively the initial category probabilities of the two pets acquired by the same image acquisition device at different moments, and ρ′_{a1∈p} is the second category probability.
This embodiment is a category probability correction algorithm based on a single image acquisition device at different moments: pets appearing in the same scene within a short time are given a penalty factor β_t according to the time interval, and the same-population factor is weighted by β_t · γ; that is, a penalty factor β_t, related to the time interval t, is applied on top of the correction factor γ.
And S17, when the acquisition time is the same but the geographic position and the equipment identification number are different, correcting the class probability by adopting a third correction model to obtain a third class probability.
In this embodiment, the geographic positions and the device identification numbers corresponding to any two obtained pet images are different, but when the acquisition times are the same, it is indicated that the two pet images are acquired by two different image acquisition devices at the same time.
The cameras c1 and c2 respectively collect the pets i1, i2 and i3, i4 at the same time t, so that there are events a_{i1} ∈ p, a_{i2} ∈ p, a_{i3} ∈ p and a_{i4} ∈ p, with corresponding category probabilities ρ_{i1,p}, ρ_{i2,p}, ρ_{i3,p} and ρ_{i4,p}.
In an alternative embodiment, the correcting the class probability using the third correction model obtains a third class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a third category probability;
wherein β_l is the penalty factor, γ is the correction factor coefficient, l is the distance between the two image acquisition devices, ρ_{i1,p} and ρ_{i3,p} are the initial category probabilities of the two pets collected by different image acquisition devices at the same time, and ρ''_{i1,p} is the third category probability.
The above embodiment is a category probability correction algorithm for multiple image acquisition devices at the same moment, where i1 and i3 are matched to the same pet by a matching algorithm (two detections made at the same moment by cameras far apart cannot be the same pet), so the correction factor β_l here is related to the distance l.
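The third correction model can likewise be sketched under assumptions, since its formula is not reproduced here: the cross-device boost is scaled by a distance penalty β_l that falls off as the two cameras move apart. The decay form and the `scale` constant are illustrative, not from the patent.

```python
import math

def third_correction(p1, p3, distance_m, gamma=0.5, scale=1000.0):
    """Sketch of the third correction model: two different cameras, same
    moment. If the matching algorithm pairs detections i1 and i3 as the
    same pet, the same-population boost is weighted by beta_l * gamma,
    where beta_l decays with the distance between the two devices
    (far-apart simultaneous detections cannot be the same pet)."""
    beta_l = math.exp(-distance_m / scale)  # penalty factor, related to distance l
    corrected = [a + beta_l * gamma * b for a, b in zip(p1, p3)]
    total = sum(corrected)                  # normalization step
    return [c / total for c in corrected]
```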
S18, determining the activity track of the pet based on the corrected category probability.
In this embodiment, after correcting the category probabilities of any two pets according to the collected information, the corrected category probabilities, the pet images, the collected information and the identification information may be stored in an associated manner, and the activity track of the pet in the same category may be obtained based on the information stored in the associated manner, and the activity area of the pet may be determined according to the activity track.
In an alternative embodiment, the determining the activity trajectory of the pet based on the modified class probability includes:
acquiring all corrected category probabilities corresponding to each pet image;
screening out the maximum category probability from all the corrected category probabilities to be used as the target category probability of the pet image;
acquiring acquisition information corresponding to pet images with the same target class probability;
and determining the activity track of the pet according to the acquired information.
For example, suppose pet a1 corresponds to three pet images whose corrected category probabilities for Golden Retriever, Samoyed, Husky, German Shepherd and Malinois are (0.9, 0.1, 0, 0, 0), (0.9, 0, 0.1, 0, 0) and (0.8, 0.1, 0.1, 0, 0) respectively. The maximum category probability, 0.9, is taken as the target category probability of a1, indicating that a1 belongs to the Golden Retriever category. The acquisition information of all pet images corresponding to a1 is then extracted, and the activity track of a1 is determined from it: according to the positions and identification numbers of the image acquisition devices in the acquisition information and the corresponding acquisition times, it can be determined when and where this dog appeared.
The activity track of the pet can also be displayed in the form of a map.
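The track-determination steps above can be sketched as follows. The record layout (identity, corrected probabilities, position, device number, timestamp) is an illustrative schema, not the patent's storage format.

```python
def activity_track(records):
    """Sketch of step S18: pick each pet's target category as the maximum
    corrected category probability, then order that pet's acquisition
    records by time to form its activity track."""
    tracks = {}
    for ident, probs, pos, dev, ts in records:
        category = probs.index(max(probs))  # target category probability index
        tracks.setdefault(ident, {"category": category, "points": []})
        tracks[ident]["points"].append((ts, pos, dev))
    for t in tracks.values():
        t["points"].sort()                  # chronological order = activity track
    return tracks
```

The sorted point list, joined with the device positions, is exactly what a map display of the track would consume.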
The urban pet activity track monitoring method based on image recognition can be applied to searching for lost pets, rescuing stray pets, gathering law-enforcement evidence where pets are prohibited from entering a specific area, and the like.
In summary, the urban pet activity track monitoring method based on image recognition initializes the category probability of each pet category and obtains the pet image and acquisition information sent by the image acquisition device, the acquisition information comprising the geographic position, the device identification number and the acquisition time of the image acquisition device. The identification information of the pet in the pet image is recognized and stored in association with the pet image and the acquisition information, and it is judged whether the acquisition information of any two pet images is identical. When the geographic position, device identification number and acquisition time in the acquisition information are identical, the category probability is updated with a first correction model to obtain the first category probability; when the geographic position and device identification number are identical but the acquisition times differ, with a second correction model to obtain the second category probability; and when the acquisition times are identical but the geographic positions and device identification numbers differ, with a third correction model to obtain the third category probability. The activity track of the pet is then determined based on the corrected category probability.
In the method above, the initialized category probabilities are corrected using multiple parameters of the acquisition information, so that the category estimated for each pet in a pet image comes ever closer to the pet's real category; in particular, the category probabilities of pets captured by different image acquisition devices are corrected. Finally, the pet images, identification information and acquisition information are associated based on the corrected category probabilities, and the activity track of the pet is determined from the associated information. The specific category of a pet never needs to be explicitly recognized in the whole process, which avoids the problem that feature vectors extracted from pet images by traditional algorithms are inaccurate.
Example two
Fig. 2 is a block diagram of an urban pet activity trail monitoring device based on image recognition according to a second embodiment of the present invention.
In some embodiments, the image recognition-based urban pet activity trail monitoring device 20 may include a plurality of functional modules comprised of program code segments. Program code for each program segment in the image recognition based urban pet activity trail monitoring device 20 may be stored in a memory of the terminal and executed by the at least one processor to perform image recognition based monitoring of urban pet activity trail (see fig. 1 for details).
In this embodiment, the image recognition-based urban pet activity trail monitoring device 20 may be divided into a plurality of functional modules according to the functions performed by the device. The functional module may include: the device comprises a probability initialization module 201, an information acquisition module 202, an identification recognition module 203, an information judgment module 204, a first correction module 205, a second correction module 206, a third correction module 207 and a track determination module 208. The module referred to in the present invention refers to a series of computer program segments capable of being executed by at least one processor and of performing a fixed function, stored in a memory. In the present embodiment, the functions of the respective modules will be described in detail in the following embodiments.
The probability initializing module 201 is configured to initialize a category probability of each pet category.
In this embodiment, the category probability refers to the probability that a pet belongs to a certain category. The category probabilities are initialized first: the same initial value is assigned to the category probabilities of all pet categories, i.e. it is assumed that the initial probability of a pet belonging to each category is the same.
The categories of pets that may occur in the city may be enumerated and then the category probabilities are initialized based on the enumerated categories such that each category probability is the same and the sum is 1.
By way of example, assume the pets that may occur in a city are: Golden Retriever, Samoyed, Husky, German Shepherd, Malinois and the like; 5 categories can then be set correspondingly, and the category probability of each category is 1/5. The category probabilities can also be initialized or modified according to actual requirements.
After initializing the category probability of each category, storing the category probability and the identification information of each category.
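The initialization step above amounts to a uniform distribution over the enumerated categories, which can be sketched as:

```python
def init_category_probabilities(categories):
    """Sketch of the probability initialization step: every enumerated
    urban pet category starts with the same probability, summing to 1."""
    n = len(categories)
    return {c: 1.0 / n for c in categories}
```

For the five example breeds this yields 1/5 = 0.2 per category, matching the text.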
The information acquisition module 202 is configured to acquire the pet image and the acquisition information acquired by the image acquisition device, where the acquisition information includes a geographic location, a device identification number, and an acquisition time of the image acquisition device.
In this embodiment, a plurality of high-definition digital image capturing devices may be preset according to relevant policy specifications or actual scene requirements, so as to capture images of pets.
The presetting of the plurality of image acquisition devices comprises presetting positions of the plurality of image acquisition devices and heights of the image acquisition devices. For example, assuming that a park prohibits a pet from entering, an image capturing apparatus may be installed at an entrance or an open place of the park. When the installation position of the image acquisition equipment is determined, the installation height of the image acquisition equipment is determined, so that the pet image acquired by the image acquisition equipment is free from shielding, and the identification precision of the pet image is convenient to improve.
In this embodiment, a unique device identification number may be set for each high-definition digital image capturing device, so as to indicate the identity of the high-definition digital image capturing device.
The acquisition information refers to the information recorded when the image acquisition device acquires the pet image, and may include: the geographic position of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image was acquired (hereinafter simply the acquisition time). The geographic position may be represented by longitude and latitude coordinates, the device identification number by the letter C followed by a number, and the acquisition time by year-month-day hour:minute:second.
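A minimal record structure for this acquisition information might look as follows; the field names and the example values are illustrative assumptions, only the formats (longitude/latitude, "C" plus a number, a full timestamp) come from the text.

```python
from dataclasses import dataclass

@dataclass
class AcquisitionInfo:
    """Sketch of the acquisition information attached to each pet image."""
    longitude: float      # geographic position of the image acquisition device
    latitude: float
    device_id: str        # e.g. "C12": the letter C followed by a number
    captured_at: str      # year-month-day hour:minute:second

# Example record for one captured pet image (values are hypothetical)
info = AcquisitionInfo(114.05, 22.55, "C12", "2019-09-03 08:30:00")
```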
The identification identifying module 203 is configured to identify identification information of a pet in the pet image and store the identification information in association with the pet image and the acquired information.
Different pets have different identification information, i.e. the identification information has a one-to-one correspondence with the pets; for example, the Golden Retriever corresponds to identification information a1, the Samoyed to a2, and the Husky to a3.
After the identification information corresponding to the pet in the pet image is identified, the identification information can be associated with the geographical positions of the pet image and the image acquisition equipment, the equipment identification number of the image acquisition equipment and the time when the pet image is acquired and stored in a preset database.
For example, assuming that a Husky is captured by the image acquisition device C (a camera) located at a certain geographic location L at a certain time T, and the identification information of the Husky is recognized as a3 by the above modules, a record (a3, T, L, C) may be formed and stored in association. The other parameters can then be conveniently retrieved by association from any one parameter; for example, based on the device identification number, the pet images with the same device identification number, their identification information, the geographic position of the image acquisition device and the times the pet images were acquired can all be obtained.
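The associated lookup described above can be sketched with plain tuples of the form (identity, time, location, device); the record contents below are hypothetical examples.

```python
def query_by_device(records, device_id):
    """Sketch of association storage: each record (identity, time,
    location, device) can be retrieved via any one of its parameters —
    here, filtered by the device identification number."""
    return [r for r in records if r[3] == device_id]

# Hypothetical associated records in the preset database
db = [("a3", "2019-09-03 08:30:00", (114.05, 22.55), "C12"),
      ("a1", "2019-09-03 09:00:00", (114.06, 22.56), "C7")]
```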
In an alternative embodiment, the identification recognition module 203 recognizes the identification information of the pet in the pet image and stores the identification information in association with the pet image and the collection information includes:
inputting the pet image into a pre-trained pet identification model;
acquiring the recognition result of the pet identification recognition model;
and determining the identification information of the pet according to the identification result.
In this embodiment, the pet identification recognition model is trained in advance, and the training process may include: a plurality of pet images are acquired in advance; dividing a plurality of pet images and identification information into a training set with a first proportion and a test set with a second proportion, wherein the first proportion is far greater than the second proportion; inputting the training set into a preset deep neural network to perform supervised learning and training to obtain a pet identification recognition model; inputting the test set into the pet identification model for testing to obtain a test passing rate; and when the test passing rate is greater than or equal to a preset passing rate threshold, ending training of the pet identification recognition model, and when the test passing rate is less than the preset passing rate threshold, re-dividing the training set and the testing set, learning and training the pet identification recognition model based on the new training set, and testing the passing rate of the newly trained pet identification recognition model based on the new testing set. Since the pet identification recognition model is not the focus of the present invention, the specific process of training the pet identification recognition model is not described in detail herein.
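The split-train-test loop described above can be sketched generically. Since the patent does not detail the deep neural network, `train_fn` and `eval_fn` are placeholders supplied by the caller; the ratio and threshold defaults are illustrative.

```python
import random

def train_until_pass(samples, labels, train_ratio=0.8, pass_threshold=0.9,
                     train_fn=None, eval_fn=None, max_rounds=10):
    """Sketch of the described procedure: divide the data into a large
    training set and a small test set, train, measure the test pass
    rate, and re-divide and retrain while the rate stays below the
    preset pass-rate threshold."""
    data = list(zip(samples, labels))
    model = None
    for _ in range(max_rounds):
        random.shuffle(data)                      # re-divide training/test sets
        cut = int(len(data) * train_ratio)
        train, test = data[:cut], data[cut:]
        model = train_fn(train)                   # supervised learning step
        if eval_fn(model, test) >= pass_threshold:
            return model                          # pass rate reached: stop training
    return model
```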
In an alternative embodiment, said inputting said pet image into a pre-trained pet identification recognition model comprises:
and detecting a target area in the pet image.
Clipping the target area in the pet image;
and inputting the cut target area as an input image into the pre-trained pet identification recognition model.
In this embodiment, a YOLO target detection algorithm may be used to select, with a detection frame, the region where the pet in the pet image is located; the framed region is taken as the target region. Because the number of pixels in the target region is far smaller than in the whole pet image, and the target region contains almost only the target object (the pet) and no other non-target objects, cropping out the target region as the input image of the pet identification recognition model not only improves the efficiency of identifying the pet identification information, but also, since there is no interference from non-target objects, improves the accuracy of that identification.
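The cropping step itself is straightforward once a detector has produced a bounding box. The sketch below assumes the box is given as (x, y, w, h) and, for illustration only, treats the image as a plain 2-D pixel list rather than invoking an actual YOLO implementation.

```python
def crop_target_region(image, box):
    """Sketch of the target-region step: a detector (e.g. YOLO, not
    implemented here) returns a bounding box (x, y, w, h); cropping the
    pet image to that box removes non-target pixels before the crop is
    fed to the identification model."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]
```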
The information judging module 204 is configured to judge whether the acquired information of any two pet images is the same.
In this embodiment, any two pet images may be obtained from a preset database, and whether the pets in the two pet images are in the same category may be determined based on the identification information and the collection information associated with the two pet images, and the initialized category probability may be corrected according to the identification information and the collection information. The probability that a certain pet belongs to a certain category is high, and the probability that the pet belongs to other categories is low. The activity track and the activity area of the pets of different categories can be analyzed based on the corrected category probabilities.
And the first correction module 205 is configured to correct the class probability by using a first correction model to obtain a first class probability when the geographic location, the equipment identification number and the acquisition time are the same.
In this embodiment, the acquired acquisition information corresponding to any two pet images is identical, that is, the geographic position, the device identification number and the acquisition time are identical, which indicates that the two pet images are acquired by the same image acquisition device at the same time.
It is assumed that an image acquisition device is denoted by c, a geographic position by l, a population (category) by p and a pet identity by a; a belonging to the population p is written a ∈ p, and the probability of a ∈ p is ρ.
A camera c collects two pets i1 and i2 at a certain moment t; then there are events a_{i1} ∈ p and a_{i2} ∈ p, with corresponding category probabilities ρ_{i1,p} and ρ_{i2,p}.
In an alternative embodiment, the modifying the class probability by the first modification module 205 using the first modification model to obtain the first class probability includes:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a first category probability;
wherein γ is the correction factor coefficient, ρ_{i1,p} and ρ_{i2,p} are the initial category probabilities of the two pets acquired by the same image acquisition device at the same time, and ρ'_{i1,p} is the first category probability.
The above embodiment is a category probability correction algorithm for a single image acquisition device at a single moment: for pets appearing in the same scene, the same-population factor is given a weight γ.
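Since the first correction formula is not reproduced in this text, it can only be sketched. The additive form below, boosting each category of pet i1 by γ times pet i2's probability for that category and then renormalizing, is an assumption consistent with the γ-weight description, not the patent's exact formula.

```python
def first_correction(p1, p2, gamma=0.5):
    """Sketch of the first correction model: two pets captured by the
    same camera at the same moment. Each category probability of pet i1
    gains a same-population boost of gamma times pet i2's probability,
    then the distribution is normalized."""
    corrected = [a + gamma * b for a, b in zip(p1, p2)]
    total = sum(corrected)            # normalize so probabilities sum to 1
    return [c / total for c in corrected]
```

With uniform initial probabilities, a confident sighting of i2 pulls i1's distribution toward the same category, which is exactly the "same scene, same population" intuition the text describes.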
And a second correction module 206, configured to correct the class probability by using a second correction model to obtain a second class probability when the geographic location and the equipment identification number are the same but the acquisition time is different.
In this embodiment, the geographic positions and the device identification numbers corresponding to any two obtained pet images are the same, and the acquisition times are different, which indicates that the two pet images are acquired by the same image acquisition device at different times.
A camera c collects two different pets i1 and i2 at different moments t1 and t2, so that there are events a_{i1} ∈ p and a_{i2} ∈ p, with corresponding category probabilities ρ_{i1,p} and ρ_{i2,p}.
In an alternative embodiment, the second correction module 206 corrects the class probability using a second correction model to obtain a second class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected class probability by adopting the following formula to obtain a second class probability;
wherein β_t is the penalty factor, γ is the correction factor coefficient, t is the time interval, ρ_{i1,p} and ρ_{i2,p} are the initial category probabilities of the two pets acquired by the same image acquisition device at different moments, and ρ''_{i1,p} is the second category probability.
This embodiment is a category probability correction algorithm for a single image acquisition device at different moments: pets appearing in the same scene within a short time are given a penalty factor β_t according to the time interval, so the same-population factor receives a weight β_t·γ, i.e. the penalty factor β_t is applied to the correction factor γ, with β_t related to the time interval t.
And a third correction module 207, configured to correct the class probability by using a third correction model to obtain a third class probability when the acquisition time is the same but the geographic location and the equipment identification number are different.
In this embodiment, when the geographic positions and device identification numbers corresponding to any two obtained pet images are different but the acquisition times are the same, the two pet images were acquired by two different image acquisition devices at the same time.
The cameras c1 and c2 respectively collect the pets i1, i2 and i3, i4 at the same time t, so that there are events a_{i1} ∈ p, a_{i2} ∈ p, a_{i3} ∈ p and a_{i4} ∈ p, with corresponding category probabilities ρ_{i1,p}, ρ_{i2,p}, ρ_{i3,p} and ρ_{i4,p}.
In an alternative embodiment, the third correction module 207 corrects the class probability using a third correction model to obtain a third class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a third category probability;
wherein β_l is the penalty factor, γ is the correction factor coefficient, l is the distance between the two image acquisition devices, ρ_{i1,p} and ρ_{i3,p} are the initial category probabilities of the two pets collected by different image acquisition devices at the same time, and ρ''_{i1,p} is the third category probability.
The above embodiment is a category probability correction algorithm for multiple image acquisition devices at the same moment, where i1 and i3 are matched to the same pet by a matching algorithm (two detections made at the same moment by cameras far apart cannot be the same pet), so the correction factor β_l here is related to the distance l.
The track determining module 208 is configured to determine an activity track of the pet based on the modified category probability.
In this embodiment, after correcting the category probabilities of any two pets according to the collected information, the corrected category probabilities, the pet images, the collected information and the identification information may be stored in an associated manner, and the activity track of the pet in the same category may be obtained based on the information stored in the associated manner, and the activity area of the pet may be determined according to the activity track.
In an alternative embodiment, the trajectory determination module 208 determines the activity trajectory of the pet based on the revised category probabilities includes:
acquiring all corrected category probabilities corresponding to each pet image;
screening out the maximum category probability from all the corrected category probabilities to be used as the target category probability of the pet image;
acquiring acquisition information corresponding to pet images with the same target class probability;
and determining the activity track of the pet according to the acquired information.
For example, suppose pet a1 corresponds to three pet images whose corrected category probabilities for Golden Retriever, Samoyed, Husky, German Shepherd and Malinois are (0.9, 0.1, 0, 0, 0), (0.9, 0, 0.1, 0, 0) and (0.8, 0.1, 0.1, 0, 0) respectively. The maximum category probability, 0.9, is taken as the target category probability of a1, indicating that a1 belongs to the Golden Retriever category. The acquisition information of all pet images corresponding to a1 is then extracted, and the activity track of a1 is determined from it: according to the positions and identification numbers of the image acquisition devices in the acquisition information and the corresponding acquisition times, it can be determined when and where this dog appeared.
The activity track of the pet can also be displayed in the form of a map.
The urban pet activity track monitoring method based on image recognition can be applied to searching for lost pets, rescuing stray pets, gathering law-enforcement evidence where pets are prohibited from entering a specific area, and the like.
In summary, the urban pet activity track monitoring device based on image recognition initializes the category probability of each pet category and obtains the pet image and acquisition information sent by the image acquisition device, the acquisition information comprising the geographic position, the device identification number and the acquisition time of the image acquisition device. The identification information of the pet in the pet image is recognized and stored in association with the pet image and the acquisition information, and it is judged whether the acquisition information of any two pet images is identical. When the geographic position, device identification number and acquisition time in the acquisition information are identical, the category probability is updated with a first correction model to obtain the first category probability; when the geographic position and device identification number are identical but the acquisition times differ, with a second correction model to obtain the second category probability; and when the acquisition times are identical but the geographic positions and device identification numbers differ, with a third correction model to obtain the third category probability. The activity track of the pet is then determined based on the corrected category probability.
In the device above, the initialized category probabilities are corrected using multiple parameters of the acquisition information, so that the category estimated for each pet in a pet image comes ever closer to the pet's real category; in particular, the category probabilities of pets captured by different image acquisition devices are corrected. Finally, the pet images, identification information and acquisition information are associated based on the corrected category probabilities, and the activity track of the pet is determined from the associated information. The specific category of a pet never needs to be explicitly recognized in the whole process, which avoids the problem that feature vectors extracted from pet images by traditional algorithms are inaccurate.
Example III
Fig. 3 is a schematic structural diagram of a terminal according to a third embodiment of the present invention. In the preferred embodiment of the invention, the terminal 3 comprises a memory 31, at least one processor 32, at least one communication bus 33 and a transceiver 34.
It will be appreciated by those skilled in the art that the configuration of the terminal shown in fig. 3 is not limiting of the embodiments of the present invention, and that it may be a bus type configuration, a star type configuration, or a combination of hardware and software, or a different arrangement of components, as the terminal 3 may include more or less hardware or software than is shown.
In some embodiments, the terminal 3 includes a terminal capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction, and its hardware includes, but is not limited to, a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The terminal 3 may further comprise a client device, which includes, but is not limited to, any electronic product capable of performing man-machine interaction with a client through a keyboard, a mouse, a remote controller, a touch pad, a voice control device, etc., for example, a personal computer, a tablet computer, a smart phone, a digital camera, etc.
It should be noted that the terminal 3 is only used as an example, and other electronic products that may be present in the present invention or may be present in the future are also included in the scope of the present invention by way of reference.
In some embodiments, the memory 31 is used to store program codes and various data, such as devices installed in the terminal 3, and to enable high-speed, automatic access to programs or data during operation of the terminal 3. The Memory 31 includes Read-Only Memory (ROM), programmable Read-Only Memory (PROM), erasable programmable Read-Only Memory (EPROM), one-time programmable Read-Only Memory (One-time Programmable Read-Only Memory, OTPROM), electrically erasable rewritable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc Memory, magnetic tape Memory, or any other medium that can be used for computer-readable carrying or storing data.
In some embodiments, the at least one processor 32 may be comprised of an integrated circuit, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The at least one processor 32 is a Control Unit (Control Unit) of the terminal 3, connects respective components of the entire terminal 3 using various interfaces and lines, and executes various functions of the terminal 3 and processes data by running or executing programs or modules stored in the memory 31 and calling data stored in the memory 31.
In some embodiments, the at least one communication bus 33 is arranged to enable connected communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the terminal 3 may further include a power source (such as a battery) for supplying power to the respective components, and preferably, the power source may be logically connected to the at least one processor 32 through a power management device, so as to perform functions of managing charging, discharging, power consumption management, etc. through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The terminal 3 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
It should be understood that the described embodiments are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a terminal, or a network device, etc.) or a processor (processor) to perform portions of the methods described in the various embodiments of the invention.
In a further embodiment, in connection with fig. 2, the at least one processor 32 may execute the operating means of the terminal 3 as well as various installed applications, program codes, etc., such as the various modules described above.
The memory 31 has program code stored therein, and the at least one processor 32 can invoke the program code stored in the memory 31 to perform related functions. For example, each of the modules depicted in fig. 2 is a program code stored in the memory 31 and executed by the at least one processor 32 to implement the functions of the respective module.
In one embodiment of the invention, the memory 31 stores a plurality of instructions that are executed by the at least one processor 32 to implement all or part of the steps of the method of the invention.
Specifically, the specific implementation method of the above instruction by the at least one processor 32 may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that it may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not imply any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. An image recognition-based urban pet activity track monitoring method is characterized by comprising the following steps:
initializing class probabilities of each pet class;
acquiring a pet image and acquisition information acquired by image acquisition equipment, wherein the acquisition information comprises the geographic position, the equipment identification number and the acquisition time of the image acquisition equipment;
identifying the identification information of the pets in the pet image and storing the identification information and the pet image in association with the acquisition information;
judging whether the acquired information of any two pet images is the same or not;
when the geographic position, the equipment identification number and the acquisition time are the same, correcting the category probability by adopting a first correction model to obtain a first category probability;
when the geographic position and the equipment identification number are the same but the acquisition time is different, correcting the category probability by adopting a second correction model to obtain a second category probability;
when the acquisition time is the same but the geographic position and the equipment identification number are different, correcting the category probability by adopting a third correction model to obtain a third category probability;
and determining the activity track of the pet based on the corrected category probability.
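The three branches of the claim above can be sketched as a simple dispatch over the acquisition information. This is an illustration only: the class, field, and function names below are my own assumptions, not the patent's, and the correction formulas themselves (published as images) are not reproduced.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AcquisitionInfo:
    """Acquisition metadata stored with each pet image, per the claim."""
    location: str    # geographic position of the image acquisition device
    device_id: str   # equipment identification number
    timestamp: float  # acquisition time (seconds since epoch)


def choose_correction_model(a: AcquisitionInfo, b: AcquisitionInfo) -> str:
    """Pick which correction model applies to a pair of pet images.

    Mirrors the three branches of claim 1; returns only a label, since
    the patent's correction formulas are not reproduced in the text.
    """
    same_place = a.location == b.location and a.device_id == b.device_id
    if same_place and a.timestamp == b.timestamp:
        return "first"   # same device, same acquisition time
    if same_place:
        return "second"  # same device, different acquisition times
    if a.timestamp == b.timestamp:
        return "third"   # same acquisition time, different devices
    return "none"        # no correction branch applies
```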
2. The method of claim 1, wherein modifying the class probability using a first modification model to obtain a first class probability comprises:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a first category probability;
wherein γ is a correction factor coefficient, the two initial category probabilities entering the formula are those of two pets acquired by the same image acquisition device at the same time, and the normalized result is the first category probability.
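Purely as an illustration: the patent's first-model formula is published as an image and is not reproduced in the text above, so the function below shows only one hypothetical form of the two-step shape the claim describes (correct the probabilities of two pets seen in the same frame using a factor γ, then renormalize each vector to sum to 1).

```python
def correct_and_normalize(p1, p2, gamma=0.5):
    """Hypothetical first-model correction (assumed form, not the patent's).

    Two pets captured by the same device at the same moment are distinct
    animals, so the probability each vector assigns to the other pet's top
    category is damped by gamma; each vector is then renormalized.
    """
    def damp(p, other):
        # index of the rival pet's most likely category
        k = max(range(len(other)), key=other.__getitem__)
        q = list(p)
        q[k] *= gamma            # damp the contested category
        s = sum(q)
        return [x / s for x in q]  # normalization step of the claim

    return damp(p1, p2), damp(p2, p1)
```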
3. The method of claim 1, wherein the correcting the class probability using the second correction model results in a second class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected class probability by adopting the following formula to obtain a second class probability;
wherein the formula involves two penalty factors and a correction factor coefficient γ, t_u1 is the acquisition time of pet u1, t_u2 is the acquisition time of pet u2, the two initial category probabilities entering the formula are those of two pets acquired by the same image acquisition device at different moments, and the normalized result is the second category probability.
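Again as a sketch only: the real penalty factors are published as an image and not reproduced above. A penalty that weakens the coupling between two captures from the same device as the time gap |t_u1 − t_u2| grows could take the following form; the exponential decay and the scale tau are my assumptions, not the patent's.

```python
import math


def second_model_penalty(t_u1, t_u2, gamma=0.5, tau=60.0):
    """Hypothetical second-model penalty factor (assumed form).

    Captures from the same device at different times are coupled more
    weakly the larger the time gap; tau (seconds) is an assumed decay
    scale, gamma the correction factor coefficient of the claim.
    """
    return gamma * math.exp(-abs(t_u1 - t_u2) / tau)
```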
4. The method of claim 1, wherein the correcting the class probability using a third correction model yields a third class probability as follows:
correcting the category probability by adopting the following formula;
normalizing the corrected category probability by adopting the following formula to obtain a third category probability;
wherein the formula involves penalty factors and a correction factor coefficient γ, l_r1 is the distance between pet r1 and the corresponding image acquisition device, l_r3 is the distance between pet r3 and the corresponding image acquisition device, and t_r1, t_r2 and t_r4 are the acquisition times of pets r1, r2 and r4 respectively; the initial category probabilities entering the formula are those of pet r1 and pet r2, acquired by different image acquisition devices at the same time, together with that of pet r4, likewise acquired by a different image acquisition device at the same time, and the normalized result is the third category probability.
5. The method of any one of claims 1 to 4, wherein the identifying the identification information of the pet in the pet image and storing the identification information in association with the pet image and the collection information comprises:
inputting the pet image into a pre-trained pet identification recognition model;
acquiring the recognition result of the pet identification recognition model;
and determining the identification information of the pet according to the identification result.
6. The method of claim 5, wherein said inputting the pet image into a pre-trained pet identification recognition model comprises:
detecting a target area in the pet image;
clipping the target area in the pet image;
and inputting the cut target area as an input image into a pre-trained gesture recognition model.
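The detect-crop-recognize pipeline of the claim above can be illustrated with a minimal cropping step. The 2-D list representation of the image and the (top, left, bottom, right) box convention are simplifications of my own, not the patent's implementation; a real detector would supply the box.

```python
def crop_target_region(image, box):
    """Crop the detected target region from a pet image before recognition.

    `image` is a 2-D list of pixel values (rows) and `box` is
    (top, left, bottom, right) in pixel coordinates, exclusive on the
    bottom/right edges -- a stand-in for a real detector's output.
    """
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]
```

The cropped region would then be passed as the input image to the pre-trained recognition model.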
7. The method of any one of claims 1 to 4, wherein the determining the activity trajectory of the pet based on the revised class probabilities comprises:
acquiring all corrected category probabilities corresponding to each pet image;
screening out the maximum category probability from all the corrected category probabilities to be used as the target category probability of the pet image;
acquiring acquisition information corresponding to pet images with the same target class probability;
and determining the activity track of the pet according to the acquired information.
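The steps of the claim above can be sketched as follows. The record layout, and the use of the top category as a stand-in for a pet's identity when grouping images, are illustrative assumptions rather than the patent's data model.

```python
from collections import defaultdict


def build_trajectories(records):
    """Sketch of claim 7: for each image keep the category with the highest
    corrected probability, group images sharing that target category, and
    order each group's acquisition info by acquisition time.

    Each record: (probs_by_category: dict, acquisition: (location, device_id, t)).
    """
    tracks = defaultdict(list)
    for probs, acq in records:
        target = max(probs, key=probs.get)  # screen out the maximum category probability
        tracks[target].append(acq)
    # an activity track is the time-ordered sequence of capture locations
    return {cat: sorted(acqs, key=lambda a: a[2]) for cat, acqs in tracks.items()}
```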
8. An image recognition-based urban pet activity trail monitoring device, comprising:
the probability initialization module is used for initializing the category probability of each pet category;
the information acquisition module is used for acquiring the pet image and acquisition information acquired by the image acquisition equipment, wherein the acquisition information comprises the geographic position, the equipment identification number and the acquisition time of the image acquisition equipment;
the identification module is used for identifying the identification information of the pets in the pet image and storing the identification information, the pet image and the acquisition information in an associated mode;
the information judging module is used for judging whether the acquired information of any two pet images is the same or not;
the first correction module is used for correcting the category probability by adopting a first correction model to obtain a first category probability when the geographic position, the equipment identification number and the acquisition time are the same;
the second correction module is used for correcting the category probability by adopting a second correction model to obtain a second category probability when the geographic position and the equipment identification number are the same but the acquisition time is different;
the third correction module is used for correcting the class probability by adopting a third correction model to obtain a third class probability when the acquisition time is the same but the geographic position and the equipment identification number are different;
and the track determining module is used for determining the activity track of the pet based on the corrected category probability.
9. A terminal comprising a processor for implementing the image recognition-based urban pet activity trail monitoring method according to any one of claims 1 to 7 when executing a computer program stored in a memory.
10. A computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the image recognition-based urban pet activity trail monitoring method according to any one of claims 1 to 7.
CN201910829499.XA 2019-09-03 2019-09-03 Urban pet activity track monitoring method based on image recognition and related equipment Active CN110751675B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910829499.XA CN110751675B (en) 2019-09-03 2019-09-03 Urban pet activity track monitoring method based on image recognition and related equipment
PCT/CN2020/111880 WO2021043074A1 (en) 2019-09-03 2020-08-27 Urban pet motion trajectory monitoring method based on image recognition, and related devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910829499.XA CN110751675B (en) 2019-09-03 2019-09-03 Urban pet activity track monitoring method based on image recognition and related equipment

Publications (2)

Publication Number Publication Date
CN110751675A CN110751675A (en) 2020-02-04
CN110751675B true CN110751675B (en) 2023-08-11

Family

ID=69276012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910829499.XA Active CN110751675B (en) 2019-09-03 2019-09-03 Urban pet activity track monitoring method based on image recognition and related equipment

Country Status (2)

Country Link
CN (1) CN110751675B (en)
WO (1) WO2021043074A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751675B (en) * 2019-09-03 2023-08-11 平安科技(深圳)有限公司 Urban pet activity track monitoring method based on image recognition and related equipment
CN111354024B (en) * 2020-04-10 2023-04-21 深圳市五元科技有限公司 Behavior prediction method of key target, AI server and storage medium
CN112529020B (en) * 2020-12-24 2024-05-24 携程旅游信息技术(上海)有限公司 Animal identification method, system, equipment and storage medium based on neural network
CN112904778B (en) * 2021-02-02 2022-04-15 东北林业大学 Wild animal intelligent monitoring method based on multi-dimensional information fusion
CN114550490B (en) * 2022-02-22 2023-12-22 北京信路威科技股份有限公司 Parking space statistics method, system, computer equipment and storage medium of parking lot
CN117692767B (en) * 2024-02-02 2024-06-11 深圳市积加创新技术有限公司 Low-power consumption monitoring system based on scene self-adaptive dynamic time-sharing strategy

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376786A (en) * 2018-10-31 2019-02-22 中国科学院深圳先进技术研究院 A kind of image classification method, device, terminal device and readable storage medium storing program for executing
CN109934176A (en) * 2019-03-15 2019-06-25 艾特城信息科技有限公司 Pedestrian's identifying system, recognition methods and computer readable storage medium
CN109934293A (en) * 2019-03-15 2019-06-25 苏州大学 Image-recognizing method, device, medium and obscure perception convolutional neural networks
CN110163301A (en) * 2019-05-31 2019-08-23 北京金山云网络技术有限公司 A kind of classification method and device of image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9256807B1 (en) * 2012-09-27 2016-02-09 Google Inc. Generating labeled images
US10713500B2 (en) * 2016-09-12 2020-07-14 Kennesaw State University Research And Service Foundation, Inc. Identification and classification of traffic conflicts using live video images
CN110751675B (en) * 2019-09-03 2023-08-11 平安科技(深圳)有限公司 Urban pet activity track monitoring method based on image recognition and related equipment


Also Published As

Publication number Publication date
WO2021043074A1 (en) 2021-03-11
CN110751675A (en) 2020-02-04

Similar Documents

Publication Publication Date Title
CN110751675B (en) Urban pet activity track monitoring method based on image recognition and related equipment
CN110751022B (en) Urban pet activity track monitoring method based on image recognition and related equipment
US11379696B2 (en) Pedestrian re-identification method, computer device and readable medium
CN202940921U (en) Real-time monitoring system based on face identification
CN106355367A (en) Warehouse monitoring management device
CN107833328B (en) Access control verification method and device based on face recognition and computing equipment
CN102999951A (en) Intelligent personnel attendance checking method based on wireless network received signal strength
CN111914667B (en) Smoking detection method and device
CN111553266A (en) Identification verification method and device and electronic equipment
CN111985452B (en) Automatic generation method and system for personnel movement track and foot drop point
CN113837030A (en) Intelligent personnel management and control method and system for epidemic situation prevention and control and computer equipment
WO2021022795A1 (en) Method, apparatus, and device for detecting fraudulent behavior during facial recognition process
CN112926491A (en) User identification method and device, electronic equipment and storage medium
CN116453226A (en) Human body posture recognition method and device based on artificial intelligence and related equipment
CN114038040A (en) Machine room inspection monitoring method, device and equipment
CN113837138B (en) Dressing monitoring method, dressing monitoring system, dressing monitoring medium and electronic terminal
CN112153341B (en) Task supervision method, device and system, electronic equipment and storage medium
CN113592902A (en) Target tracking method and device, computer equipment and storage medium
CN114581949A (en) Computer room personnel monitoring method and device, computer equipment and storage medium
CN113762096A (en) Health code identification method and device, storage medium and electronic equipment
CN109960995B (en) Motion data determination system, method and device
EP3958228A1 (en) System, device and method for tracking movement of objects across different areas by multiple surveillance cameras
CN111666786A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115577379B (en) Hierarchical protection security analysis method, system and equipment
CN116311080B (en) Monitoring image detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant