WO2021043074A1 - Method for monitoring the movement trajectory of pets in an urban setting based on image recognition, and related devices - Google Patents
- Publication number
- WO2021043074A1 (PCT/CN2020/111880)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pet
- category probability
- category
- image
- probability
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/70—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry
Definitions
- This application relates to the field of artificial intelligence technology, and in particular to a method, device, terminal and storage medium for monitoring urban pet activity tracks based on image recognition.
- Tracking the activity trajectories of urban pets currently relies mainly on video surveillance, which analyzes and identifies moving targets and records their movement for later tracking and analysis.
- The inventor realized that most pets are cats and dogs, which are relatively active and move quickly.
- When video surveillance is used to analyze the data collected by multiple cameras, the results are static images without temporal continuity. Each camera saves only the video data it has itself monitored, and as a monitored target moves, its activity track passes through the monitoring ranges of different cameras. The data describing the target's activity track is therefore scattered across different camera files, which makes tracking and analyzing the target very difficult and hampers later analysis of the pet's trajectory.
- the first aspect of the present application provides a method for monitoring urban pet activity tracks based on image recognition, the method including:
- Acquiring pet images and collection information collected by an image collection device where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
- the second correction model is used to correct the category probability to obtain the second category probability
- the third correction model is used to correct the category probability to obtain the third category probability
- the activity trajectory of the pet is determined based on the corrected category probability.
- a second aspect of the present application provides a device for monitoring urban pet activity tracks based on image recognition, the device comprising:
- the probability initial module is used to initialize the category probability of each pet category
- An information acquisition module for acquiring pet images and collection information collected by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device;
- An identification recognition module for identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information;
- the information judgment module is used to judge whether the collected information of any two pet images is the same
- the first correction module is configured to use the first correction model to correct the category probability to obtain the first category probability when the geographic location, device identification number, and collection time are all the same;
- the second correction module is configured to use a second correction model to correct the category probability to obtain the second category probability when the geographic location and the device identification number are the same but the collection time is different;
- the third correction module is configured to use the third correction model to correct the category probability to obtain the third category probability when the collection time is the same but the geographic location and the device identification number are different;
- the trajectory determination module is used to determine the activity trajectory of the pet based on the corrected category probability.
- a third aspect of the present application provides a terminal, the terminal includes a processor, and the processor is configured to implement the following steps when executing computer-readable instructions stored in a memory:
- Acquiring pet images and collection information collected by an image collection device where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
- the second correction model is used to correct the category probability to obtain the second category probability
- the third correction model is used to correct the category probability to obtain the third category probability
- the activity trajectory of the pet is determined based on the corrected category probability.
- a fourth aspect of the present application provides a computer-readable storage medium having computer-readable instructions stored on the computer-readable storage medium, and when the computer-readable instructions are executed by a processor, the following steps are implemented:
- Acquiring pet images and collection information collected by an image collection device where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
- the second correction model is used to correct the category probability to obtain the second category probability
- the third correction model is used to correct the category probability to obtain the third category probability
- the activity trajectory of the pet is determined based on the corrected category probability.
- the image recognition-based urban pet activity track monitoring method, device, terminal, and storage medium described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities.
- This application corrects the initialized category probabilities using multiple parameters in the collection information, so that the predicted categories of the pets in the pet images move closer and closer to the pets' true categories, especially the category probabilities of pets that appear across different image collection devices.
- the pet image, identification information and collection information are correlated based on the corrected category probability, and the pet’s activity track is determined based on the correlated information. The entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate feature vector extraction of pet images by traditional algorithms.
- FIG. 1 is a flowchart of a method for monitoring urban pet activity tracks based on image recognition provided in Embodiment 1 of the present application.
- Fig. 2 is a structural diagram of a device for monitoring urban pet activity tracks based on image recognition provided in the second embodiment of the present application.
- FIG. 3 is a schematic structural diagram of a terminal provided in Embodiment 3 of the present application.
- FIG. 1 is a flowchart of a method for monitoring urban pet activity tracks based on image recognition provided in Embodiment 1 of the present application.
- The function of monitoring urban pet activity tracks based on image recognition can be integrated directly on the terminal, or run in the terminal as a software development kit (SDK).
- the method for monitoring urban pet activity tracks based on image recognition specifically includes the following steps. According to different needs, the order of the steps in the flowchart can be changed, and some of the steps can be omitted.
- the category probability refers to the probability that a certain pet belongs to a certain category.
- The category probability is initialized first: the category probabilities of all pet categories are assigned the same initial value, under the assumption that a given pet is initially equally likely to belong to each category.
- For example, the pets that may appear in a city are: Golden Retriever, Samoyed, Husky, German Shepherd, Malinois, etc.
- 5 categories can be set correspondingly, and the category probability of each category is 1/5.
- the category probability can be initialized or modified according to actual needs.
- the category probability and the identification information of each category are stored.
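The initialization step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the category names and function name are assumptions.

```python
# Sketch of step S11: assign every pet category the same initial
# probability 1/N. Five illustrative categories, as in the example above.
CATEGORIES = ["Golden Retriever", "Samoyed", "Husky", "German Shepherd", "Malinois"]

def init_category_probabilities(categories):
    """Return a uniform distribution over the given pet categories."""
    n = len(categories)
    return {c: 1.0 / n for c in categories}

probs = init_category_probabilities(CATEGORIES)
print(probs["Husky"])  # 0.2
```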
- S12 Acquire pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device.
- a plurality of high-definition digital image acquisition devices may be preset to collect images of pets according to relevant policy regulations or actual scene requirements.
- the presetting a plurality of image acquisition devices includes presetting the positions of the plurality of image acquisition devices and the height of the image acquisition devices.
- the image capture device can be installed at the entrance and exit of the park or in an open area.
- the installation position of the image acquisition device is determined, the installation height of the image acquisition device is determined, so that the pet image collected by the image acquisition device is unobstructed, which is convenient for improving the recognition accuracy of the pet image.
- The collection information refers to the information recorded when the image collection device collects the pet image, and may include: the geographic location of the image collection device, the device identification number of the image collection device, and the time when the pet image was collected (hereinafter referred to as the collection time).
- the geographic location may be represented by latitude and longitude coordinates
- the device identification number may be represented by C+digits
- the collection time may be represented by year-month-day-hour-minute-second.
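A possible record layout for this collection information is sketched below; the field names and the concrete values are illustrative assumptions, not the patent's.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record for the collection information attached to a pet image.
@dataclass
class CollectionInfo:
    latitude: float          # geographic location as latitude/longitude
    longitude: float
    device_id: str           # device identification number, e.g. "C012"
    collected_at: datetime   # collection time: year-month-day-hour-minute-second

info = CollectionInfo(39.9042, 116.4074, "C012", datetime(2020, 9, 1, 8, 30, 0))
print(info.device_id)  # C012
```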
- S13 Identify identification information of the pet in the pet image and store the identification information in association with the pet image and the collected information.
- Each pet is assigned unique identification information; that is, identification information and pets are in one-to-one correspondence.
- golden retriever corresponds to identification information a1
- Samoyed corresponds to identification information a2
- Husky corresponds to identification information a3.
- After the identification information corresponding to the pet in the pet image is identified, it can be associated with the pet image, the geographic location of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image was collected, and stored in a preset database.
- For example, an image capture device C located at a certain geographic location L captures a husky at time T.
- If steps S11-S13 above identify the husky's identification information as a3, a record (a3, T, L, C) can be formed and stored associatively. This makes it convenient to later retrieve the other parameters from any one parameter. For example, from a device identification number, one can retrieve the pet images, identification information, geographic location of the image collection device, and collection times associated with that number.
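The associative storage and by-any-field lookup described above can be sketched as follows; the in-memory list and field names are illustrative stand-ins for the preset database.

```python
# Sketch of the associative record store: each record links identification
# info a, collection time T, location L, and device ID C, so the remaining
# fields can be retrieved from any one of them.
records = []

def store(identification, time, location, device_id):
    records.append({"id": identification, "time": time,
                    "location": location, "device": device_id})

def find_by(field, value):
    """Return every record whose given field matches the value."""
    return [r for r in records if r[field] == value]

store("a3", "2020-09-01 08:30:00", "(39.90, 116.40)", "C012")
store("a1", "2020-09-01 09:00:00", "(39.91, 116.41)", "C013")
print(find_by("id", "a3")[0]["device"])  # C012
```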
- the identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information includes:
- the identification information of the pet is determined according to the recognition result.
- The pet identification recognition model is pre-trained. The training process may include: acquiring a plurality of pet images in advance; dividing the pet images and identification information into a training set of a first proportion and a test set of a second proportion, where the first proportion is much larger than the second; inputting the training set into a preset deep neural network for supervised learning to obtain the pet identification recognition model; inputting the test set into the pet identification recognition model to obtain a test pass rate; ending training when the test pass rate is greater than or equal to a preset pass-rate threshold; and, when the test pass rate is below the threshold, re-dividing the training and test sets, retraining the model on the new training set, and retesting the pass rate on the new test set. Since the pet identification recognition model is not the focus of this application, its training process is not elaborated here.
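The described split-train-test-retry loop can be sketched as below. The `train` and `evaluate` callables are placeholders for the unspecified deep-network steps; the 90/10 ratio and round cap are assumptions.

```python
import random

# Hedged sketch of the training procedure: split, train, measure the pass
# rate, and re-split/retrain until the threshold is reached.
def split(samples, train_ratio=0.9):
    """Re-divide samples into a large training set and a small test set."""
    shuffled = random.sample(samples, len(samples))
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def train_until_pass(samples, threshold, train, evaluate, max_rounds=10):
    """Retrain on fresh splits until the test pass rate reaches threshold."""
    model = None
    for _ in range(max_rounds):
        train_set, test_set = split(samples)
        model = train(train_set)
        if evaluate(model, test_set) >= threshold:
            break
    return model
```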
- the input of the pet image into a pre-trained pet identification recognition model includes:
- the target area in the pet image is detected.
- The cropped target area is used as the input image for the pre-trained pet identification recognition model.
- the YOLO target detection algorithm can be used to select the area of the pet in the pet image with a detection frame.
- The area selected by the detection frame is the target area. The number of pixels in the target area is much smaller than in the entire pet image, and the target area contains almost only the pet itself, with no other non-target objects.
- Cropping out the target area as the input image therefore not only improves the efficiency with which the pet identification recognition model identifies identification information, but also, because the target area is free of interference from non-target objects, improves the model's recognition accuracy.
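The cropping step can be sketched as follows. The detector itself (e.g. YOLO) is not reproduced here; the `(x, y, w, h)` box format and stand-in image are assumptions.

```python
# Sketch of the preprocessing step: a detector returns a bounding box for
# the pet, and only that region is kept as the model input.
def crop_target_area(image_rows, box):
    """image_rows: 2-D grid of pixels; box: (x, y, w, h) from the detector."""
    x, y, w, h = box
    return [row[x:x + w] for row in image_rows[y:y + h]]

image = [[0 for _ in range(100)] for _ in range(100)]  # stand-in 100x100 image
patch = crop_target_area(image, (10, 20, 30, 40))
print(len(patch), len(patch[0]))  # 40 30
```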
- any two pet images can be obtained from a preset database, and based on the identification information and collection information associated with the two pet images, it is determined whether the pets in the two pet images belong to the same category, and based on the identification information And collect information to modify the initialized category probability.
- the probability of a certain pet belonging to a certain category is large, and the probability of belonging to other categories is small. Later, the activity trajectory and activity area of pets of different categories can be analyzed based on the corrected category probability.
- When the collection information of any two acquired pet images is the same, that is, the geographic location, device identification number, and acquisition time are exactly the same, the two pet images were acquired by the same image acquisition device at the same time.
- the image acquisition device is represented by c
- the geographic location is represented by l
- the population is represented by p
- the pet identification is represented by a.
- a belongs to the population p, denoted a ∈ p
- the probability of a ∈ p is denoted β
- Suppose a certain camera c captures two pets i1 and i2 at a certain time t; then a_i1 ∈ p and a_i2 ∈ p, with corresponding category probabilities β_i1 and β_i2. (The formulas appear only as images in the original and are not reproduced here.)
- the using the first correction model to correct the category probability to obtain the first category probability includes:
- δ is the correction factor coefficient
- The foregoing embodiment is the category probability correction algorithm for a single image acquisition device at a single time: pets appearing together in a scene at the same time add a weight δ to the same-population factor.
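The first correction model's formula is rendered only as an image in the source, so its exact form is not recoverable. The sketch below is one plausible reading of "add a weight δ to the same-population factor", not the patent's formula: pets seen together have their shared category boosted by δ, and the distribution is renormalized.

```python
# Assumed form of the first correction model (same camera, same time).
def correct_same_time_same_camera(probs, shared_category, delta=0.1):
    """Boost the shared category by delta and renormalize."""
    corrected = dict(probs)
    corrected[shared_category] += delta
    total = sum(corrected.values())
    return {c: p / total for c, p in corrected.items()}
```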
- the geographic locations and device identification numbers corresponding to any two acquired pet images are the same, and the acquisition time is different, indicating that the two pet images were acquired by the same image acquisition device at different times.
- Suppose a certain camera c captures two different pets i1 and i2 at different times t1 and t2; then a_i1 ∈ p and a_i2 ∈ p.
- The second correction model is used to correct the category probability to obtain the second category probability as follows:
- δ is the correction factor coefficient
- t is the time
- The above embodiment is the category probability correction algorithm for a single image acquisition device at different times. Pets appearing in the same scene within a short period are given a penalty factor δ_t according to the time interval, and a weight value δ_t · δ is added to the same-population factor; that is, the penalty factor δ_t multiplies the correction factor coefficient δ, and δ_t depends on the interval between the times t.
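As above, the exact formula is lost; the text only says the weight becomes δ_t · δ with δ_t shrinking as the time gap grows. The exponential decay below is one plausible choice, an assumption rather than the patent's function.

```python
import math

def time_penalty(seconds_apart, scale=600.0):
    """delta_t: penalty factor that decays with the time interval (assumed form)."""
    return math.exp(-seconds_apart / scale)

def correct_same_camera_different_time(probs, shared_category,
                                       seconds_apart, delta=0.1):
    """Add the weight delta_t * delta to the shared category, then renormalize."""
    corrected = dict(probs)
    corrected[shared_category] += time_penalty(seconds_apart) * delta
    total = sum(corrected.values())
    return {c: p / total for c, p in corrected.items()}
```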
- When the geographic locations and device identification numbers of any two acquired pet images differ but the acquisition times are the same, the two pet images were acquired by two different image acquisition devices at the same time.
- For example, cameras c1 and c2 capture pets i1, i2 and i3, i4, respectively, at the same time t.
- The third correction model is used to correct the category probability to obtain the third category probability as follows:
- δ is the correction factor coefficient
- l is the distance
- The above embodiment is the category probability correction algorithm for multiple image acquisition devices at the same time, in which a matching algorithm matches i1 and i3 as the same pet (though pets captured at the same time by two cameras that are far apart cannot be the same pet). The correction factor δ_l in this case therefore depends on the distance l.
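For the third case the formula is likewise lost; all the text states is that the same-pet match is discounted by a factor δ_l that depends on camera distance. The inverse-distance form below is an assumption.

```python
def distance_factor(distance_m, scale=100.0):
    """delta_l: shrinks as the two cameras get farther apart (assumed form)."""
    return 1.0 / (1.0 + distance_m / scale)

def correct_multi_camera_same_time(probs, shared_category,
                                   distance_m, delta=0.1):
    """Weight the same-pet match by the distance factor, then renormalize."""
    corrected = dict(probs)
    corrected[shared_category] += distance_factor(distance_m) * delta
    total = sum(corrected.values())
    return {c: p / total for c, p in corrected.items()}
```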
- The corrected category probabilities, pet images, collection information, and identification information can be stored in association. Based on the associated information, the category probabilities of the same category can be retrieved, the pet's activity trajectory can be determined, and the pet's activity area can be determined from the trajectory.
- the determining the pet's activity track based on the corrected category probability includes:
- the activity track of the pet is determined according to the collected information.
- For example, suppose the corrected category probabilities of a1 for Golden Retriever, Samoyed, Husky, German Shepherd, and Malinois are 0.9, 0.1, 0, 0, 0 at t1; 0.9, 0, 0.1, 0, 0 at t2; and 0.8, 0.1, 0.1, 0, 0 at a later time. The highest category probability, 0.9, is used as the target category probability of a1, indicating that a1 belongs to the Golden Retriever category.
- The collection information of all pet images corresponding to a1 is then extracted, and the activity trajectory of a1 is determined from it. Specifically, the geographic location and device identification number in each record, together with the corresponding collection time, determine when and where the pet appeared.
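The two steps above, picking the target category and ordering the pet's records by time, can be sketched as follows; the record layout is illustrative.

```python
# Sketch of the trajectory step: take the highest corrected probability as
# the target category, then sort the pet's records by collection time.
def target_category(corrected_probs):
    """Return the category with the maximum corrected probability."""
    return max(corrected_probs, key=corrected_probs.get)

def activity_track(records, pet_id):
    """Return the pet's records in chronological order."""
    hits = [r for r in records if r["id"] == pet_id]
    return sorted(hits, key=lambda r: r["time"])

print(target_category({"Golden Retriever": 0.9, "Samoyed": 0.1}))  # Golden Retriever
```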
- The above image recognition-based urban pet activity track monitoring method can be used not only to find lost pets, but also to rescue stray pets and to provide a law enforcement basis for prohibiting pets from entering specific areas.
- the image recognition-based urban pet activity track monitoring method described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities.
- This application initializes the category probability of each pet category; acquires pet images and collection information sent by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device; identifies the identification information of the pet in the pet image and stores it in association with the pet image and the collection information; and judges whether the collection information of any two pet images is the same.
- When the geographic location, device identification number, and collection time are all the same, the first correction model is used to update the category probability to obtain the first category probability.
- When the geographic location and device identification number are the same but the collection time differs, the second correction model is used to update the category probability to obtain the second category probability.
- When the collection time is the same but the geographic location and device identification number differ, the third correction model is used to update the category probability to obtain the third category probability.
- The pet's activity trajectory is then determined based on the corrected category probabilities. This application corrects the initialized category probabilities using multiple parameters in the collection information, so that the predicted categories of the pets in the pet images move closer and closer to the true pet categories, especially the category probabilities of pets appearing across different image collection devices.
- the pet image, identification information and collection information are correlated based on the corrected category probability, and the pet’s activity track is determined based on the correlated information.
- the entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate extraction of the feature vector of the pet image by the traditional algorithm.
- Fig. 2 is a structural diagram of a device for monitoring urban pet activity tracks based on image recognition provided in the second embodiment of the present application.
- the device 20 for monitoring urban pet activity tracks based on image recognition may include multiple functional modules composed of computer-readable instruction segments.
- The computer-readable instructions of each program segment in the image recognition-based urban pet activity track monitoring device 20 can be stored in the memory of the terminal and executed by the at least one processor to perform the monitoring of urban pet activity tracks described with reference to Figure 1.
- the image recognition-based urban pet activity track monitoring device 20 can be divided into multiple functional modules according to the functions it performs.
- the functional modules may include: a probability initial module 201, an information acquisition module 202, an identification recognition module 203, an information judgment module 204, a first correction module 205, a second correction module 206, a third correction module 207, and a trajectory determination module 208.
- A module referred to in this application is a series of computer-readable instruction segments that can be executed by at least one processor, that complete fixed functions, and that are stored in a memory. The functions of each module are described in detail below.
- the probability initial module 201 is used to initialize the category probability of each pet category.
- the category probability refers to the probability that a certain pet belongs to a certain category.
- The category probability is initialized first: the category probabilities of all pet categories are assigned the same initial value, under the assumption that a given pet is initially equally likely to belong to each category.
- For example, the pets that may appear in a city are: Golden Retriever, Samoyed, Husky, German Shepherd, Malinois, etc.
- 5 categories can be set correspondingly, and the category probability of each category is 1/5.
- the category probability can be initialized or modified according to actual needs.
- the category probability and the identification information of each category are stored.
- the information acquisition module 202 is configured to acquire pet images and collection information collected by an image collection device, and the collection information includes the geographic location, device identification number, and collection time of the image collection device.
- a plurality of high-definition digital image acquisition devices may be preset to collect images of pets according to relevant policy regulations or actual scene requirements.
- the presetting a plurality of image acquisition devices includes presetting the positions of the plurality of image acquisition devices and the height of the image acquisition devices.
- the image capture device can be installed at the entrance and exit of the park or in an open area.
- the installation position of the image acquisition device is determined, the installation height of the image acquisition device is determined, so that the pet image collected by the image acquisition device is unobstructed, which is convenient for improving the recognition accuracy of the pet image.
- The collection information refers to the information recorded when the image collection device collects the pet image, and may include: the geographic location of the image collection device, the device identification number of the image collection device, and the time when the pet image was collected (hereinafter referred to as the collection time).
- the geographic location may be represented by latitude and longitude coordinates
- the device identification number may be represented by C+digits
- the collection time may be represented by year-month-day-hour-minute-second.
- the identification recognition module 203 is configured to identify the identification information of the pet in the pet image and store the identification information in association with the pet image and the collected information.
- Each pet is assigned unique identification information; that is, identification information and pets are in one-to-one correspondence.
- golden retriever corresponds to identification information a1
- Samoyed corresponds to identification information a2
- Husky corresponds to identification information a3.
- After the identification information corresponding to the pet in the pet image is identified, it can be associated with the pet image, the geographic location of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image was collected, and stored in a preset database.
- For example, an image capture device C located at a certain geographic location L captures a husky at time T.
- If the modules 201-203 above identify the husky's identification information as a3, a record (a3, T, L, C) can be formed and stored associatively. This makes it convenient to later retrieve the other parameters from any one parameter. For example, from a device identification number, one can retrieve the pet images, identification information, geographic location of the image collection device, and collection times associated with that number.
- the identification recognition module 203 identifying the identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information includes:
- the identification information of the pet is determined according to the recognition result.
- The pet identification recognition model is pre-trained. The training process may include: acquiring a plurality of pet images in advance; dividing the pet images and identification information into a training set of a first proportion and a test set of a second proportion, where the first proportion is much larger than the second; inputting the training set into a preset deep neural network for supervised learning to obtain the pet identification recognition model; inputting the test set into the pet identification recognition model to obtain a test pass rate; ending training when the test pass rate is greater than or equal to a preset pass-rate threshold; and, when the test pass rate is below the threshold, re-dividing the training and test sets, retraining the model on the new training set, and retesting the pass rate on the new test set. Since the pet identification recognition model is not the focus of this application, its training process is not elaborated here.
- the input of the pet image into a pre-trained pet identification recognition model includes:
- the target area in the pet image is detected.
- the cropped target area is used as an input image and input into the pre-trained pet identification recognition model.
- the YOLO target detection algorithm can be used to select the area of the pet in the pet image with a detection frame.
- the area selected by the detection frame is the target area. Because the number of pixels in the target area is much smaller than that of the entire pet image, and the target area contains almost only the target pet and no other non-target objects, cropping out the target area as the input image not only improves the efficiency with which the pet identification recognition model identifies the pet's identification information, but also, since no non-target objects interfere within the target area, improves the accuracy of that identification.
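The cropping step can be sketched as below. The detection box is assumed to come from a detector such as YOLO as `(x, y, w, h)` in pixels (the source does not fix a box format), and the image is represented as a plain row-major list of pixel rows.

```python
def crop_target_area(image, box):
    """Return only the pixels inside the detection frame (x, y, w, h)."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

# A 100x100 dummy "image"; the detected pet occupies a 20x30 box.
image = [[0] * 100 for _ in range(100)]
cropped = crop_target_area(image, (10, 40, 20, 30))
# The crop contains far fewer pixels than the full image, which is what
# makes recognition on the target area cheaper and less noisy.
```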
- the information judging module 204 is used to judge whether the collected information of any two pet images is the same.
- any two pet images can be obtained from a preset database, and based on the identification information and collection information associated with the two pet images, it is determined whether the pets in the two pet images belong to the same category, and based on the identification information And collect information to modify the initialized category probability.
- after correction, the probability that a given pet belongs to one category becomes large while the probabilities of the other categories become small. The activity trajectories and activity areas of pets of different categories can then be analyzed based on the corrected category probabilities.
- the first correction module 205 is configured to use the first correction model to correct the category probability to obtain the first category probability when the geographic location, device identification number, and collection time are all the same.
- if the collection information corresponding to any two acquired pet images is the same, that is, the geographic location, device identification number, and collection time are exactly the same, this indicates that the two pet images were acquired by the same image acquisition device at the same time.
- the image acquisition device is represented by c
- the geographic location is represented by l
- the population is represented by p
- the pet identification is represented by a.
- the pet a belonging to the population p is denoted as a ∈ p
- the probability of a ⁇ p is ⁇
- if a certain camera c collects two pets i1 and i2 at a certain time t, then i1 ∈ p1 and i2 ∈ p2, with corresponding category probabilities ε1 and ε2
- that the first correction module 205 uses the first correction model to correct the category probability to obtain the first category probability includes:
- ⁇ is the correction factor coefficient
- the foregoing embodiment is a category probability correction algorithm for a single image acquisition device at a single time: for pets appearing in the same scene at the same time, a weight α is added to the same-population factor.
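The original correction formulas did not survive extraction, so the following is only an assumed form of the idea in the text: two pets captured by the same camera at the same time mutually boost, with weight α, the categories that the other pet's distribution also supports, after which the distribution is re-normalized. The multiplicative boost is an assumption, not the patented formula.

```python
def correct_same_time(p1, p2, alpha=0.2):
    """First-category-probability sketch (assumed form): boost categories the
    co-occurring pet also supports, weighted by alpha, then normalize."""
    boosted = [a * (1.0 + alpha * b) for a, b in zip(p1, p2)]
    total = sum(boosted)
    return [v / total for v in boosted]

# Initialized category probabilities over 5 breeds for pets i1 and i2:
p_i1 = [0.4, 0.3, 0.1, 0.1, 0.1]
p_i2 = [0.5, 0.1, 0.2, 0.1, 0.1]
first = correct_same_time(p_i1, p_i2)
# The category both distributions favor (index 0) gains probability mass.
```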
- the second correction module 206 is configured to use a second correction model to correct the category probability to obtain the second category probability when the geographic location and the device identification number are the same but the collection time is different.
- the geographic locations and device identification numbers corresponding to any two acquired pet images are the same, and the acquisition time is different, indicating that the two pet images were acquired by the same image acquisition device at different times.
- if a certain camera c collects two different pets i1 and i2 at different times t1 and t2, then i1 ∈ p1 and i2 ∈ p2
- the second correction module 206 uses a second correction model to correct the category probability to obtain the second category probability as follows:
- ⁇ is the correction factor coefficient
- t is the time
- the above embodiment is a category probability correction algorithm for a single image acquisition device at different times: pets that appear in the same scene within a short period are given a penalty factor λt according to the time interval, and a weight λt·α is added to the same-population factor. That is, a penalty factor λt is applied to the correction factor coefficient α, where λt depends on the interval between the times t.
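One plausible shape for the time-interval penalty above is an exponential decay: the weight added to the same-population factor is λt·α with λt = exp(−Δt/τ). Both the exponential form and the decay constant τ are illustrative assumptions; the source only says λt is related to the interval.

```python
import math

def time_penalty(dt_seconds, tau=300.0):
    """Penalty factor lambda_t in (0, 1]; shrinks as the interval grows.
    The exponential form and tau are assumptions, not from the source."""
    return math.exp(-dt_seconds / tau)

def correct_different_times(p1, p2, dt_seconds, alpha=0.2, tau=300.0):
    """Second-category-probability sketch: same camera, different times."""
    weight = time_penalty(dt_seconds, tau) * alpha
    boosted = [a * (1.0 + weight * b) for a, b in zip(p1, p2)]
    total = sum(boosted)
    return [v / total for v in boosted]

near = time_penalty(30)    # short gap  -> penalty factor close to 1
far = time_penalty(3600)   # long gap   -> penalty factor close to 0
```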
- the third correction module 207 is configured to use the third correction model to correct the category probability to obtain the third category probability when the collection time is the same but the geographic location and the device identification number are different.
- if the geographic locations and device identification numbers corresponding to any two acquired pet images differ but the collection times are the same, this indicates that the two pet images were acquired by two different image acquisition devices at the same time.
- cameras c1 and c2 collect pets i1, i2 and i3, i4, respectively, at the same time t
- the third correction module 207 uses a third correction model to correct the category probability to obtain the third category probability as follows:
- ⁇ is the correction factor coefficient
- l is the distance
- the above embodiment is a category probability correction algorithm for multiple image acquisition devices at the same time, in which i1 and i3 are matched to the same pet through a matching algorithm (two pets captured at the same instant by cameras far apart cannot be the same pet), so the correction factor λl at this point is related to the distance l.
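The distance-dependent factor λl can be sketched with the constraint stated above: beyond some feasibility radius, two simultaneous detections cannot be the same pet, so the match weight drops to zero. The radius and the linear falloff are illustrative assumptions, not taken from the source.

```python
def distance_factor(distance_m, max_feasible_m=200.0):
    """lambda_l sketch: 1 when co-located, 0 at or beyond the radius at
    which a simultaneous same-pet match is physically impossible."""
    if distance_m >= max_feasible_m:
        return 0.0
    return 1.0 - distance_m / max_feasible_m

def match_weight(distance_m, alpha=0.2, max_feasible_m=200.0):
    """Weight used when fusing simultaneous i1/i3 detections from two cameras."""
    return alpha * distance_factor(distance_m, max_feasible_m)

close = match_weight(10.0)      # plausible same-pet match -> positive weight
too_far = match_weight(5000.0)  # cannot be the same pet   -> weight 0
```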
- the trajectory determination module 208 is configured to determine the activity trajectory of the pet based on the corrected category probability.
- the corrected category probabilities, pet images, collection information, and identification information can be stored in association. Based on this associated information, the category probabilities of the same category can be retrieved, the activity trajectory of the pet can be determined, and the activity area of the pet can be derived from that trajectory.
- the trajectory determination module 208 determining the activity trajectory of the pet based on the corrected category probability includes:
- the activity track of the pet is determined according to the collected information.
- for example, suppose the corrected category probabilities of a1 for Golden Retriever, Samoyed, Husky, German Shepherd, and Malinois are 0.9, 0.1, 0, 0, 0 at t1; 0.9, 0, 0.1, 0, 0 at t2; and 0.8, 0.1, 0.1, 0, 0 at t3. The highest category probability, 0.9, is taken as the target category probability of a1, indicating that a1 is a Golden Retriever.
- the collection information of all pet images corresponding to a1 is then extracted, and the activity trajectory of a1 is determined from it. Specifically, the geographic location and device identification number of the image acquisition device in each collection record, together with the corresponding collection time, determine when and where the pet appeared.
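The trajectory step above can be sketched as: take the single highest corrected category probability as the target category, then time-order that identification's collection records. Record fields and breed labels are hypothetical stand-ins for the source's example.

```python
BREEDS = ["golden_retriever", "samoyed", "husky", "german_shepherd", "malinois"]

def target_category(prob_rows):
    """Pick the breed with the highest corrected probability across all times."""
    best_p, best_i = max(
        (max(row), row.index(max(row))) for row in prob_rows)
    return BREEDS[best_i], best_p

def activity_track(records, ident):
    """Time-ordered (time, location, device) tuples for one identification."""
    mine = [r for r in records if r["id"] == ident]
    return sorted((r["time"], r["location"], r["device"]) for r in mine)

probs_a1 = [
    [0.9, 0.1, 0.0, 0.0, 0.0],  # corrected probabilities at t1
    [0.9, 0.0, 0.1, 0.0, 0.0],  # at t2
    [0.8, 0.1, 0.1, 0.0, 0.0],  # at t3
]
breed, p = target_category(probs_a1)

records = [
    {"id": "a1", "time": 2, "location": "L2", "device": "C2"},
    {"id": "a1", "time": 1, "location": "L1", "device": "C1"},
    {"id": "a2", "time": 1, "location": "L9", "device": "C9"},
]
track = activity_track(records, "a1")  # when and where a1 appeared, in order
```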
- the above-mentioned image-recognition-based urban pet monitoring method can be applied not only to finding lost pets, but also to rescuing stray pets and to providing a law-enforcement basis for prohibiting pets from entering specific areas.
- the urban pet activity track monitoring device based on image recognition described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities.
- This application initializes the category probability of each pet category; acquires pet images and collection information sent by an image collection device, the collection information including the geographic location of the image collection device, the device identification number, and the collection time; identifies the identification information of the pet in each pet image and stores it in association with the pet image and the collection information; and judges whether the collection information of any two pet images is the same. When the geographic location, device identification number, and collection time in the collection information are all the same, the first correction model is used to update the category probability to obtain the first category probability; when the geographic location and device identification number are the same but the collection time differs, the second correction model is used to obtain the second category probability; and when the collection time is the same but the geographic location and device identification number differ, the third correction model is used to obtain the third category probability. The pet's activity trajectory is then determined based on the corrected category probabilities. By correcting the initialized category probabilities with the multiple parameters in the collection information, the predicted category of the pet in each pet image comes ever closer to the pet's real category, especially for pets appearing across different image collection devices.
- the pet image, identification information, and collection information are then correlated on the basis of the corrected category probability, and the pet's activity track is determined from the correlated information.
- the entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate extraction of the feature vector of the pet image by the traditional algorithm.
- the terminal 3 includes a memory 31, at least one processor 32, at least one communication bus 33, and a transceiver 34.
- the structure of the terminal shown in FIG. 3 does not limit the embodiments of the present application; it may be a bus-type or star structure, and the terminal 3 may also include more or fewer hardware or software components, or a different arrangement of components.
- the terminal 3 is a terminal that can automatically perform numerical calculation and/or information processing in accordance with preset or stored instructions. Its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits, programmable gate arrays, digital processors, embedded devices, and the like.
- the terminal 3 may also include client equipment.
- the client equipment includes, but is not limited to, any electronic product that can interact with a user through a keyboard, mouse, remote control, touch panel, or voice control device, for example personal computers, tablets, smart phones, digital cameras, etc.
- terminal 3 is only an example. If other existing or future electronic products can be adapted to this application, they should also be included in the protection scope of this application and included here by reference.
- the memory 31 is used to store computer-readable instructions and various data, such as programs installed in the terminal 3, and enables high-speed, automatic access to programs and data during the operation of the terminal 3.
- the memory 31 includes volatile and non-volatile memory, such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically-erasable programmable read-only memory (EEPROM), and compact disc read-only memory (CD-ROM).
- the computer-readable storage medium may be non-volatile or volatile.
- the at least one processor 32 may be composed of integrated circuits, for example, may be composed of a single packaged integrated circuit, or may be composed of multiple integrated circuits with the same function or different functions, including one Or a combination of multiple central processing units (CPU), microprocessors, digital processing chips, graphics processors, and various control chips.
- the at least one processor 32 is the control core (Control Unit) of the terminal 3. It connects the various components of the entire terminal 3 through various interfaces and lines, and executes the various functions of the terminal 3 and processes data by running or executing the programs or modules stored in the memory 31 and calling the data stored in the memory 31.
- the at least one communication bus 33 is configured to implement connection and communication between the memory 31 and the at least one processor 32 and the like.
- the terminal 3 may also include a power source (such as a battery) for supplying power to the various components. Preferably, the power source may be logically connected to the at least one processor 32 through a power management device, so that functions such as charging, discharging, and power-consumption management are realized through the power management device.
- the power supply may also include one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and any other such components.
- the terminal 3 may also include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
- the above-mentioned integrated unit implemented in the form of a software function module may be stored in a computer readable storage medium.
- the above-mentioned software function module is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a terminal, a network device, etc.) or a processor execute parts of the methods described in the embodiments of the present application.
- the at least one processor 32 can execute the operating device of the terminal 3 and various installed applications, computer-readable instructions, etc., such as the above-mentioned modules.
- the memory 31 stores computer-readable instructions, and the at least one processor 32 can call the computer-readable instructions stored in the memory 31 to perform related functions.
- the various modules described in FIG. 2 are computer-readable instructions stored in the memory 31 and executed by the at least one processor 32, so as to realize the functions of the various modules.
- the memory 31 stores multiple instructions, and the multiple instructions are executed by the at least one processor 32 to implement all or part of the steps in the method described in the present application.
- the disclosed device and method can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the modules is only a logical function division, and there may be other division methods in actual implementation.
- modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Probability & Statistics with Applications (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to the technical field of artificial intelligence, and provides a method and apparatus for monitoring the movement trajectory of a pet in an urban setting based on image recognition, together with a terminal and a storage medium. The method comprises: initializing a category probability for each pet category; acquiring pet images together with the geographic location, device identification number, and collection time of an image collection device; recognizing the identification information of a pet and storing it in association with the pet images; when the geographic locations, device identification numbers, and collection times are all the same, using a first correction model to correct the category probability; when the geographic locations and device identification numbers are the same but the collection times differ, using a second correction model to correct the category probability; when the collection times are the same but the geographic locations and device identification numbers all differ, using a third correction model to correct the category probability; and determining the movement trajectory of the pet based on the corrected category probability. The present invention can be applied in the smart-city field, and the movement trajectory of a pet within a city can be monitored according to the probability.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910829499.X | 2019-09-03 | ||
CN201910829499.XA CN110751675B (zh) | 2019-09-03 | 2019-09-03 | 基于图像识别的城市宠物活动轨迹监测方法及相关设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021043074A1 true WO2021043074A1 (fr) | 2021-03-11 |
Family
ID=69276012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/111880 WO2021043074A1 (fr) | 2019-09-03 | 2020-08-27 | Procédé de surveillance de trajectoire de mouvement d'animal de compagnie dans un cadre urbain basé sur la reconnaissance d'image, et dispositifs associés |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110751675B (fr) |
WO (1) | WO2021043074A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114550490A (zh) * | 2022-02-22 | 2022-05-27 | 北京信路威科技股份有限公司 | 停车场的车位统计方法、系统、计算机设备和存储介质 |
CN117692767A (zh) * | 2024-02-02 | 2024-03-12 | 深圳市积加创新技术有限公司 | 一种基于场景自适应动态分时策略的低功耗监控系统 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110751675B (zh) * | 2019-09-03 | 2023-08-11 | 平安科技(深圳)有限公司 | 基于图像识别的城市宠物活动轨迹监测方法及相关设备 |
CN111354024B (zh) * | 2020-04-10 | 2023-04-21 | 深圳市五元科技有限公司 | 关键目标的行为预测方法、ai服务器及存储介质 |
CN112529020B (zh) * | 2020-12-24 | 2024-05-24 | 携程旅游信息技术(上海)有限公司 | 基于神经网络的动物识别方法、系统、设备及存储介质 |
CN112904778B (zh) * | 2021-02-02 | 2022-04-15 | 东北林业大学 | 一种基于多维信息融合的野生动物智能监测方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9852363B1 (en) * | 2012-09-27 | 2017-12-26 | Google Inc. | Generating labeled images |
CN109934293A (zh) * | 2019-03-15 | 2019-06-25 | 苏州大学 | 图像识别方法、装置、介质及混淆感知卷积神经网络 |
CN109934176A (zh) * | 2019-03-15 | 2019-06-25 | 艾特城信息科技有限公司 | 行人识别系统、识别方法及计算机可读存储介质 |
CN110751675A (zh) * | 2019-09-03 | 2020-02-04 | 平安科技(深圳)有限公司 | 基于图像识别的城市宠物活动轨迹监测方法及相关设备 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10713500B2 (en) * | 2016-09-12 | 2020-07-14 | Kennesaw State University Research And Service Foundation, Inc. | Identification and classification of traffic conflicts using live video images |
CN109376786A (zh) * | 2018-10-31 | 2019-02-22 | 中国科学院深圳先进技术研究院 | 一种图像分类方法、装置、终端设备及可读存储介质 |
CN110163301A (zh) * | 2019-05-31 | 2019-08-23 | 北京金山云网络技术有限公司 | 一种图像的分类方法及装置 |
-
2019
- 2019-09-03 CN CN201910829499.XA patent/CN110751675B/zh active Active
-
2020
- 2020-08-27 WO PCT/CN2020/111880 patent/WO2021043074A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9852363B1 (en) * | 2012-09-27 | 2017-12-26 | Google Inc. | Generating labeled images |
CN109934293A (zh) * | 2019-03-15 | 2019-06-25 | 苏州大学 | 图像识别方法、装置、介质及混淆感知卷积神经网络 |
CN109934176A (zh) * | 2019-03-15 | 2019-06-25 | 艾特城信息科技有限公司 | 行人识别系统、识别方法及计算机可读存储介质 |
CN110751675A (zh) * | 2019-09-03 | 2020-02-04 | 平安科技(深圳)有限公司 | 基于图像识别的城市宠物活动轨迹监测方法及相关设备 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114550490A (zh) * | 2022-02-22 | 2022-05-27 | 北京信路威科技股份有限公司 | 停车场的车位统计方法、系统、计算机设备和存储介质 |
CN114550490B (zh) * | 2022-02-22 | 2023-12-22 | 北京信路威科技股份有限公司 | 停车场的车位统计方法、系统、计算机设备和存储介质 |
CN117692767A (zh) * | 2024-02-02 | 2024-03-12 | 深圳市积加创新技术有限公司 | 一种基于场景自适应动态分时策略的低功耗监控系统 |
CN117692767B (zh) * | 2024-02-02 | 2024-06-11 | 深圳市积加创新技术有限公司 | 一种基于场景自适应动态分时策略的低功耗监控系统 |
Also Published As
Publication number | Publication date |
---|---|
CN110751675B (zh) | 2023-08-11 |
CN110751675A (zh) | 2020-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021043074A1 (fr) | Procédé de surveillance de trajectoire de mouvement d'animal de compagnie dans un cadre urbain basé sur la reconnaissance d'image, et dispositifs associés | |
WO2021043073A1 (fr) | Procédé de surveillance de trajectoire de déplacement d'animal domestique en milieu urbain basé sur la reconnaissance d'image et dispositifs associés | |
US11232327B2 (en) | Smart video surveillance system using a neural network engine | |
JP6488083B2 (ja) | 駐車区画占有率判定のための映像および視覚ベースのアクセス制御のハイブリッド方法およびシステム | |
CN111507989A (zh) | 语义分割模型的训练生成方法、车辆外观检测方法、装置 | |
US11475671B2 (en) | Multiple robots assisted surveillance system | |
CN106791710A (zh) | 目标检测方法、装置和电子设备 | |
US20210056312A1 (en) | Video blocking region selection method and apparatus, electronic device, and system | |
Guzhva et al. | Now you see me: Convolutional neural network based tracker for dairy cows | |
US20230060211A1 (en) | System and Method for Tracking Moving Objects by Video Data | |
CN105844659A (zh) | 运动部件的跟踪方法和装置 | |
US20190096066A1 (en) | System and Method for Segmenting Out Multiple Body Parts | |
CN112836683B (zh) | 用于便携式摄像设备的车牌识别方法、装置、设备和介质 | |
CN114360261B (zh) | 车辆逆行的识别方法、装置、大数据分析平台和介质 | |
CN111985452B (zh) | 一种人员活动轨迹及落脚点的自动生成方法及系统 | |
CN113689475A (zh) | 跨境头轨迹跟踪方法、设备及存储介质 | |
JP2021106330A (ja) | 情報処理装置、情報処理方法、及びプログラム | |
CN115661521A (zh) | 消防栓漏水的检测方法、系统、电子设备及存储介质 | |
CN114038040A (zh) | 机房巡检监督方法、装置、设备 | |
CN113012223A (zh) | 目标流动监测方法、装置、计算机设备和存储介质 | |
Ouseph et al. | Machine Learning Based Smart Parking Management for Intelligent Transportation Systems | |
CN112989892A (zh) | 一种动物监控方法及装置 | |
US20190096045A1 (en) | System and Method for Realizing Increased Granularity in Images of a Dataset | |
CN111666786A (zh) | 图像处理方法、装置、电子设备及存储介质 | |
CN116823892B (zh) | 基于楼宇管控的身份确定方法、装置、设备和介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20860659 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20860659 Country of ref document: EP Kind code of ref document: A1 |