WO2021043074A1 - Urban pet motion trajectory monitoring method based on image recognition, and related devices - Google Patents

Urban pet motion trajectory monitoring method based on image recognition, and related devices Download PDF

Info

Publication number
WO2021043074A1
Authority
WO
WIPO (PCT)
Prior art keywords
pet
category probability
category
image
probability
Prior art date
Application number
PCT/CN2020/111880
Other languages
French (fr)
Chinese (zh)
Inventor
金晨
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021043074A1 publication Critical patent/WO2021043074A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a method, device, terminal and storage medium for monitoring urban pet activity tracks based on image recognition.
  • In the prior art, the activity trajectories of urban pets are tracked mainly through video surveillance, which analyzes footage to identify moving targets and records their movement for later tracking and analysis.
  • However, the inventor realized that most urban pets are cats and dogs, which are relatively active and move quickly.
  • When video surveillance is used to analyze the data collected by multiple cameras, the results are static images that lack temporal continuity. Each camera stores the video it has recorded so far, and as a monitored target moves, its activity track appears within the monitoring ranges of different cameras. The data describing the monitored target's activity track is therefore recorded in different camera files, which makes tracking and analysis of the target very difficult and hampers later tracking and analysis of the pet's trajectory.
  • the first aspect of the present application provides a method for monitoring urban pet activity tracks based on image recognition, the method including:
  • Acquiring pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
  • when the geographic location and the device identification number are the same but the collection time is different, a second correction model is used to correct the category probability to obtain a second category probability;
  • when the collection time is the same but the geographic location and the device identification number are different, a third correction model is used to correct the category probability to obtain a third category probability;
  • the activity trajectory of the pet is determined based on the corrected category probability.
  • a second aspect of the present application provides a device for monitoring urban pet activity tracks based on image recognition, the device comprising:
  • a probability initialization module, used to initialize the category probability of each pet category;
  • An information acquisition module for acquiring pet images and collection information collected by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device;
  • An identification recognition module for identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information;
  • the information judgment module is used to judge whether the collected information of any two pet images is the same
  • the first correction module is configured to use the first correction model to correct the category probability to obtain the first category probability when the geographic location, device identification number, and collection time are all the same;
  • the second correction module is configured to use a second correction model to correct the category probability to obtain the second category probability when the geographic location and the device identification number are the same but the collection time is different;
  • the third correction module is configured to use the third correction model to correct the category probability to obtain the third category probability when the collection time is the same but the geographic location and the device identification number are different;
  • the trajectory determination module is used to determine the activity trajectory of the pet based on the corrected category probability.
  • a third aspect of the present application provides a terminal, the terminal includes a processor, and the processor is configured to implement the following steps when executing computer-readable instructions stored in a memory:
  • Acquiring pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
  • when the geographic location and the device identification number are the same but the collection time is different, a second correction model is used to correct the category probability to obtain a second category probability;
  • when the collection time is the same but the geographic location and the device identification number are different, a third correction model is used to correct the category probability to obtain a third category probability;
  • the activity trajectory of the pet is determined based on the corrected category probability.
  • a fourth aspect of the present application provides a computer-readable storage medium having computer-readable instructions stored on the computer-readable storage medium, and when the computer-readable instructions are executed by a processor, the following steps are implemented:
  • Acquiring pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
  • when the geographic location and the device identification number are the same but the collection time is different, a second correction model is used to correct the category probability to obtain a second category probability;
  • when the collection time is the same but the geographic location and the device identification number are different, a third correction model is used to correct the category probability to obtain a third category probability;
  • the activity trajectory of the pet is determined based on the corrected category probability.
  • the image recognition-based urban pet activity track monitoring method, device, terminal, and storage medium described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities.
  • This application corrects the initialized category probabilities through multiple items of parameter information in the collection information, so that the category assigned to the pet in each pet image comes ever closer to the pet's true category, in particular for the category probabilities of pets that appear in different image collection devices.
  • the pet image, identification information and collection information are correlated based on the corrected category probability, and the pet’s activity track is determined based on the correlated information. The entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate feature vector extraction of pet images by traditional algorithms.
  • FIG. 1 is a flowchart of a method for monitoring urban pet activity tracks based on image recognition provided in Embodiment 1 of the present application.
  • Fig. 2 is a structural diagram of a device for monitoring urban pet activity tracks based on image recognition provided in the second embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a terminal provided in Embodiment 3 of the present application.
  • FIG. 1 is a flowchart of a method for monitoring urban pet activity tracks based on image recognition provided in Embodiment 1 of the present application.
  • The function of urban pet activity track monitoring based on image recognition can be directly integrated on the terminal, or can run in the terminal as a software development kit (SDK).
  • the method for monitoring urban pet activity tracks based on image recognition specifically includes the following steps. According to different needs, the order of the steps in the flowchart can be changed, and some of the steps can be omitted.
  • the category probability refers to the probability that a certain pet belongs to a certain category.
  • The category probability is initialized first, and the category probabilities of all pet categories are assigned the same initial value; that is, a given pet is initially assumed to belong to each category with equal probability.
  • The pets that may appear in the city are, for example: Golden Retriever, Samoyed, Husky, German Shepherd, Belgian Malinois, etc.
  • 5 categories can be set correspondingly, and the category probability of each category is 1/5.
  • the category probability can be initialized or modified according to actual needs.
  • the category probability and the identification information of each category are stored.
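For illustration only, a minimal Python sketch of this initialization; the five breed names follow the example above and the uniform prior is an assumption, not a value prescribed by the application:

```python
# Illustrative sketch: uniform initialization of category probabilities.
# The five breed names follow the example above; any category list could be used.
CATEGORIES = ["Golden Retriever", "Samoyed", "Husky", "German Shepherd", "Belgian Malinois"]

def init_category_probabilities(categories=CATEGORIES):
    """Assign every pet category the same initial probability (1/N)."""
    n = len(categories)
    return {category: 1.0 / n for category in categories}

# Example: each of the 5 categories starts at probability 0.2.
initial_probs = init_category_probabilities()
```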
  • S12 Acquire pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device.
  • a plurality of high-definition digital image acquisition devices may be preset to collect images of pets according to relevant policy regulations or actual scene requirements.
  • the presetting a plurality of image acquisition devices includes presetting the positions of the plurality of image acquisition devices and the height of the image acquisition devices.
  • the image capture device can be installed at the entrance and exit of the park or in an open area.
  • After the installation position of the image acquisition device is determined, its installation height is determined so that the pet images it collects are unobstructed, which helps improve the accuracy of pet image recognition.
  • the collection information refers to the information when the image collection device collects the pet image, and may include: the geographic location of the image collection device, the device identification number of the image collection device, and the time when the pet image was collected (hereinafter referred to as Acquisition time).
  • the geographic location may be represented by latitude and longitude coordinates
  • the device identification number may be represented by C+digits
  • the collection time may be represented by year-month-day-hour-minute-second.
  • S13 Identify identification information of the pet in the pet image and store the identification information in association with the pet image and the collected information.
  • Each pet corresponds to a unique piece of identification information; that is, identification information and pets have a one-to-one correspondence.
  • golden retriever corresponds to identification information a1
  • Samoyed corresponds to identification information a2
  • Husky corresponds to identification information a3.
  • After the identification information corresponding to the pet in the pet image is recognized, it can be stored in a preset database in association with the pet image, the geographic location of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image was collected.
  • For example, suppose an image capture device C located at a certain geographic location L captures a husky at a time T;
  • if the above steps S11-S13 recognize the husky's identification information as a3, a record (a3, T, L, C) can be formed and stored associatively. Any one parameter can then be used to retrieve the other associated parameters; for example, given a device identification number, all pet images captured by that device can be retrieved together with their identification information, the geographic location of the image collection device, and the time each pet image was collected. A minimal sketch of such associative storage is given below.
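A minimal sketch of such associative storage and retrieval, using an in-memory list in place of the preset database; the record fields and example values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PetRecord:
    identification: str   # pet identification information, e.g. "a3"
    collected_at: str     # acquisition time, e.g. "2019-09-03 10:15:30"
    location: tuple       # geographic location as (latitude, longitude)
    device_id: str        # device identification number, e.g. "C001"

records = [PetRecord("a3", "2019-09-03 10:15:30", (22.54, 114.05), "C001")]

def find_by(records, **criteria):
    """Retrieve all records matching any one stored parameter, e.g. device_id."""
    return [r for r in records
            if all(getattr(r, field) == value for field, value in criteria.items())]

# Example: retrieve everything recorded by device C001.
same_device = find_by(records, device_id="C001")
```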
  • the identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information includes:
  • The pet image is input into a pre-trained pet identification recognition model, and the identification information of the pet is determined according to the recognition result.
  • The pet identification recognition model is pre-trained. The training process may include: acquiring a plurality of pet images in advance; dividing the pet images and their identification information into a training set at a first ratio and a test set at a second ratio, where the first ratio is much larger than the second ratio; inputting the training set into a preset deep neural network for supervised learning to obtain the pet identification recognition model; inputting the test set into the pet identification recognition model to obtain a test pass rate; ending training when the test pass rate is greater than or equal to a preset pass-rate threshold; and, when the test pass rate is less than the preset threshold, re-dividing the training set and test set, retraining the pet identification recognition model on the new training set, and re-testing its pass rate on the new test set. Since the pet identification recognition model is not the focus of this application, its training process is not elaborated further here.
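For illustration, a sketch of the described train/test loop; the application trains a deep neural network, so the simple classifier, feature vectors, and pass-rate threshold below are placeholders rather than the disclosed model:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_until_pass(features, labels, pass_threshold=0.9, test_ratio=0.2, max_rounds=10):
    """Re-split and retrain until the test pass rate reaches the preset threshold."""
    model, pass_rate = None, 0.0
    for _ in range(max_rounds):
        x_train, x_test, y_train, y_test = train_test_split(
            features, labels, test_size=test_ratio)
        model = RandomForestClassifier().fit(x_train, y_train)  # stand-in for the deep network
        pass_rate = model.score(x_test, y_test)                 # fraction of correct predictions
        if pass_rate >= pass_threshold:
            break
    return model, pass_rate
```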
  • the input of the pet image into a pre-trained pet identification recognition model includes:
  • the target area in the pet image is detected.
  • the cropped target area is used as the input image and input into the pre-trained pet identification recognition model.
  • the YOLO target detection algorithm can be used to select the area of the pet in the pet image with a detection frame.
  • The area selected by the detection frame is the target area. Because the target area contains far fewer pixels than the entire pet image and almost exclusively contains the pet rather than other, non-target objects, cropping out the target area and using it as the input image of the pet identification recognition model not only improves the efficiency with which the model recognizes pet identification information, but also, since there is no interference from non-target objects in the target area, improves its recognition accuracy. A sketch of this cropping step follows.
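A sketch of cropping the detected target area before identification; detect_pet_box is a placeholder standing in for the YOLO detector, whose configuration is not disclosed here:

```python
from PIL import Image

def detect_pet_box(image):
    """Placeholder for a YOLO-style detector; returns one (x1, y1, x2, y2) box.
    A real implementation would run the trained detection network here."""
    w, h = image.size
    return (int(w * 0.25), int(h * 0.25), int(w * 0.75), int(h * 0.75))

def crop_target_area(image_path):
    """Crop the detection box so only the pet, not the background, is passed on."""
    image = Image.open(image_path)
    box = detect_pet_box(image)
    return image.crop(box)   # this cropped region becomes the model input
```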
  • Any two pet images can be obtained from a preset database; based on the identification information and collection information associated with the two pet images, it is determined whether the pets in the two pet images belong to the same category, and the initialized category probability is corrected based on the identification information and the collection information.
  • After correction, the probability that a given pet belongs to one particular category becomes large while its probabilities for the other categories become small; the activity trajectories and activity areas of pets of different categories can later be analyzed based on the corrected category probabilities.
  • When the collection information corresponding to any two acquired pet images is the same, that is, the geographic location, device identification number, and collection time are exactly the same, the two pet images were acquired by the same image acquisition device at the same time.
  • the image acquisition device is represented by c
  • the geographic location is represented by l
  • the population is represented by p
  • the pet identification is represented by a.
  • A pet a belonging to the population p is denoted a ∈ p, and the probability of a ∈ p is the category probability of a for the population p.
  • If a certain camera c collects two pets i1 and i2 at a certain time t, then i1 and i2 each belong to some population, each with a corresponding category probability.
  • Using the first correction model to correct the category probability to obtain the first category probability involves a correction factor coefficient. The foregoing embodiment is a category probability correction algorithm for a single image acquisition device at a single time: when pets appear in the same scene at the same time, a weight given by the correction factor coefficient is added to the same-population factor, as illustrated in the sketch below.
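An illustrative sketch of such an update; the weight delta stands in for the correction factor coefficient, and the multiplicative overlap term is an assumption, since the exact form of the first correction model is defined in the original application and not reproduced here:

```python
import numpy as np

def normalize(p):
    """Clip negatives and rescale so the category probabilities sum to 1."""
    p = np.clip(p, 0.0, None)
    return p / p.sum()

def first_correction(prob_i1, prob_i2, delta=0.1):
    """Same device, same time: boost the categories that both detections share."""
    overlap = prob_i1 * prob_i2                # agreement on a common population
    p1 = normalize(prob_i1 + delta * overlap)  # add the weight to the same-population factor
    p2 = normalize(prob_i2 + delta * overlap)
    return p1, p2
```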
  • When the geographic locations and device identification numbers corresponding to any two acquired pet images are the same but the acquisition times are different, the two pet images were acquired by the same image acquisition device at different times.
  • For example, a certain camera c collects two different pets i1 and i2 at different times t1 and t2, each with a corresponding category probability.
  • The second correction model corrects the category probability to obtain the second category probability using a correction factor coefficient and the time t. The above embodiment is a category probability correction algorithm for a single image acquisition device at different times: pets that appear in the same scene within a short period are given a penalty factor according to the time interval, and the weight added to the same-population factor is the penalty factor multiplied by the correction factor coefficient; that is, the correction factor coefficient is scaled by a penalty factor that depends on the interval between the acquisition times. A sketch of such an update follows.
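A sketch of the time-penalized variant; the exponential decay and the time constant tau are assumptions standing in for the penalty factor, which the application only states is related to the time interval:

```python
import numpy as np

def second_correction(prob_i1, prob_i2, t1, t2, delta=0.1, tau=300.0):
    """Same device, different times: the boost is damped as the time gap grows.

    t1, t2: acquisition times in seconds; tau: illustrative time constant.
    """
    penalty = np.exp(-abs(t2 - t1) / tau)      # illustrative penalty factor for the interval
    overlap = prob_i1 * prob_i2                # agreement on a common population
    p1 = prob_i1 + penalty * delta * overlap
    p2 = prob_i2 + penalty * delta * overlap
    return p1 / p1.sum(), p2 / p2.sum()
```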
  • When the geographic locations and device identification numbers corresponding to any two acquired pet images are different but the acquisition time is the same, the two pet images were acquired by two different image acquisition devices at the same time.
  • For example, cameras c1 and c2 collect pets i1, i2 and i3, i4, respectively, at the same time t.
  • The category probability is corrected by using the third correction model to obtain the third category probability, which involves a correction factor coefficient and the distance l between the devices.
  • The above embodiment is a category probability correction algorithm for multiple image acquisition devices at the same time, in which i1 and i3 are matched to the same pet through a matching algorithm (pets captured at the same moment by two cameras that are far apart cannot be the same pet), so the correction factor in this case is related to the distance l. A sketch of such an update follows.
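A sketch of the distance-dependent variant; max_range and the linear falloff are assumptions, since the application only states that the correction factor is related to the distance l between the devices:

```python
import numpy as np

def third_correction(prob_i1, prob_i3, distance_m, delta=0.1, max_range=200.0):
    """Different devices, same time: no boost if the cameras are too far apart
    for the two detections to be the same pet; otherwise the boost shrinks
    with the distance between the devices."""
    if distance_m > max_range:
        return prob_i1, prob_i3
    factor = delta * (1.0 - distance_m / max_range)
    overlap = prob_i1 * prob_i3                # agreement on a common population
    p1 = prob_i1 + factor * overlap
    p3 = prob_i3 + factor * overlap
    return p1 / p1.sum(), p3 / p3.sum()
```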
  • The corrected category probabilities, pet images, collection information, and identification information can be stored in association; based on the associated stored information, the category probabilities of the same category can be obtained, the activity trajectory of the pet can be determined, and the activity area of the pet can be determined according to the activity trajectory.
  • the determining the pet's activity track based on the corrected category probability includes:
  • the target category of the pet is determined according to the largest corrected category probability, the collection information of all pet images associated with the pet's identification information is extracted, and the activity track of the pet is determined according to the collected information.
  • For example, suppose the corrected category probabilities of a1 for Golden Retriever, Samoyed, Husky, German Shepherd, and Belgian Malinois are 0.9, 0.1, 0, 0, 0 at t1; 0.9, 0, 0.1, 0, 0 at t2; and 0.8, 0.1, 0.1, 0, 0 at a later time. The category probability 0.9 is then taken as the target category probability of a1, indicating that a1 belongs to the Golden Retriever category.
  • The collection information of all pet images corresponding to a1 is extracted, and the activity trajectory of a1 is then determined according to the extracted collection information. Specifically, the geographic location and device identification number of the image acquisition device in the collection information, together with the corresponding collection time, determine when and where the pet appeared. A sketch of this step follows.
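A sketch of this last step: take the largest corrected category probability as the target category, then order the associated collection records by acquisition time to form the trajectory (record fields follow the earlier storage sketch and are illustrative):

```python
def target_category(corrected_probs):
    """corrected_probs: dict mapping category name -> corrected probability."""
    category = max(corrected_probs, key=corrected_probs.get)
    return category, corrected_probs[category]

def activity_trajectory(records, identification):
    """Sort the collection info of all images of one pet by acquisition time."""
    hits = [r for r in records if r.identification == identification]
    hits.sort(key=lambda r: r.collected_at)
    return [(r.collected_at, r.location, r.device_id) for r in hits]

# Example: {"Golden Retriever": 0.9, "Samoyed": 0.1, ...} -> ("Golden Retriever", 0.9)
```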
  • The above urban pet activity track monitoring method based on image recognition can be used not only to find lost pets, but also to rescue stray pets and to provide a law enforcement basis for prohibiting pets from entering specific areas.
  • the image recognition-based urban pet activity track monitoring method described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities.
  • This application initializes the category probability of each pet category; acquires pet images and collection information sent by an image collection device, the collection information including the geographic location of the image collection device, the device identification number, and the collection time; identifies the identification information of the pet in the pet image and stores the identification information in association with the pet image and the collection information; and judges whether the collection information of any two pet images is the same. When the geographic location, device identification number, and collection time in the collection information are all the same, a first correction model is used to update the category probability to obtain a first category probability; when the geographic location and device identification number are the same but the collection time is different, a second correction model is used to update the category probability to obtain a second category probability; when the collection time is the same but the geographic location and device identification number are different, a third correction model is used to update the category probability to obtain a third category probability; and the activity trajectory of the pet is determined based on the corrected category probabilities. This application corrects the initialized category probabilities through multiple items of parameter information in the collection information, so that the category assigned to the pet in each pet image comes ever closer to the pet's true category, in particular for the category probabilities of pets that appear in different image collection devices.
  • the pet image, identification information and collection information are correlated based on the corrected category probability, and the pet’s activity track is determined based on the correlated information.
  • the entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate extraction of the feature vector of the pet image by the traditional algorithm.
  • Fig. 2 is a structural diagram of a device for monitoring urban pet activity tracks based on image recognition provided in the second embodiment of the present application.
  • the device 20 for monitoring urban pet activity tracks based on image recognition may include multiple functional modules composed of computer-readable instruction segments.
  • The computer-readable instructions of each program segment in the image recognition-based urban pet activity track monitoring device 20 can be stored in the memory of the terminal and executed by at least one processor to perform the image recognition-based monitoring of urban pet activity tracks (described with reference to Figure 1).
  • the image recognition-based urban pet activity track monitoring device 20 can be divided into multiple functional modules according to the functions it performs.
  • the functional modules may include: a probability initialization module 201, an information acquisition module 202, an identification recognition module 203, an information judgment module 204, a first correction module 205, a second correction module 206, a third correction module 207, and a trajectory determination module 208.
  • A module referred to in this application is a series of computer-readable instruction segments that can be executed by at least one processor, can complete a fixed function, and are stored in a memory. The functions of each module in this embodiment will be described in detail in subsequent embodiments.
  • The probability initialization module 201 is used to initialize the category probability of each pet category.
  • the category probability refers to the probability that a certain pet belongs to a certain category.
  • The category probability is initialized first, and the category probabilities of all pet categories are assigned the same initial value; that is, a given pet is initially assumed to belong to each category with equal probability.
  • The pets that may appear in the city are, for example: Golden Retriever, Samoyed, Husky, German Shepherd, Belgian Malinois, etc.
  • 5 categories can be set correspondingly, and the category probability of each category is 1/5.
  • the category probability can be initialized or modified according to actual needs.
  • the category probability and the identification information of each category are stored.
  • the information acquisition module 202 is configured to acquire pet images and collection information collected by an image collection device, and the collection information includes the geographic location, device identification number, and collection time of the image collection device.
  • a plurality of high-definition digital image acquisition devices may be preset to collect images of pets according to relevant policy regulations or actual scene requirements.
  • the presetting a plurality of image acquisition devices includes presetting the positions of the plurality of image acquisition devices and the height of the image acquisition devices.
  • the image capture device can be installed at the entrance and exit of the park or in an open area.
  • After the installation position of the image acquisition device is determined, its installation height is determined so that the pet images it collects are unobstructed, which helps improve the accuracy of pet image recognition.
  • the collection information refers to the information when the image collection device collects the pet image, and may include: the geographic location of the image collection device, the device identification number of the image collection device, and the time when the pet image was collected (hereinafter referred to as Acquisition time).
  • the geographic location may be represented by latitude and longitude coordinates
  • the device identification number may be represented by C+digits
  • the collection time may be represented by year-month-day-hour-minute-second.
  • the identification recognition module 203 is configured to identify the identification information of the pet in the pet image and store the identification information in association with the pet image and the collected information.
  • Each pet corresponds to a unique piece of identification information; that is, identification information and pets have a one-to-one correspondence.
  • golden retriever corresponds to identification information a1
  • Samoyed corresponds to identification information a2
  • Husky corresponds to identification information a3.
  • After the identification information corresponding to the pet in the pet image is recognized, it can be stored in a preset database in association with the pet image, the geographic location of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image was collected.
  • For example, suppose an image capture device C located at a certain geographic location L captures a husky at a time T;
  • if the above modules 201-203 recognize the husky's identification information as a3, a record (a3, T, L, C) can be formed and stored associatively. Any one parameter can then be used to retrieve the other associated parameters; for example, given a device identification number, all pet images captured by that device can be retrieved together with their identification information, the geographic location of the image collection device, and the time each pet image was collected.
  • the identification recognition module 203 identifying the identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information includes:
  • The pet image is input into a pre-trained pet identification recognition model, and the identification information of the pet is determined according to the recognition result.
  • The pet identification recognition model is pre-trained. The training process may include: acquiring a plurality of pet images in advance; dividing the pet images and their identification information into a training set at a first ratio and a test set at a second ratio, where the first ratio is much larger than the second ratio; inputting the training set into a preset deep neural network for supervised learning to obtain the pet identification recognition model; inputting the test set into the pet identification recognition model to obtain a test pass rate; ending training when the test pass rate is greater than or equal to a preset pass-rate threshold; and, when the test pass rate is less than the preset threshold, re-dividing the training set and test set, retraining the pet identification recognition model on the new training set, and re-testing its pass rate on the new test set. Since the pet identification recognition model is not the focus of this application, its training process is not elaborated further here.
  • the input of the pet image into a pre-trained pet identification recognition model includes:
  • the target area in the pet image is detected.
  • the cropped target area is used as the input image and input into the pre-trained pet identification recognition model.
  • the YOLO target detection algorithm can be used to select the area of the pet in the pet image with a detection frame.
  • The area selected by the detection frame is the target area. Because the target area contains far fewer pixels than the entire pet image and almost exclusively contains the pet rather than other, non-target objects, cropping out the target area and using it as the input image of the pet identification recognition model not only improves the efficiency with which the model recognizes pet identification information, but also, since there is no interference from non-target objects in the target area, improves its recognition accuracy.
  • the information judging module 204 is used to judge whether the collected information of any two pet images is the same.
  • Any two pet images can be obtained from a preset database; based on the identification information and collection information associated with the two pet images, it is determined whether the pets in the two pet images belong to the same category, and the initialized category probability is corrected based on the identification information and the collection information.
  • After correction, the probability that a given pet belongs to one particular category becomes large while its probabilities for the other categories become small; the activity trajectories and activity areas of pets of different categories can later be analyzed based on the corrected category probabilities.
  • the first correction module 205 is configured to use the first correction model to correct the category probability to obtain the first category probability when the geographic location, device identification number, and collection time are all the same.
  • When the collection information corresponding to any two acquired pet images is the same, that is, the geographic location, device identification number, and collection time are exactly the same, the two pet images were acquired by the same image acquisition device at the same time.
  • the image acquisition device is represented by c
  • the geographic location is represented by l
  • the population is represented by p
  • the pet identification is represented by a.
  • A pet a belonging to the population p is denoted a ∈ p, and the probability of a ∈ p is the category probability of a for the population p.
  • If a certain camera c collects two pets i1 and i2 at a certain time t, then i1 and i2 each belong to some population, each with a corresponding category probability.
  • The first correction module 205 using the first correction model to correct the category probability to obtain the first category probability involves a correction factor coefficient. The foregoing embodiment is a category probability correction algorithm for a single image acquisition device at a single time: when pets appear in the same scene at the same time, a weight given by the correction factor coefficient is added to the same-population factor.
  • the second correction module 206 is configured to use a second correction model to correct the category probability to obtain the second category probability when the geographic location and the device identification number are the same but the collection time is different.
  • When the geographic locations and device identification numbers corresponding to any two acquired pet images are the same but the acquisition times are different, the two pet images were acquired by the same image acquisition device at different times.
  • For example, a certain camera c collects two different pets i1 and i2 at different times t1 and t2, each with a corresponding category probability.
  • The second correction module 206 uses the second correction model to correct the category probability to obtain the second category probability using a correction factor coefficient and the time t. The above embodiment is a category probability correction algorithm for a single image acquisition device at different times: pets that appear in the same scene within a short period are given a penalty factor according to the time interval, and the weight added to the same-population factor is the penalty factor multiplied by the correction factor coefficient; that is, the correction factor coefficient is scaled by a penalty factor that depends on the interval between the acquisition times.
  • the third correction module 207 is configured to use the third correction model to correct the category probability to obtain the third category probability when the collection time is the same but the geographic location and the device identification number are different.
  • When the geographic locations and device identification numbers corresponding to any two acquired pet images are different but the acquisition time is the same, the two pet images were acquired by two different image acquisition devices at the same time.
  • For example, cameras c1 and c2 collect pets i1, i2 and i3, i4, respectively, at the same time t.
  • the third correction module 207 uses a third correction model to correct the category probability to obtain the third category probability as follows:
  • A correction factor coefficient and the distance l between the devices are used.
  • The above embodiment is a category probability correction algorithm for multiple image acquisition devices at the same time, in which i1 and i3 are matched to the same pet through a matching algorithm (pets captured at the same moment by two cameras that are far apart cannot be the same pet), so the correction factor in this case is related to the distance l.
  • the trajectory determination module 208 is configured to determine the activity trajectory of the pet based on the corrected category probability.
  • The corrected category probabilities, pet images, collection information, and identification information can be stored in association; based on the associated stored information, the category probabilities of the same category can be obtained, the activity trajectory of the pet can be determined, and the activity area of the pet can be determined according to the activity trajectory.
  • the trajectory determination module 208 determining the activity trajectory of the pet based on the corrected category probability includes:
  • the target category of the pet is determined according to the largest corrected category probability, the collection information of all pet images associated with the pet's identification information is extracted, and the activity track of the pet is determined according to the collected information.
  • For example, suppose the corrected category probabilities of a1 for Golden Retriever, Samoyed, Husky, German Shepherd, and Belgian Malinois are 0.9, 0.1, 0, 0, 0 at t1; 0.9, 0, 0.1, 0, 0 at t2; and 0.8, 0.1, 0.1, 0, 0 at a later time. The category probability 0.9 is then taken as the target category probability of a1, indicating that a1 belongs to the Golden Retriever category.
  • The collection information of all pet images corresponding to a1 is extracted, and the activity trajectory of a1 is then determined according to the extracted collection information. Specifically, the geographic location and device identification number of the image acquisition device in the collection information, together with the corresponding collection time, determine when and where the pet appeared.
  • The above urban pet activity track monitoring based on image recognition can be applied not only to finding lost pets, but also to rescuing stray pets and to providing a law enforcement basis for prohibiting pets from entering specific areas.
  • the urban pet activity track monitoring device based on image recognition described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities.
  • This application initializes the category probability of each pet category; acquires pet images and collection information sent by an image collection device, the collection information including the geographic location of the image collection device, the device identification number, and the collection time; identifies the identification information of the pet in the pet image and stores the identification information in association with the pet image and the collection information; and judges whether the collection information of any two pet images is the same. When the geographic location, device identification number, and collection time in the collection information are all the same, a first correction model is used to update the category probability to obtain a first category probability; when the geographic location and device identification number are the same but the collection time is different, a second correction model is used to update the category probability to obtain a second category probability; when the collection time is the same but the geographic location and device identification number are different, a third correction model is used to update the category probability to obtain a third category probability; and the activity trajectory of the pet is determined based on the corrected category probabilities. This application corrects the initialized category probabilities through multiple items of parameter information in the collection information, so that the category assigned to the pet in each pet image comes ever closer to the pet's true category, in particular for the category probabilities of pets that appear in different image collection devices.
  • the pet image, identification information and collection information are correlated based on the corrected category probability, and the pet’s activity track is determined based on the correlated information.
  • the entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate extraction of the feature vector of the pet image by the traditional algorithm.
  • the terminal 3 includes a memory 31, at least one processor 32, at least one communication bus 33, and a transceiver 34.
  • The structure of the terminal shown in FIG. 3 does not constitute a limitation of the embodiments of the present application. It may be a bus-type structure or a star structure, and the terminal 3 may also include more or less hardware or software than shown, or a different arrangement of components.
  • the terminal 3 is a terminal that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions.
  • Its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits, programmable gate arrays, digital processors, embedded devices, etc.
  • the terminal 3 may also include client equipment.
  • The client equipment includes, but is not limited to, any electronic product that can interact with the client through a keyboard, a mouse, a remote control, a touch panel, or a voice control device, for example, personal computers, tablet computers, smart phones, digital cameras, etc.
  • terminal 3 is only an example. If other existing or future electronic products can be adapted to this application, they should also be included in the protection scope of this application and included here by reference.
  • The memory 31 is used to store computer-readable instructions and various data, such as the devices installed in the terminal 3, and to realize high-speed, automatic access to programs or data during the operation of the terminal 3.
  • The memory 31 includes volatile and non-volatile memory, such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), and compact disc read-only memory (CD-ROM).
  • the computer-readable storage medium may be non-volatile or volatile.
  • the at least one processor 32 may be composed of integrated circuits, for example, may be composed of a single packaged integrated circuit, or may be composed of multiple integrated circuits with the same function or different functions, including one Or a combination of multiple central processing units (CPU), microprocessors, digital processing chips, graphics processors, and various control chips.
  • the at least one processor 32 is the control core (Control Unit) of the terminal 3.
  • It uses various interfaces and lines to connect the various components of the entire terminal 3, and executes the various functions of the terminal 3 and processes data by running or executing the programs or modules stored in the memory 31 and calling the data stored in the memory 31.
  • the at least one communication bus 33 is configured to implement connection and communication between the memory 31 and the at least one processor 32 and the like.
  • the terminal 3 may also include a power source (such as a battery) for supplying power to various components.
  • The power source may be logically connected to the at least one processor 32 through a power management device, so that functions such as charging, discharging, and power consumption management are implemented through the power management device.
  • the power supply may also include any components such as one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, and power status indicators.
  • the terminal 3 may also include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • the above-mentioned integrated unit implemented in the form of a software function module may be stored in a computer readable storage medium.
  • The above-mentioned software function module is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a terminal, or a network device, etc.) or a processor to execute parts of the methods described in the embodiments of the present application.
  • the at least one processor 32 can execute the operating device of the terminal 3 and various installed applications, computer-readable instructions, etc., such as the above-mentioned modules.
  • the memory 31 stores computer-readable instructions, and the at least one processor 32 can call the computer-readable instructions stored in the memory 31 to perform related functions.
  • the various modules described in FIG. 2 are computer-readable instructions stored in the memory 31 and executed by the at least one processor 32, so as to realize the functions of the various modules.
  • the memory 31 stores multiple instructions, and the multiple instructions are executed by the at least one processor 32 to implement all or part of the steps in the method described in the present application.
  • the disclosed device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division, and there may be other division methods in actual implementation.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to the technical field of artificial intelligence. Provided are an urban pet motion trajectory monitoring method and apparatus based on image recognition, a terminal, and a storage medium. The method comprises: initializing a category probability of a pet category; acquiring pet images and a geographical position, device identification number and collection time of an image collection device; recognizing identification information of a pet, and storing same and the pet images in an associated manner; when geographical positions, device identification numbers and collection times are all the same, using a first correction model to correct the category probability; when the geographical positions and the device identification numbers are the same, but the collection times are different, using a second correction model to correct the category probability; when the collection times are the same, but the geographical positions and the device identification numbers are all different, using a third correction model to correct the category probability; and determining a motion trajectory of the pet on the basis of the corrected category probability. The present application can be applied to the field of smart city, and the motion trajectory of the pet in a city can be monitored on the basis of probability.

Description

Urban pet activity track monitoring method and related equipment based on image recognition
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on September 3, 2019, with application number 201910829499.X and the invention title "Urban pet activity track monitoring method and related equipment based on image recognition", the entire contents of which are incorporated into this application by reference.
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to a method, device, terminal and storage medium for monitoring urban pet activity tracks based on image recognition.
Background
In recent years, with the improvement of living standards, more and more urban residents keep pets. While people enjoy the material and emotional satisfaction that pets bring, they should also treat pets well and promote harmonious coexistence between humans and pets, which is also in line with the concept of building smart cities.
In the prior art, the activity trajectories of urban pets are tracked mainly through video surveillance, which analyzes footage to identify moving targets and records their movement for tracking and analysis. However, the inventor realized that most pets are cats and dogs, which are relatively active and move quickly, and that analyzing the data collected by multiple cameras through video surveillance yields static images that lack temporal continuity. Each camera stores the video it has recorded so far, and as a monitored target moves, its activity track appears within the monitoring ranges of different cameras, so the data describing the target's activity track is recorded in different camera files. This makes tracking and analysis of the target very difficult and hampers later tracking and analysis of the pet's trajectory.
Therefore, it is necessary to provide a new solution for monitoring the activity areas of urban pets.
Summary of the Invention
In view of the above, it is necessary to provide a method, device, terminal and storage medium for monitoring urban pet activity tracks based on image recognition, which can monitor the activity tracks of pets in a city based on probability.
The first aspect of the present application provides a method for monitoring urban pet activity tracks based on image recognition, the method including:
initializing the category probability of each pet category;
acquiring pet images and collection information collected by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device;
identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collection information;
judging whether the collection information of any two pet images is the same;
when the geographic location, device identification number, and collection time are all the same, using a first correction model to correct the category probability to obtain a first category probability;
when the geographic location and device identification number are the same but the collection time is different, using a second correction model to correct the category probability to obtain a second category probability;
when the collection time is the same but the geographic location and device identification number are different, using a third correction model to correct the category probability to obtain a third category probability;
determining the activity trajectory of the pet based on the corrected category probability.
The second aspect of the present application provides a device for monitoring urban pet activity tracks based on image recognition, the device including:
a probability initialization module, configured to initialize the category probability of each pet category;
an information acquisition module, configured to acquire pet images and collection information collected by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device;
an identification recognition module, configured to identify identification information of the pet in the pet image and store the identification information in association with the pet image and the collection information;
an information judgment module, configured to judge whether the collection information of any two pet images is the same;
a first correction module, configured to, when the geographic location, device identification number, and collection time are all the same, use a first correction model to correct the category probability to obtain a first category probability;
a second correction module, configured to, when the geographic location and device identification number are the same but the collection time is different, use a second correction model to correct the category probability to obtain a second category probability;
a third correction module, configured to, when the collection time is the same but the geographic location and device identification number are different, use a third correction model to correct the category probability to obtain a third category probability;
a trajectory determination module, configured to determine the activity trajectory of the pet based on the corrected category probability.
The third aspect of the present application provides a terminal, the terminal including a processor, where the processor is configured to implement the following steps when executing computer-readable instructions stored in a memory:
initializing the category probability of each pet category;
acquiring pet images and collection information collected by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device;
identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collection information;
judging whether the collection information of any two pet images is the same;
when the geographic location, device identification number, and collection time are all the same, using a first correction model to correct the category probability to obtain a first category probability;
when the geographic location and device identification number are the same but the collection time is different, using a second correction model to correct the category probability to obtain a second category probability;
when the collection time is the same but the geographic location and device identification number are different, using a third correction model to correct the category probability to obtain a third category probability;
determining the activity trajectory of the pet based on the corrected category probability.
The fourth aspect of the present application provides a computer-readable storage medium having computer-readable instructions stored thereon, where the computer-readable instructions, when executed by a processor, implement the following steps:
initializing the category probability of each pet category;
acquiring pet images and collection information collected by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device.
识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储;Identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information;
判断任意两个宠物图像的采集信息是否相同;Determine whether the collected information of any two pet images is the same;
当所述地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行修正得到第一类别概率;When the geographic location, device identification number, and collection time are all the same, use the first correction model to correct the category probability to obtain the first category probability;
当所述地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行修正得到第二类别概率;When the geographic location and the device identification number are the same, but the collection time is different, the second correction model is used to correct the category probability to obtain the second category probability;
当所述采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行修正得到第三类别概率;When the collection time is the same, but the geographic location and the device identification number are not the same, the third correction model is used to correct the category probability to obtain the third category probability;
基于修正后的类别概率确定所述宠物的活动轨迹。The activity trajectory of the pet is determined based on the corrected category probability.
综上所述,本申请所述的基于图像识别的城市宠物活动轨迹监测方法、装置、终端及存储介质,可应用在智慧宠物的管理中,从而推动智慧城市的发展。本申请通过采集信息中的多个参数信息对初始化的类别概率进行修正,使得宠物图像中的宠物越来越接近真实的宠物类别,尤其是对于出现在不同图像采集设备中的宠物的类别概率进行了修正,最后基于修正后的类别概率关联宠物图像、标识信息和采集信息,并基于关联后的信息确定宠物的活动轨迹。整个过程无需识别出宠物的具体类别,能够避免传统算法提取宠物图像的特征向量不准确的问题。In summary, the image recognition-based urban pet activity track monitoring method, device, terminal, and storage medium described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities. This application corrects the initialized category probabilities through multiple parameter information in the collected information, so that the pets in the pet images are getting closer and closer to the real pet categories, especially for the category probabilities of pets appearing in different image collection devices. Finally, the pet image, identification information and collection information are correlated based on the corrected category probability, and the pet’s activity track is determined based on the correlated information. The entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate feature vector extraction of pet images by traditional algorithms.
附图说明Description of the drawings
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据提供的附图获得其他的附图。In order to more clearly describe the technical solutions in the embodiments of the present application or the prior art, the following will briefly introduce the drawings that need to be used in the description of the embodiments or the prior art. Obviously, the drawings in the following description are only These are the embodiments of the present application. For those of ordinary skill in the art, other drawings can be obtained based on the provided drawings without creative work.
图1是本申请实施例一提供的基于图像识别的城市宠物活动轨迹监测方法的流程图。FIG. 1 is a flowchart of a method for monitoring urban pet activity tracks based on image recognition provided in Embodiment 1 of the present application.
图2是本申请实施例二提供的基于图像识别的城市宠物活动轨迹监测装置的结构图。Fig. 2 is a structural diagram of a device for monitoring urban pet activity tracks based on image recognition provided in the second embodiment of the present application.
图3是本申请实施例三提供的终端的结构示意图。FIG. 3 is a schematic structural diagram of a terminal provided in Embodiment 3 of the present application.
如下具体实施方式将结合上述附图进一步说明本申请。The following specific embodiments will further illustrate this application in conjunction with the above-mentioned drawings.
具体实施方式detailed description
为了能够更清楚地理解本申请的上述目的、特征和优点,下面结合附图和具体实施例对本申请进行详细描述。需要说明的是,在不冲突的情况下,本申请的实施例及实施例中的特征可以相互组合。In order to be able to understand the above objectives, features and advantages of the application more clearly, the application will be described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the application and the features in the embodiments can be combined with each other if there is no conflict.
在下面的描述中阐述了很多具体细节以便于充分理解本申请,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。In the following description, many specific details are set forth in order to fully understand the present application, and the described embodiments are only a part of the embodiments of the present application, rather than all the embodiments. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of this application.
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请。Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of this application. The terminology used in the specification of the application herein is only for the purpose of describing specific embodiments, and is not intended to limit the application.
实施例一Example one
图1是本申请实施例一提供的基于图像识别的城市宠物活动轨迹监测方法的流程图。FIG. 1 is a flowchart of a method for monitoring urban pet activity tracks based on image recognition provided in Embodiment 1 of the present application.
在本实施例中，对于需要进行基于图像识别的城市宠物活动轨迹监测的终端，可以直接在终端上集成本申请的方法所提供的基于图像识别的城市宠物活动轨迹监测的功能，或者以软件开发工具包(Software Development Kit, SDK)的形式运行在终端中。In this embodiment, for a terminal that needs to perform image-recognition-based urban pet activity track monitoring, the monitoring function provided by the method of this application can be integrated directly on the terminal, or run in the terminal in the form of a Software Development Kit (SDK).
如图1所示,所述基于图像识别的城市宠物活动轨迹监测方法具体包括以下步骤,根据不同的需求,该流程图中步骤的顺序可以改变,某些可以省略。As shown in FIG. 1, the method for monitoring urban pet activity tracks based on image recognition specifically includes the following steps. According to different needs, the order of the steps in the flowchart can be changed, and some of the steps can be omitted.
S11,初始化每一个宠物类别的类别概率。S11, initialize the category probability of each pet category.
本实施例中,所述类别概率是指某个宠物属于某个类别的几率,先对类别概率进行初始化,对于所有的宠物类别的类别概率赋予相同的初始值,假设某个宠物属于每个类别的初始类别概率一样。In this embodiment, the category probability refers to the probability that a certain pet belongs to a certain category. The category probability is initialized first, and the category probabilities of all pet categories are assigned the same initial value, assuming that a certain pet belongs to each category The initial category probability is the same.
可以对城市中可能出现的宠物的种类进行枚举,然后基于枚举出来的种类初始化类别概率,使得每个类别概率相同且总和为1。It is possible to enumerate the types of pets that may appear in the city, and then initialize the category probabilities based on the enumerated categories, so that the probability of each category is the same and the total is 1.
示例性的，假设城市中可能出现的宠物为：金毛、萨摩耶、哈士奇、德牧、马犬等，则可以对应设置5个类别，每个类别的类别概率均为1/5。类别概率可以根据实际需求进行初始化或修改。Exemplarily, suppose the pets that may appear in the city are Golden Retriever, Samoyed, Husky, German Shepherd, Malinois, etc.; then 5 categories can be set correspondingly, and the category probability of each category is 1/5. The category probabilities can be initialized or modified according to actual needs.
初始化每个类别的类别概率之后,存储所述类别概率及每个类别的标识信息。After initializing the category probability of each category, the category probability and the identification information of each category are stored.
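为便于理解上述初始化方式，下面给出一个最简的示意性实现（假设用Python字典保存类别概率，类别名称仅为示例）：To make the initialization above easier to follow, a minimal illustrative sketch is given below (it assumes the category probabilities are kept in a Python dict; the category names are only examples):

```python
def init_category_probs(categories):
    """为枚举出的每个宠物类别赋予相同的初始类别概率，且总和为1。
    Give every enumerated pet category the same initial probability, summing to 1."""
    p = 1.0 / len(categories)
    return {c: p for c in categories}

# 城市中可能出现的宠物类别（示例） / example enumerated categories
probs = init_category_probs(["金毛", "萨摩耶", "哈士奇", "德牧", "马犬"])
# 每个类别的初始类别概率均为 0.2 / each initial category probability is 0.2
```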
S12,获取图像采集设备采集的宠物图像及采集信息,所述采集信息包括所述图像采集设备的地理位置、设备标识号及采集时间。S12: Acquire pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device.
本实施例中,可以根据相关政策规定或者实际的场景需求,预先设置多个高清数字图像采集设备,以采集宠物的图像。In this embodiment, a plurality of high-definition digital image acquisition devices may be preset to collect images of pets according to relevant policy regulations or actual scene requirements.
所述预先设置多个图像采集设备包括预先设置所述多个图像采集设备的位置及图像采集设备的高度。示例性的,假设公园禁止宠物进入,那么可以在公园的出入口或者开阔的地方安装图像采集设备。当确定了图像采集设备的安装位置,再确定图像采集设备的安装高度,使得图像采集设备采集的宠物图像无遮挡,便于提高宠物图像的识别精度。The presetting a plurality of image acquisition devices includes presetting the positions of the plurality of image acquisition devices and the height of the image acquisition devices. Exemplarily, assuming that pets are prohibited from entering the park, the image capture device can be installed at the entrance and exit of the park or in an open area. When the installation position of the image acquisition device is determined, the installation height of the image acquisition device is determined, so that the pet image collected by the image acquisition device is unobstructed, which is convenient for improving the recognition accuracy of the pet image.
本实施例中,还可以为每一个高清数字图像采集设备对应设置一个唯一的设备标识号,用于表示高清数字图像采集设备的身份。In this embodiment, it is also possible to set a unique device identification number corresponding to each high-definition digital image acquisition device, which is used to indicate the identity of the high-definition digital image acquisition device.
所述采集信息是指所述图像采集设备采集所述宠物图像时的信息,可以包括:图像采集设备的地理位置,图像采集设备的设备标识号,采集所述宠物图像时的时间(下文简称为采集时间)。所述地理位置可以用经纬度坐标表示,所述设备标识号可以用C+数字表示,所述采集时间可以用年-月-日-时-分-秒表示。The collection information refers to the information when the image collection device collects the pet image, and may include: the geographic location of the image collection device, the device identification number of the image collection device, and the time when the pet image was collected (hereinafter referred to as Acquisition time). The geographic location may be represented by latitude and longitude coordinates, the device identification number may be represented by C+digits, and the collection time may be represented by year-month-day-hour-minute-second.
S13,识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储。S13: Identify identification information of the pet in the pet image and store the identification information in association with the pet image and the collected information.
不同的宠物具有不同的标识信息，即标识信息与宠物具有一一对应的关系，例如，金毛对应标识信息a1，萨摩耶对应标识信息a2，哈士奇对应标识信息a3。Different pets have different identification information, that is, identification information corresponds to pets one to one; for example, the Golden Retriever corresponds to identification information a1, the Samoyed to a2, and the Husky to a3.
所述宠物图像中的宠物对应的标识信息被识别出之后,可以与宠物图像及图像采集设备的地理位置、图像采集设备的设备标识号、采集所述宠物图像时的时间进行关联存储于预设数据库中。After the identification information corresponding to the pet in the pet image is identified, it can be associated with the pet image and the geographic location of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image was collected and stored in a preset In the database.
示例性的,假设在某个时间T(time),位于某个地理位置L(location)的图像采集设备C(camera)拍摄到了一只哈士奇,通过上述步骤S11-S13比对出这只哈士奇的标识信息为a3,则可以组成一条记录(a3,T,L,C)进行关联存储。便于后续根据任意一个参数关联获取得到其他多个参数信息。例如,可以根据设备标识号这个参数,关联获取得到具有相同设备标识号的宠物图像、标识信息、图像采集设备的地理位置、采集所述宠物图像时的时间等多个参数。Exemplarily, suppose that at a certain time T (time), an image capture device C (camera) located at a certain geographic location L (location) has captured a husky, and the comparison of the husky by the above steps S11-S13 If the identification information is a3, a record (a3, T, L, C) can be formed for associative storage. It is convenient to subsequently obtain other multiple parameter information according to any one parameter association. For example, multiple parameters such as pet images with the same device identification number, identification information, geographic location of the image collection device, and time when the pet image was collected can be obtained in association according to the parameter of the device identification number.
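下面用一段示意代码说明这种关联存储与按任意单个参数关联查询的方式（存储结构与字段名均为假设，实际可存入预设数据库）：The sketch below illustrates this associative storage and querying by any single parameter (the storage structure and field names are assumptions; in practice the records would be stored in the preset database):

```python
records = []  # 以列表模拟预设数据库 / a list standing in for the preset database

def store_record(pet_id, capture_time, location, camera_id, image_path):
    """将标识信息与宠物图像及采集信息关联存储，形如记录 (a3, T, L, C)。
    Store the pet identifier with its image and collection info, like the record (a3, T, L, C)."""
    records.append({
        "pet_id": pet_id,        # 标识信息，如 a3 / identification info, e.g. a3
        "time": capture_time,    # 采集时间 / collection time
        "location": location,    # 图像采集设备的地理位置（经纬度） / geographic location
        "camera_id": camera_id,  # 设备标识号，如 "C001" / device identification number
        "image": image_path,
    })

def query_by_camera(camera_id):
    """根据设备标识号关联获取其余多个参数信息。
    Retrieve the other associated parameters by device identification number."""
    return [r for r in records if r["camera_id"] == camera_id]
```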
在一个可选的实施例中,所述识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储包括:In an optional embodiment, the identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information includes:
将所述宠物图像输入预先训练好的宠物标识识别模型中;Input the pet image into a pre-trained pet identification recognition model;
获取所述宠物标识识别模型的识别结果;Acquiring the recognition result of the pet identification recognition model;
根据所述识别结果确定所述宠物的标识信息。The identification information of the pet is determined according to the recognition result.
本实施例中,所述宠物标识识别模型是预先训练好的,其训练过程可以包括:预先获取多个宠物图像;将多个宠物图像及标识信息分为第一比例的训练集和第二比例的测试集,其中,第一比例远大于第二比例;将所述训练集输入预先设置的深度神经网络中进行有监督的学习和训练,得到宠物标识识别模型;将所述测试集输入所述宠物标识识别模型中进行测试,得到测试通过率;当所述测试通过率大于或者等于预设通过率阈值,结束所述宠物标识识别模型的训练,当所述测试通过率小于所述预设通过率阈值,则重新划分训练集和测试集,并基于新的训练集学习和训练宠物标识识别模型,基于新的测试集测试新训练得到的宠物标识识别模型的通过率。由于宠物标识识别模型并不是本申请的重点,因此关于训练宠物标识识别模型的具体过程,本文在此不再详细阐述。In this embodiment, the pet identification recognition model is pre-trained, and the training process may include: acquiring a plurality of pet images in advance; dividing the plurality of pet images and identification information into a training set of a first proportion and a second proportion The test set, wherein the first ratio is much larger than the second ratio; input the training set into a preset deep neural network for supervised learning and training, and obtain a pet identification recognition model; input the test set into the The pet identification recognition model is tested to obtain a test pass rate; when the test pass rate is greater than or equal to the preset pass rate threshold, the training of the pet identification recognition model is ended, and when the test pass rate is less than the preset pass rate Rate threshold, then re-divide the training set and test set, learn and train the pet identification recognition model based on the new training set, and test the pass rate of the newly trained pet identification identification model based on the new test set. Since the pet identification model is not the focus of this application, the specific process of training the pet identification model will not be elaborated here.
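按照上述训练流程，可以用如下示意代码表达"划分—训练—测试—不达标则重新划分"的循环（其中 build_network、fit、evaluate 为调用方提供的假设性函数，并非本申请规定的接口）：Following the training flow above, the loop of "split, train, test, and re-split if the pass rate is not reached" can be sketched as follows (build_network, fit, and evaluate are hypothetical callables supplied by the caller, not interfaces defined by this application):

```python
import random

def train_pet_id_model(samples, labels, build_network, fit, evaluate,
                       train_ratio=0.8, pass_threshold=0.95, max_rounds=10):
    """示意：按第一比例远大于第二比例划分训练/测试集，有监督训练后测试，
    通过率达到阈值则结束训练，否则重新划分并重训。
    Sketch: split into a large training set and a small test set, train with
    supervision, stop once the test pass rate reaches the threshold,
    otherwise re-split and retrain."""
    data = list(zip(samples, labels))
    model = None
    for _ in range(max_rounds):
        random.shuffle(data)
        split = int(len(data) * train_ratio)
        train_set, test_set = data[:split], data[split:]
        model = build_network()          # 预先设置的深度神经网络 / preset deep neural network
        fit(model, train_set)            # 有监督的学习和训练 / supervised learning
        if evaluate(model, test_set) >= pass_threshold:
            break                        # 测试通过率达标，结束训练 / pass rate reached
    return model
```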
在一个可选的实施例中,所述将所述宠物图像输入预先训练好的宠物标识识别模型中包括:In an optional embodiment, the input of the pet image into a pre-trained pet identification recognition model includes:
检测出所述宠物图像中的目标区域。The target area in the pet image is detected.
对所述宠物图像中的所述目标区域进行裁剪;Crop the target area in the pet image;
将裁剪出的所述目标区域作为输入图像输入预先训练好的宠物标识识别模型中。The cropped target region is used as the input image and fed into the pre-trained pet identification recognition model.
本实施例中，可以采用YOLO目标检测算法将所述宠物图像中宠物所在的区域用检测框框选出来，检测框框选的区域即为目标区域，由于目标区域的像素数量远小于整幅宠物图像的像素数量，且目标区域几乎只包含了宠物这一目标对象，而无其他非目标对象，因此将目标区域裁剪出来作为宠物标识识别模型的输入图像，不仅有助于提高宠物标识识别模型识别宠物标识信息的效率，而且目标区域中不存在非目标对象的干扰，还能提高宠物标识识别模型识别宠物标识信息的精度。In this embodiment, the YOLO target detection algorithm can be used to select the region where the pet is located in the pet image with a detection box; the region selected by the detection box is the target region. Since the target region contains far fewer pixels than the whole pet image and almost exclusively contains the pet itself with no other non-target objects, cropping the target region out as the input image of the pet identification recognition model not only improves the efficiency with which the model identifies the pet identification information, but also, because there is no interference from non-target objects, improves the accuracy of that identification.
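检测并裁剪目标区域这一步可用如下示意代码表示（detector 为假设的YOLO风格检测函数，返回检测框 (x, y, w, h)；此处仅用 OpenCV 读图并以数组切片裁剪作演示）：The detect-and-crop step can be sketched as below (detector is an assumed YOLO-style callable returning a bounding box (x, y, w, h); OpenCV is used here only to read the image and crop it by array slicing):

```python
import cv2

def crop_target_region(image_path, detector):
    """检测宠物图像中的目标区域并裁剪，作为宠物标识识别模型的输入图像。
    Detect the target region in the pet image and crop it as the input image
    for the pet identification recognition model."""
    img = cv2.imread(image_path)
    x, y, w, h = detector(img)      # 检测框框选的区域即为目标区域 / the box is the target region
    return img[y:y + h, x:x + w]    # 目标区域几乎只包含宠物，减少非目标干扰
```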
S14,判断任意两个宠物图像的采集信息是否相同。S14: Determine whether the collection information of any two pet images is the same.
本实施例中，可以从预设数据库中获取任意两个宠物图像，并基于两个宠物图像关联的标识信息和采集信息判断这两个宠物图像中的宠物是否为同一个类别，同时根据标识信息和采集信息修正初始化的类别概率，使得某个宠物属于某个类别的类别概率大，属于其他类别的类别概率小。后续可以基于修正后的类别概率分析不同类别的宠物的活动轨迹及活动区域。In this embodiment, any two pet images can be obtained from the preset database, and the identification information and collection information associated with the two pet images are used both to judge whether the pets in the two images belong to the same category and to correct the initialized category probabilities, so that the probability of a pet belonging to its true category becomes large while the probabilities of the other categories become small. The activity trajectories and activity areas of pets of different categories can then be analyzed on the basis of the corrected category probabilities.
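判断两张宠物图像的采集信息并选择对应修正模型的逻辑，可以示意如下（记录字段沿用前文示例；三个修正函数见后文各步骤之后的示意代码）：The logic of comparing the collection information of two pet images and choosing the corresponding correction model can be sketched as follows (the record fields follow the earlier example; the three correction functions are sketched after the corresponding steps below):

```python
def choose_correction_case(rec1, rec2):
    """根据地理位置、设备标识号及采集时间是否相同，返回应采用的修正模型编号。
    Return which correction model applies, based on whether the geographic
    location, device identification number, and collection time are the same."""
    same_place = (rec1["location"] == rec2["location"]
                  and rec1["camera_id"] == rec2["camera_id"])
    same_time = rec1["time"] == rec2["time"]
    if same_place and same_time:
        return 1   # 第一修正模型 / first correction model
    if same_place and not same_time:
        return 2   # 第二修正模型 / second correction model
    if same_time and not same_place:
        return 3   # 第三修正模型 / third correction model
    return 0       # 其余情况原文未规定 / other cases are not specified in the text
```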
S15,当所述地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行修正得到第一类别概率。S15: When the geographic location, device identification number, and collection time are all the same, use the first correction model to correct the category probability to obtain the first category probability.
本实施例中,所获取的任意两个宠物图像对应的采集信息相同,即地理位置、设备标识号和采集时间完全相同,表明这两个宠物图像是由同一个图像采集设备在同一时刻采集的。In this embodiment, the collection information corresponding to any two pet images acquired is the same, that is, the geographic location, device identification number, and acquisition time are exactly the same, indicating that the two pet images were acquired by the same image acquisition device at the same time. .
假设,图像采集设备用c表示,地理位置用l表示,种群用p表示,宠物标识用a表示,a属于种群p记为a∈p,a∈p的概率为ρSuppose that the image acquisition device is represented by c, the geographic location is represented by l, the population is represented by p, and the pet identification is represented by a. A belongs to the population p and is denoted as a ∈ p, and the probability of a ∈ p is ρ
某个摄像头c在某一时刻t采集到两个宠物i1和i2，那么有i1、i2分别属于某个种群p的关系式，以及各自对应的类别概率（公式以图片形式给出，见原文附图PCTCN2020111880-appb-000001至appb-000004）。A certain camera c captures two pets i1 and i2 at a certain moment t; the relations stating that i1 and i2 each belong to some population p, and their corresponding category probabilities, are given as formulas in image form (see original figures PCTCN2020111880-appb-000001 to appb-000004).
在一个可选的实施例中,所述采用第一修正模型对所述类别概率进行修正得到第一类别概率包括:In an optional embodiment, the using the first correction model to correct the category probability to obtain the first category probability includes:
采用修正公式（见原文附图PCTCN2020111880-appb-000005）对所述类别概率进行修正，再采用归一化公式（见原文附图appb-000006）对修正后的类别概率进行归一化，得到第一类别概率。其中，γ为修正因子系数，附图appb-000007与appb-000008所示变量分别为同一个图像采集设备在同一时刻采集到的两个宠物的初始类别概率，附图appb-000009所示变量为所述第一类别概率。The category probability is corrected with the correction formula (original figure PCTCN2020111880-appb-000005) and the corrected category probability is then normalized (original figure appb-000006) to obtain the first category probability, where γ is the correction factor coefficient, the variables shown in figures appb-000007 and appb-000008 are the initial category probabilities of the two pets captured by the same image acquisition device at the same moment, and the variable shown in figure appb-000009 is the first category probability.
上述实施例为基于单个图像采集设备同一时刻的类别概率修正算法,同时出现在一个场景内的宠物对相同种群因子加一个权值γ。The foregoing embodiment is based on the category probability correction algorithm of a single image acquisition device at the same time, and pets appearing in a scene at the same time add a weight γ to the same population factor.
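由于上述修正公式在原文中以图片形式给出，下面仅按"对相同种群因子加权值γ后归一化"的文字描述给出一种假设性的实现示意，并非原公式本身：Since the correction formulas appear only as images in the original text, the sketch below is merely one assumed reading of the textual description "add a weight γ to the shared population factor and then normalize", not the original formula itself:

```python
def first_correction(probs_i1, probs_i2, gamma=0.1):
    """同一设备同一时刻拍到的两个宠物：对相同种群（类别）因子加权γ后归一化，
    得到第一类别概率。此实现为假设性解读。
    Two pets captured by the same device at the same moment: add gamma times the
    other pet's probability to the shared category factor, then normalize to get
    the first category probability. This is an assumed interpretation."""
    corrected = {p: probs_i1[p] + gamma * probs_i2[p] for p in probs_i1}
    total = sum(corrected.values())
    return {p: v / total for p, v in corrected.items()}
```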
S16,当所述地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行修正得到第二类别概率。S16: When the geographic location and the device identification number are the same, but the collection time is different, the second correction model is used to correct the category probability to obtain the second category probability.
本实施例中,所获取的任意两个宠物图像对应的地理位置和设备标识号相同,采集时间不同,表明这两个宠物图像是由同一个图像采集设备在不同时刻采集的。In this embodiment, the geographic locations and device identification numbers corresponding to any two acquired pet images are the same, and the acquisition time is different, indicating that the two pet images were acquired by the same image acquisition device at different times.
某个摄像头c在不同时刻t1和t2采集到两个不同的宠物i1和i2，对应的关系式以图片形式给出（见原文附图PCTCN2020111880-appb-000010与appb-000011）。A certain camera c captures two different pets i1 and i2 at different moments t1 and t2; the corresponding relations are given as formulas in image form (see original figures PCTCN2020111880-appb-000010 and appb-000011).
在一个可选的实施例中,所述采用第二修正模型对所述类别概率进行修正得到第二类别概率如下所示:In an optional embodiment, the second correction model is used to correct the category probability to obtain the second category probability as follows:
采用修正公式（见原文附图PCTCN2020111880-appb-000012）对所述类别概率进行修正，再采用归一化公式（见原文附图appb-000013）对修正后的类别概率进行归一化，得到第二类别概率。其中，附图appb-000014所示变量为处罚因子，γ为修正因子系数，t为时间，附图appb-000015与appb-000016所示变量分别为同一个图像采集设备在不同时刻采集到的两个宠物的初始类别概率，附图appb-000017所示变量为所述第二类别概率。The category probability is corrected with the correction formula (original figure PCTCN2020111880-appb-000012) and then normalized (original figure appb-000013) to obtain the second category probability, where the variable shown in figure appb-000014 is the penalty factor, γ is the correction factor coefficient, t is the time, the variables shown in figures appb-000015 and appb-000016 are the initial category probabilities of the two pets captured by the same image acquisition device at different moments, and the variable shown in figure appb-000017 is the second category probability.
上述实施例是基于单个图像采集设备不同时刻的类别概率修正算法，短时间内先后出现在同一场景的宠物，根据时间间隔给一个处罚因子β_t，对相同种群因子加一个权值β_t*γ，即对修正因子系数γ乘以一个处罚因子β_t，β_t与时间间隔t相关。The above embodiment is a category probability correction algorithm for a single image acquisition device at different moments: for pets that appear in the same scene one after another within a short period, a penalty factor β_t is given according to the time interval, and a weight of β_t*γ is added to the shared population factor, i.e. the correction factor coefficient γ is scaled by the penalty factor β_t, where β_t depends on the time interval t.
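同样，下面按"按时间间隔给处罚因子β_t、对相同种群因子加权β_t*γ"的文字描述给出一种假设性示意；β_t随时间间隔变化的具体形式原文未给出，此处用指数衰减仅作演示：Likewise, the sketch below follows the textual description "give a penalty factor β_t according to the time interval and weight the shared population factor by β_t*γ"; the exact form of β_t is not given in the text, and the exponential decay used here is purely for illustration:

```python
import math

def second_correction(probs_i1, probs_i2, dt_seconds, gamma=0.1):
    """同一设备不同时刻拍到的两个宠物：处罚因子β_t随时间间隔增大而减小，
    对相同类别因子加权β_t*γ后归一化，得到第二类别概率（假设性实现）。
    Same device, different moments: beta_t shrinks as the interval grows, the
    shared category factor is weighted by beta_t * gamma, then normalized
    (assumed implementation)."""
    beta_t = math.exp(-dt_seconds / 600.0)   # 衰减尺度600秒，仅为假设 / assumed 10-minute scale
    corrected = {p: probs_i1[p] + beta_t * gamma * probs_i2[p] for p in probs_i1}
    total = sum(corrected.values())
    return {p: v / total for p, v in corrected.items()}
```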
S17,当所述采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行修正得到第三类别概率。S17: When the collection time is the same but the geographic location and the device identification number are different, the third correction model is used to correct the category probability to obtain the third category probability.
本实施例中,所获取的任意两个宠物图像对应的地理位置和设备标识号均不相同,但采集时间相同时,表明这两个宠物图像是由两个不同的图像采集设备在同一时刻采集的。In this embodiment, the geographic locations and device identification numbers corresponding to any two pet images acquired are not the same, but when the acquisition time is the same, it indicates that the two pet images were acquired by two different image acquisition devices at the same time. of.
摄像头c1和c2在同一时刻t分别采集到宠物i1、i2和i3、i4，于是有对应的关系式（以图片形式给出，见原文附图PCTCN2020111880-appb-000018与appb-000019）。Cameras c1 and c2 respectively capture pets i1, i2 and i3, i4 at the same moment t, giving the corresponding relations (provided as formulas in image form, see original figures PCTCN2020111880-appb-000018 and appb-000019).
在一个可选的实施例中,所述采用第三修正模型对所述类别概率进行修正得到第三类别概率如下:In an optional embodiment, the third category probability is corrected by using the third correction model to obtain the third category probability as follows:
采用修正公式（见原文附图PCTCN2020111880-appb-000020）对所述类别概率进行修正，再采用归一化公式（见原文附图appb-000021）对修正后的类别概率进行归一化，得到第三类别概率。其中，附图appb-000022所示变量为处罚因子，γ为修正因子系数（相关表达式见附图appb-000023），l为距离，附图appb-000024至appb-000027所示变量分别为不同图像采集设备在同一时刻采集到的各宠物的初始类别概率，附图appb-000028所示变量为所述第三类别概率。The category probability is corrected with the correction formula (original figure PCTCN2020111880-appb-000020) and then normalized (original figure appb-000021) to obtain the third category probability, where the variable shown in figure appb-000022 is the penalty factor, γ is the correction factor coefficient (the related expression is shown in figure appb-000023), l is the distance, the variables shown in figures appb-000024 to appb-000027 are the initial category probabilities of the pets captured by different image acquisition devices at the same moment, and the variable shown in figure appb-000028 is the third category probability.
上述实施例是基于多个图像采集设备同一时刻的类别概率修正算法，其中i1和i3经过匹配算法匹配为同一个宠物（但当两个摄像头距离较远时，i1和i3不可能是同一宠物），因此此时的修正因子β_l与两台设备之间的距离l相关。The above embodiment is a category probability correction algorithm for multiple image acquisition devices at the same moment, in which i1 and i3 are matched as the same pet by a matching algorithm (but when the two cameras are far apart, i1 and i3 cannot be the same pet); therefore the correction factor β_l here is related to the distance l between the two devices.
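下面按"修正因子β_l与两台设备间距离l相关、距离过远则视为不可能是同一宠物"的文字描述给出一种假设性示意；β_l的具体形式与距离阈值均为假设：Following the textual description "the correction factor β_l is related to the distance l between the two devices, and detections from cameras too far apart cannot be the same pet", a hypothetical sketch is given below; both the form of β_l and the distance threshold are assumptions:

```python
import math

def third_correction(probs_i1, probs_i3, distance_m, gamma=0.1, max_plausible_m=500.0):
    """不同设备同一时刻、经匹配算法匹配为同一宠物的两次检测：按设备间距离确定β_l
    并加权β_l*γ后归一化，得到第三类别概率；距离过远则不加权（假设性实现）。
    Two detections from different devices at the same moment, matched as the same
    pet: weight by beta_l * gamma according to the camera distance, then normalize;
    too large a distance means no weighting (assumed implementation)."""
    if distance_m > max_plausible_m:
        return dict(probs_i1)                    # 距离太远，不可能是同一宠物
    beta_l = math.exp(-distance_m / max_plausible_m)
    corrected = {p: probs_i1[p] + beta_l * gamma * probs_i3[p] for p in probs_i1}
    total = sum(corrected.values())
    return {p: v / total for p, v in corrected.items()}
```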
S18,基于修正后的类别概率确定所述宠物的活动轨迹。S18: Determine an activity track of the pet based on the corrected category probability.
本实施例中,根据采集信息对任意两个宠物的类别概率进行了修正之后,可以将修正后的类别概率、宠物图像、采集信息和标识信息关联存储,并基于关联存储的信息得到同一类别的宠物的活动轨迹,根据所述活动轨迹确定所述宠物的活动区域。In this embodiment, after the category probabilities of any two pets are corrected based on the collected information, the corrected category probabilities, pet images, collected information, and identification information can be stored in association, and based on the associated stored information, the category probabilities of the same category can be obtained. The activity trajectory of the pet, and the activity area of the pet is determined according to the activity trajectory.
在一个可选的实施例中,所述基于修正后的类别概率确定所述宠物的活动轨迹包括:In an optional embodiment, the determining the pet's activity track based on the corrected category probability includes:
获取每个宠物图像对应的所有修正后的类别概率;Obtain all the corrected category probabilities corresponding to each pet image;
从所述所有修正后的类别概率中筛选出最大的类别概率作为所述宠物图像的目标类别概率;Selecting the largest category probability from all the corrected category probabilities as the target category probability of the pet image;
获取具有相同目标类别概率的宠物图像对应的采集信息;Obtain collection information corresponding to pet images with the same target category probability;
根据所述采集信息确定所述宠物的活动轨迹。The activity track of the pet is determined according to the collected information.
示例性的，假如t1时刻a1对应金毛、萨摩耶、哈士奇、德牧、马犬的修正后的类别概率分别为0.9、0.1、0、0、0，t2时刻a1对应金毛、萨摩耶、哈士奇、德牧、马犬的修正后的类别概率分别为0.9、0、0.1、0、0，t3时刻a1对应金毛、萨摩耶、哈士奇、德牧、马犬的修正后的类别概率分别为0.8、0.1、0.1、0、0，则类别概率0.9作为a1的目标类别概率，表明a1属于金毛。此时将a1对应的所有宠物图像的采集信息提取出来，并进而根据提取出的采集信息确定a1的活动轨迹。具体的，根据采集信息中的图像采集设备的位置及机号、对应的采集时间确定出这只小狗在何时出现在了何地。Exemplarily, suppose the corrected category probabilities of a1 for Golden Retriever, Samoyed, Husky, German Shepherd and Malinois at time t1 are 0.9, 0.1, 0, 0, 0, at time t2 are 0.9, 0, 0.1, 0, 0, and at time t3 are 0.8, 0.1, 0.1, 0, 0; then the category probability 0.9 is taken as the target category probability of a1, indicating that a1 is a Golden Retriever. The collection information of all pet images corresponding to a1 is then extracted, and the activity trajectory of a1 is determined from it; specifically, the location and device number of the image acquisition device and the corresponding collection time in the collection information determine when and where this dog appeared.
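上述"筛选最大类别概率并按采集信息还原活动轨迹"的过程可用如下示意代码表示（记录字段沿用前文示例，并假设每条记录中以 probs 字段保存修正后的类别概率）：The process above of picking the largest corrected category probability and reconstructing the trajectory from the collection info can be sketched as follows (the record fields follow the earlier example, and each record is assumed to carry its corrected category probabilities in a probs field):

```python
def build_trajectory(records, pet_label):
    """取该标识信息的所有修正后类别概率中最大者作为目标类别概率，
    再将对应采集信息按采集时间排序，得到活动轨迹。
    Take the largest corrected category probability as the target category
    probability, then sort the associated collection info by capture time to
    obtain the activity trajectory."""
    hits = [r for r in records if r["pet_id"] == pet_label]
    if not hits:
        return None, []
    best_category, _ = max(
        (max(r["probs"].items(), key=lambda kv: kv[1]) for r in hits),
        key=lambda kv: kv[1])
    trajectory = sorted(
        ({"time": r["time"], "location": r["location"], "camera_id": r["camera_id"]}
         for r in hits),
        key=lambda x: x["time"])
    return best_category, trajectory
```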
还可以以地图的形式显示宠物的活动轨迹。You can also display the pet's trajectory in the form of a map.
需要说明的是，上述基于图像识别的城市宠物活动轨迹监测方法，不仅可以应用于寻找丢失的宠物，还可以应用于对流浪宠物的救助、禁止宠物进入特定地区的执法依据等。It should be noted that the above image-recognition-based urban pet activity track monitoring method can be applied not only to finding lost pets, but also to rescuing stray pets, providing a law-enforcement basis for prohibiting pets from entering specific areas, and so on.
综上所述,本申请所述的基于图像识别的城市宠物活动轨迹监测方法,可应用在智慧宠物的管理中,从而推动智慧城市的发展。本申请初始化每一个宠物类别的类别概率,获取图像采集设备发送的宠物图像及采集信息,所述采集信息包括所述图像采集设备的地理位置及设备标识号、采集时间,识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储,判断任意两个宠物图像的采集信息是否相同,当所述采集信息中的地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行更新得到第一类别概率,当所述采集信息中的地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行更新得到第二类别概率,当所述采集信息中的采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行更新得到第三类别概率,基于修正后的类别概率确定所述宠物的活动轨迹。本申请通过采集信息中的多个参数信息对初始化的类别概率进行修正,使得宠物图像中的宠物越来越接近真实的宠物类别,尤其是对于出现在不同图像采集设备中的宠物的类别概率进行了修正,最后基于修正后的类别概率关联宠物图像、标识信息和采集信息,并基于关联后的信息确定宠物的活动轨迹。整个过程无需识别出宠物的具体类别,能够避免传统算法提取宠物图像的特征向量不准确的问题。In summary, the image recognition-based urban pet activity track monitoring method described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities. This application initializes the category probability of each pet category, acquires pet images and collection information sent by an image collection device, the collection information includes the geographic location of the image collection device, the device identification number, and the collection time, and identifies the pet image The identification information of the pet and the associated storage of the identification information with the pet image and the collection information to determine whether the collection information of any two pet images are the same, when the geographic location, the device identification number and the collection information in the collection information When the collection time is the same, the first correction model is used to update the category probability to obtain the first category probability. When the geographic location and device identification number in the collection information are the same, but the collection time is different, the second correction model is used The category probability is updated to obtain the second category probability. When the collection time in the collection information is the same but the geographic location and the device identification number are different, the third correction model is used to update the category probability to obtain the second category probability. The three-category probabilities are used to determine the pet's activity trajectory based on the corrected category probabilities. This application corrects the initialized category probabilities through multiple parameter information in the collected information, so that the pets in the pet images are getting closer and closer to the real pet categories, especially for the category probabilities of pets appearing in different image collection devices. Finally, the pet image, identification information and collection information are correlated based on the corrected category probability, and the pet’s activity track is determined based on the correlated information. The entire process does not need to identify the specific category of the pet, which can avoid the problem of inaccurate extraction of the feature vector of the pet image by the traditional algorithm.
实施例二Example two
图2是本申请实施例二提供的基于图像识别的城市宠物活动轨迹监测装置的结构图。Fig. 2 is a structural diagram of a device for monitoring urban pet activity tracks based on image recognition provided in the second embodiment of the present application.
在一些实施例中，所述基于图像识别的城市宠物活动轨迹监测装置20可以包括多个由计算机可读指令段所组成的功能模块。所述基于图像识别的城市宠物活动轨迹监测装置20中的各个程序段的计算机可读指令可以存储于终端的存储器中，并由所述至少一个处理器所执行，以执行基于图像识别的城市宠物活动轨迹监测（详见图1的描述）。In some embodiments, the image-recognition-based urban pet activity track monitoring device 20 may include multiple functional modules composed of computer-readable instruction segments. The computer-readable instructions of each program segment in the device 20 can be stored in the memory of the terminal and executed by the at least one processor to perform image-recognition-based monitoring of urban pet activity tracks (see the description of FIG. 1 for details).
本实施例中,所述基于图像识别的城市宠物活动轨迹监测装置20根据其所执行的功能,可以被划分为多个功能模块。所述功能模块可以包括:概率初始模块201、信息获取模块202、标识识别模块203、信息判断模块204、第一修正模块205、第二修正模块206、第三修正模块207及轨迹确定模块208。本申请所称的模块是指一种能够被至少一个处理器所执行并且能够完成固定功能的一系列计算机可读指令段,其存储在存储器中。在本实施例中,关于各模块的功能将在后续的实施例中详述。In this embodiment, the image recognition-based urban pet activity track monitoring device 20 can be divided into multiple functional modules according to the functions it performs. The functional modules may include: a probability initial module 201, an information acquisition module 202, an identification recognition module 203, an information judgment module 204, a first correction module 205, a second correction module 206, a third correction module 207, and a trajectory determination module 208. The module referred to in this application refers to a series of computer-readable instruction segments that can be executed by at least one processor and can complete fixed functions, and are stored in a memory. In this embodiment, the functions of each module will be described in detail in subsequent embodiments.
概率初始模块201,用于初始化每一个宠物类别的类别概率。The probability initial module 201 is used to initialize the category probability of each pet category.
本实施例中,所述类别概率是指某个宠物属于某个类别的几率,先对类别概率进行初始化,对于所有的宠物类别的类别概率赋予相同的初始值,假设某个宠物属于每个类别的初始类别概率一样。In this embodiment, the category probability refers to the probability that a certain pet belongs to a certain category. The category probability is initialized first, and the category probabilities of all pet categories are assigned the same initial value, assuming that a certain pet belongs to each category The initial category probability is the same.
可以对城市中可能出现的宠物的种类进行枚举,然后基于枚举出来的种类初始化类别概率,使得每个类别概率相同且总和为1。It is possible to enumerate the types of pets that may appear in the city, and then initialize the category probabilities based on the enumerated categories, so that the probability of each category is the same and the total is 1.
示例性的，假设城市中可能出现的宠物为：金毛、萨摩耶、哈士奇、德牧、马犬等，则可以对应设置5个类别，每个类别的类别概率均为1/5。类别概率可以根据实际需求进行初始化或修改。Exemplarily, suppose the pets that may appear in the city are Golden Retriever, Samoyed, Husky, German Shepherd, Malinois, etc.; then 5 categories can be set correspondingly, and the category probability of each category is 1/5. The category probabilities can be initialized or modified according to actual needs.
初始化每个类别的类别概率之后,存储所述类别概率及每个类别的标识信息。After initializing the category probability of each category, the category probability and the identification information of each category are stored.
信息获取模块202,用于获取图像采集设备采集的宠物图像及采集信息,所述采集信息包括所述图像采集设备的地理位置、设备标识号及采集时间。The information acquisition module 202 is configured to acquire pet images and collection information collected by an image collection device, and the collection information includes the geographic location, device identification number, and collection time of the image collection device.
本实施例中,可以根据相关政策规定或者实际的场景需求,预先设置多个高清数字图像采集设备,以采集宠物的图像。In this embodiment, a plurality of high-definition digital image acquisition devices may be preset to collect images of pets according to relevant policy regulations or actual scene requirements.
所述预先设置多个图像采集设备包括预先设置所述多个图像采集设备的位置及图像采集设备的高度。示例性的,假设公园禁止宠物进入,那么可以在公园的出入口或者开阔的地方安装图像采集设备。当确定了图像采集设备的安装位置,再确定图像采集设备的安装高度,使得图像采集设备采集的宠物图像无遮挡,便于提高宠物图像的识别精度。The presetting a plurality of image acquisition devices includes presetting the positions of the plurality of image acquisition devices and the height of the image acquisition devices. Exemplarily, assuming that pets are prohibited from entering the park, the image capture device can be installed at the entrance and exit of the park or in an open area. When the installation position of the image acquisition device is determined, the installation height of the image acquisition device is determined, so that the pet image collected by the image acquisition device is unobstructed, which is convenient for improving the recognition accuracy of the pet image.
本实施例中,还可以为每一个高清数字图像采集设备对应设置一个唯一的设备标识号,用于表示高清数字图像采集设备的身份。In this embodiment, it is also possible to set a unique device identification number corresponding to each high-definition digital image acquisition device, which is used to indicate the identity of the high-definition digital image acquisition device.
所述采集信息是指所述图像采集设备采集所述宠物图像时的信息,可以包括:图像采集设备的地理位置,图像采集设备的设备标识号,采集所述宠物图像时的时间(下文简称为采集时间)。所述地理位置可以用经纬度坐标表示,所述设备标识号可以用C+数字表示,所述采集时间可以用年-月-日-时-分-秒表示。The collection information refers to the information when the image collection device collects the pet image, and may include: the geographic location of the image collection device, the device identification number of the image collection device, and the time when the pet image was collected (hereinafter referred to as Acquisition time). The geographic location may be represented by latitude and longitude coordinates, the device identification number may be represented by C+digits, and the collection time may be represented by year-month-day-hour-minute-second.
标识识别模块203,用于识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储。The identification recognition module 203 is configured to identify the identification information of the pet in the pet image and store the identification information in association with the pet image and the collected information.
不同的宠物具有不同的标识信息，即标识信息与宠物具有一一对应的关系，例如，金毛对应标识信息a1，萨摩耶对应标识信息a2，哈士奇对应标识信息a3。Different pets have different identification information, that is, identification information corresponds to pets one to one; for example, the Golden Retriever corresponds to identification information a1, the Samoyed to a2, and the Husky to a3.
所述宠物图像中的宠物对应的标识信息被识别出之后,可以与宠物图像及图像采集设备的地理位置、图像采集设备的设备标识号、采集所述宠物图像时的时间进行关联存储于预设数据库中。After the identification information corresponding to the pet in the pet image is identified, it can be associated with the pet image and the geographic location of the image acquisition device, the device identification number of the image acquisition device, and the time when the pet image was collected and stored in a preset In the database.
示例性的,假设在某个时间T(time),位于某个地理位置L(location)的图像采集设备C(camera)拍摄到了一只哈士奇,通过上述模块201-203比对出这只哈士奇的标识信息为a3,则可以组成一条记录(a3,T,L,C)进行关联存储。便于后续根据任意一个参数关联获取得到其他多个参数信息。例如,可以根据设备标识号这个参数,关联获取得到具有相同设备标识号的宠物图像、标识信息、图像采集设备的地理位置、采集所述宠物图像时的时间等多个参数。Exemplarily, suppose that at a certain time T (time), an image capture device C (camera) located at a certain geographic location L (location) has captured a husky, and the above-mentioned modules 201-203 compare the husky's If the identification information is a3, a record (a3, T, L, C) can be formed for associative storage. It is convenient to subsequently obtain other multiple parameter information according to any one parameter association. For example, multiple parameters such as pet images with the same device identification number, identification information, geographic location of the image collection device, and time when the pet image was collected can be obtained in association according to the parameter of the device identification number.
在一个可选的实施例中,所述标识识别模块203识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储包括:In an optional embodiment, the identification recognition module 203 identifying the identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information includes:
将所述宠物图像输入预先训练好的宠物标识识别模型中;Input the pet image into a pre-trained pet identification recognition model;
获取所述宠物标识识别模型的识别结果;Acquiring the recognition result of the pet identification recognition model;
根据所述识别结果确定所述宠物的标识信息。The identification information of the pet is determined according to the recognition result.
本实施例中,所述宠物标识识别模型是预先训练好的,其训练过程可以包括:预先获取多个宠物图像;将多个宠物图像及标识信息分为第一比例的训练集和第二比例的测试集,其中,第一比例远大于第二比例;将所述训练集输入预先设置的深度神经网络中进行有监督的学习和训练,得到宠物标识识别模型;将所述测试集输入所述宠物标识识别模型中进行测试,得到测试通过率;当所述测试通过率大于或者等于预设通过率阈值,结束所述宠物标识识别模型的训练,当所述测试通过率小于所述预设通过率阈值,则重新划分训练集和测试集,并基于新的训练集学习和训练宠物标识识别模型,基于新的测试集测试新训练得到的宠物标识识别模型的通过率。由于宠物标识识别模型并不是本申请的重点,因此关于训练宠物标识识别模型的具体过程,本文在此不再详细阐述。In this embodiment, the pet identification recognition model is pre-trained, and the training process may include: acquiring a plurality of pet images in advance; dividing the plurality of pet images and identification information into a training set of a first proportion and a second proportion The test set, wherein the first ratio is much larger than the second ratio; input the training set into a preset deep neural network for supervised learning and training, and obtain a pet identification recognition model; input the test set into the The pet identification recognition model is tested to obtain a test pass rate; when the test pass rate is greater than or equal to the preset pass rate threshold, the training of the pet identification recognition model is ended, and when the test pass rate is less than the preset pass rate Rate threshold, then re-divide the training set and test set, learn and train the pet identification recognition model based on the new training set, and test the pass rate of the newly trained pet identification identification model based on the new test set. Since the pet identification model is not the focus of this application, the specific process of training the pet identification model will not be elaborated here.
在一个可选的实施例中,所述将所述宠物图像输入预先训练好的宠物标识识别模型中包括:In an optional embodiment, the input of the pet image into a pre-trained pet identification recognition model includes:
检测出所述宠物图像中的目标区域。The target area in the pet image is detected.
对所述宠物图像中的所述目标区域进行裁剪;Crop the target area in the pet image;
将裁剪出的所述目标区域作为输入图像输入预先训练好的宠物标识识别模型中。The cropped target region is used as the input image and fed into the pre-trained pet identification recognition model.
本实施例中，可以采用YOLO目标检测算法将所述宠物图像中宠物所在的区域用检测框框选出来，检测框框选的区域即为目标区域，由于目标区域的像素数量远小于整幅宠物图像的像素数量，且目标区域几乎只包含了宠物这一目标对象，而无其他非目标对象，因此将目标区域裁剪出来作为宠物标识识别模型的输入图像，不仅有助于提高宠物标识识别模型识别宠物标识信息的效率，而且目标区域中不存在非目标对象的干扰，还能提高宠物标识识别模型识别宠物标识信息的精度。In this embodiment, the YOLO target detection algorithm can be used to select the region where the pet is located in the pet image with a detection box; the region selected by the detection box is the target region. Since the target region contains far fewer pixels than the whole pet image and almost exclusively contains the pet itself with no other non-target objects, cropping the target region out as the input image of the pet identification recognition model not only improves the efficiency with which the model identifies the pet identification information, but also, because there is no interference from non-target objects, improves the accuracy of that identification.
信息判断模块204,用于判断任意两个宠物图像的采集信息是否相同。The information judging module 204 is used to judge whether the collected information of any two pet images is the same.
本实施例中，可以从预设数据库中获取任意两个宠物图像，并基于两个宠物图像关联的标识信息和采集信息判断这两个宠物图像中的宠物是否为同一个类别，同时根据标识信息和采集信息修正初始化的类别概率，使得某个宠物属于某个类别的类别概率大，属于其他类别的类别概率小。后续可以基于修正后的类别概率分析不同类别的宠物的活动轨迹及活动区域。In this embodiment, any two pet images can be obtained from the preset database, and the identification information and collection information associated with the two pet images are used both to judge whether the pets in the two images belong to the same category and to correct the initialized category probabilities, so that the probability of a pet belonging to its true category becomes large while the probabilities of the other categories become small. The activity trajectories and activity areas of pets of different categories can then be analyzed on the basis of the corrected category probabilities.
第一修正模块205,用于当所述地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行修正得到第一类别概率。The first correction module 205 is configured to use the first correction model to correct the category probability to obtain the first category probability when the geographic location, device identification number, and collection time are all the same.
本实施例中,所获取的任意两个宠物图像对应的采集信息相同,即地理位置、设备标识号和采集时间完全相同,表明这两个宠物图像是由同一个图像采集设备在同一时刻采集的。In this embodiment, the collection information corresponding to any two pet images acquired is the same, that is, the geographic location, device identification number, and acquisition time are exactly the same, indicating that the two pet images were acquired by the same image acquisition device at the same time. .
假设,图像采集设备用c表示,地理位置用l表示,种群用p表示,宠物标识用a表示,a属于种群p记为a∈p,a∈p的概率为ρSuppose that the image acquisition device is represented by c, the geographic location is represented by l, the population is represented by p, and the pet identification is represented by a. A belongs to the population p and is denoted as a ∈ p, and the probability of a ∈ p is ρ
某个摄像头c在某一时刻t采集到两个宠物i1和i2，那么有i1、i2分别属于某个种群p的关系式，以及各自对应的类别概率（公式以图片形式给出，见原文附图PCTCN2020111880-appb-000029至appb-000032）。A certain camera c captures two pets i1 and i2 at a certain moment t; the relations stating that i1 and i2 each belong to some population p, and their corresponding category probabilities, are given as formulas in image form (see original figures PCTCN2020111880-appb-000029 to appb-000032).
在一个可选的实施例中,所述第一修正模块205采用第一修正模型对所述类别概率进行修正得到第一类别概率包括:In an optional embodiment, that the first correction module 205 uses the first correction model to correct the category probability to obtain the first category probability includes:
采用修正公式（见原文附图PCTCN2020111880-appb-000033）对所述类别概率进行修正，再采用归一化公式（见原文附图appb-000034）对修正后的类别概率进行归一化，得到第一类别概率。其中，γ为修正因子系数，附图appb-000035与appb-000036所示变量分别为同一个图像采集设备在同一时刻采集到的两个宠物的初始类别概率，附图appb-000037所示变量为所述第一类别概率。The category probability is corrected with the correction formula (original figure PCTCN2020111880-appb-000033) and the corrected category probability is then normalized (original figure appb-000034) to obtain the first category probability, where γ is the correction factor coefficient, the variables shown in figures appb-000035 and appb-000036 are the initial category probabilities of the two pets captured by the same image acquisition device at the same moment, and the variable shown in figure appb-000037 is the first category probability.
上述实施例为基于单个图像采集设备同一时刻的类别概率修正算法,同时出现在一个场景内的宠物对相同种群因子加一个权值γ。The foregoing embodiment is based on the category probability correction algorithm of a single image acquisition device at the same time, and pets appearing in a scene at the same time add a weight γ to the same population factor.
第二修正模块206,用于当所述地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行修正得到第二类别概率。The second correction module 206 is configured to use a second correction model to correct the category probability to obtain the second category probability when the geographic location and the device identification number are the same but the collection time is different.
本实施例中,所获取的任意两个宠物图像对应的地理位置和设备标识号相同,采集时间不同,表明这两个宠物图像是由同一个图像采集设备在不同时刻采集的。In this embodiment, the geographic locations and device identification numbers corresponding to any two acquired pet images are the same, and the acquisition time is different, indicating that the two pet images were acquired by the same image acquisition device at different times.
某个摄像头c在不同时刻t1和t2采集到两个不同的宠物i1和i2，对应的关系式以图片形式给出（见原文附图PCTCN2020111880-appb-000038与appb-000039）。A certain camera c captures two different pets i1 and i2 at different moments t1 and t2; the corresponding relations are given as formulas in image form (see original figures PCTCN2020111880-appb-000038 and appb-000039).
在一个可选的实施例中,所述第二修正模块206采用第二修正模型对所述类别概率进行修正得到第二类别概率如下所示:In an optional embodiment, the second correction module 206 uses a second correction model to correct the category probability to obtain the second category probability as follows:
采用修正公式（见原文附图PCTCN2020111880-appb-000040）对所述类别概率进行修正，再采用归一化公式（见原文附图appb-000041）对修正后的类别概率进行归一化，得到第二类别概率。其中，附图appb-000042所示变量为处罚因子，γ为修正因子系数，t为时间，附图appb-000043与appb-000044所示变量分别为同一个图像采集设备在不同时刻采集到的两个宠物的初始类别概率，附图appb-000045所示变量为所述第二类别概率。The category probability is corrected with the correction formula (original figure PCTCN2020111880-appb-000040) and then normalized (original figure appb-000041) to obtain the second category probability, where the variable shown in figure appb-000042 is the penalty factor, γ is the correction factor coefficient, t is the time, the variables shown in figures appb-000043 and appb-000044 are the initial category probabilities of the two pets captured by the same image acquisition device at different moments, and the variable shown in figure appb-000045 is the second category probability.
上述实施例是基于单个图像采集设备不同时刻的类别概率修正算法，短时间内先后出现在同一场景的宠物，根据时间间隔给一个处罚因子β_t，对相同种群因子加一个权值β_t*γ，即对修正因子系数γ乘以一个处罚因子β_t，β_t与时间间隔t相关。The above embodiment is a category probability correction algorithm for a single image acquisition device at different moments: for pets that appear in the same scene one after another within a short period, a penalty factor β_t is given according to the time interval, and a weight of β_t*γ is added to the shared population factor, i.e. the correction factor coefficient γ is scaled by the penalty factor β_t, where β_t depends on the time interval t.
第三修正模块207,用于当所述采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行修正得到第三类别概率。The third correction module 207 is configured to use the third correction model to correct the category probability to obtain the third category probability when the collection time is the same but the geographic location and the device identification number are different.
本实施例中,所获取的任意两个宠物图像对应的地理位置和设备标识号均不相同,但采集时间相同时,表明这两个宠物图像是由两个不同的图像采集设备在同一时刻采集的。In this embodiment, the geographic locations and device identification numbers corresponding to any two pet images acquired are not the same, but when the acquisition time is the same, it indicates that the two pet images were acquired by two different image acquisition devices at the same time. of.
摄像头c1和c2在同一时刻t分别采集到宠物i1、i2和i3、i4，于是有对应的关系式（以图片形式给出，见原文附图PCTCN2020111880-appb-000046与appb-000047）。Cameras c1 and c2 respectively capture pets i1, i2 and i3, i4 at the same moment t, giving the corresponding relations (provided as formulas in image form, see original figures PCTCN2020111880-appb-000046 and appb-000047).
在一个可选的实施例中,所述第三修正模块207采用第三修正模型对所述类别概率进行修正得到第三类别概率如下:In an optional embodiment, the third correction module 207 uses a third correction model to correct the category probability to obtain the third category probability as follows:
采用修正公式（见原文附图PCTCN2020111880-appb-000048）对所述类别概率进行修正，再采用归一化公式（见原文附图appb-000049）对修正后的类别概率进行归一化，得到第三类别概率。其中，附图appb-000050所示变量为处罚因子，γ为修正因子系数（相关表达式见附图appb-000051），l为距离，附图appb-000052至appb-000055所示变量分别为不同图像采集设备在同一时刻采集到的各宠物的初始类别概率，附图appb-000056所示变量为所述第三类别概率。The category probability is corrected with the correction formula (original figure PCTCN2020111880-appb-000048) and then normalized (original figure appb-000049) to obtain the third category probability, where the variable shown in figure appb-000050 is the penalty factor, γ is the correction factor coefficient (the related expression is shown in figure appb-000051), l is the distance, the variables shown in figures appb-000052 to appb-000055 are the initial category probabilities of the pets captured by different image acquisition devices at the same moment, and the variable shown in figure appb-000056 is the third category probability.
上述实施例是基于多个图像采集设备同一时刻的类别概率修正算法，其中i1和i3经过匹配算法匹配为同一个宠物（但当两个摄像头距离较远时，i1和i3不可能是同一宠物），因此此时的修正因子β_l与两台设备之间的距离l相关。The above embodiment is a category probability correction algorithm for multiple image acquisition devices at the same moment, in which i1 and i3 are matched as the same pet by a matching algorithm (but when the two cameras are far apart, i1 and i3 cannot be the same pet); therefore the correction factor β_l here is related to the distance l between the two devices.
轨迹确定模块208,用于基于修正后的类别概率确定所述宠物的活动轨迹。The trajectory determination module 208 is configured to determine the activity trajectory of the pet based on the corrected category probability.
本实施例中,根据采集信息对任意两个宠物的类别概率进行了修正之后,可以将修正后的类别概率、宠物图像、采集信息和标识信息关联存储,并基于关联存储的信息得到同一类别的宠物的活动轨迹,根据所述活动轨迹确定所述宠物的活动区域。In this embodiment, after the category probabilities of any two pets are corrected based on the collected information, the corrected category probabilities, pet images, collected information, and identification information can be stored in association, and based on the associated stored information, the category probabilities of the same category can be obtained. The activity trajectory of the pet, and the activity area of the pet is determined according to the activity trajectory.
在一个可选的实施例中,所述轨迹确定模块208基于修正后的类别概率确定所述宠物的活动轨迹包括:In an optional embodiment, the trajectory determination module 208 determining the activity trajectory of the pet based on the corrected category probability includes:
获取每个宠物图像对应的所有修正后的类别概率;Obtain all the corrected category probabilities corresponding to each pet image;
从所述所有修正后的类别概率中筛选出最大的类别概率作为所述宠物图像的目标类别概率;Selecting the largest category probability from all the corrected category probabilities as the target category probability of the pet image;
获取具有相同目标类别概率的宠物图像对应的采集信息;Obtain collection information corresponding to pet images with the same target category probability;
根据所述采集信息确定所述宠物的活动轨迹。The activity track of the pet is determined according to the collected information.
示例性的，假如t1时刻a1对应金毛、萨摩耶、哈士奇、德牧、马犬的修正后的类别概率分别为0.9、0.1、0、0、0，t2时刻a1对应金毛、萨摩耶、哈士奇、德牧、马犬的修正后的类别概率分别为0.9、0、0.1、0、0，t3时刻a1对应金毛、萨摩耶、哈士奇、德牧、马犬的修正后的类别概率分别为0.8、0.1、0.1、0、0，则类别概率0.9作为a1的目标类别概率，表明a1属于金毛。此时将a1对应的所有宠物图像的采集信息提取出来，并进而根据提取出的采集信息确定a1的活动轨迹。具体的，根据采集信息中的图像采集设备的位置及机号、对应的采集时间确定出这只小狗在何时出现在了何地。Exemplarily, suppose that at time t1 the corrected category probabilities of a1 for Golden Retriever, Samoyed, Husky, German Shepherd, and Malinois are 0.9, 0.1, 0, 0, 0, at time t2 they are 0.9, 0, 0.1, 0, 0, and at time t3 they are 0.8, 0.1, 0.1, 0, 0. The category probability 0.9 is then taken as the target category probability of a1, indicating that a1 is a Golden Retriever. At this point, the collection information of all pet images corresponding to a1 is extracted, and the activity trajectory of a1 is determined according to the extracted collection information. Specifically, when and where this dog appeared is determined from the location and device number of the image acquisition device in the collection information and the corresponding collection time.
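把上述筛选与轨迹组装写成代码，大致如下。Written out as code, the selection of the target category probability and the assembly of the trajectory look roughly as follows (an illustrative sketch only, reusing the PetObservation record from the earlier sketch; the majority-vote step is an assumption about how observations of the same pet are grouped):

```python
def build_trajectory(observations):
    """For each observation, take the largest corrected category probability as its
    target category; keep the observations whose target category matches the most
    common one, and order them by collection time to form the activity trajectory."""
    targets = [max(range(len(o.probabilities)), key=lambda k: o.probabilities[k])
               for o in observations]
    majority = max(set(targets), key=targets.count)        # dominant target category
    matched = [o for o, t in zip(observations, targets) if t == majority]
    # trajectory entries: (collection time, geographic location, device identification number)
    return sorted((o.capture_time, o.location, o.device_id) for o in matched)
```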
还可以以地图的形式显示宠物的活动轨迹。The pet's activity trajectory may also be displayed in the form of a map.
需要说明的是，上述基于图像识别的城市宠物活动轨迹监测方法，不仅可以应用于寻找丢失的宠物，还可以应用于对流浪宠物的救助、以及作为禁止宠物进入特定地区的执法依据等。It should be noted that the above urban pet activity trajectory monitoring method based on image recognition can be applied not only to finding lost pets, but also to the rescue of stray pets and as a law-enforcement basis for prohibiting pets from entering specific areas.
综上所述，本申请所述的基于图像识别的城市宠物活动轨迹监测装置，可应用在智慧宠物的管理中，从而推动智慧城市的发展。本申请初始化每一个宠物类别的类别概率，获取图像采集设备发送的宠物图像及采集信息，所述采集信息包括所述图像采集设备的地理位置及设备标识号、采集时间，识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储，判断任意两个宠物图像的采集信息是否相同，当所述采集信息中的地理位置、设备标识号及采集时间均相同时，采用第一修正模型对所述类别概率进行更新得到第一类别概率，当所述采集信息中的地理位置及设备标识号相同，但采集时间不同时，采用第二修正模型对所述类别概率进行更新得到第二类别概率，当所述采集信息中的采集时间相同，但地理位置及设备标识号均不相同时，采用第三修正模型对所述类别概率进行更新得到第三类别概率，基于修正后的类别概率确定所述宠物的活动轨迹。本申请通过采集信息中的多个参数信息对初始化的类别概率进行修正，使得宠物图像中的宠物越来越接近真实的宠物类别，尤其是对于出现在不同图像采集设备中的宠物的类别概率进行了修正，最后基于修正后的类别概率关联宠物图像、标识信息和采集信息，并基于关联后的信息确定宠物的活动轨迹。整个过程无需识别出宠物的具体类别，能够避免传统算法提取宠物图像的特征向量不准确的问题。In summary, the urban pet activity trajectory monitoring device based on image recognition described in this application can be applied to the management of smart pets, thereby promoting the development of smart cities. This application initializes the category probability of each pet category; acquires pet images and collection information sent by an image acquisition device, the collection information including the geographic location, device identification number, and collection time of the image acquisition device; identifies the identification information of the pet in the pet image and stores the identification information in association with the pet image and the collection information; determines whether the collection information of any two pet images is the same; when the geographic location, device identification number, and collection time in the collection information are all the same, updates the category probability with the first correction model to obtain the first category probability; when the geographic location and device identification number in the collection information are the same but the collection times differ, updates the category probability with the second correction model to obtain the second category probability; when the collection times in the collection information are the same but the geographic locations and device identification numbers differ, updates the category probability with the third correction model to obtain the third category probability; and determines the activity trajectory of the pet based on the corrected category probabilities. This application corrects the initialized category probabilities through multiple parameters in the collection information, so that the category assigned to the pet in the pet image gets closer and closer to the real pet category; in particular, the category probabilities of pets appearing in different image acquisition devices are corrected. Finally, the pet images, identification information, and collection information are associated based on the corrected category probabilities, and the pet's activity trajectory is determined based on the associated information. The whole process does not need to identify the specific category of the pet, which avoids the problem that traditional algorithms extract inaccurate feature vectors from pet images.
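整个判断与修正流程可以概括为一个简单的分发逻辑。The overall comparison-and-correction flow can be summarized as a simple dispatch over the collection information; the sketch below is illustrative only, the dictionary keys are assumptions, and the three correction models themselves stand in for the formulas of this application:

```python
def choose_correction(info_a, info_b):
    """Compare the collection information of two pet images (geographic location,
    device identification number, collection time) and report which correction
    model of this application would apply to the pair."""
    same_place = info_a["location"] == info_b["location"]
    same_device = info_a["device_id"] == info_b["device_id"]
    same_time = info_a["capture_time"] == info_b["capture_time"]

    if same_place and same_device and same_time:
        return "first correction model"    # same device, same moment
    if same_place and same_device and not same_time:
        return "second correction model"   # same device, different moments
    if same_time and not same_place and not same_device:
        return "third correction model"    # different devices, same moment
    return None                            # no correction is applied to this pair
```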
实施例三Example three
参阅图3所示,为本申请实施例三提供的终端的结构示意图。在本申请较佳实施例中,所述终端3包括存储器31、至少一个处理器32、至少一条通信总线33及收发器34。Refer to FIG. 3, which is a schematic structural diagram of a terminal provided in Embodiment 3 of this application. In a preferred embodiment of the present application, the terminal 3 includes a memory 31, at least one processor 32, at least one communication bus 33, and a transceiver 34.
本领域技术人员应该了解，图3示出的终端的结构并不构成本申请实施例的限定，既可以是总线型结构，也可以是星形结构，所述终端3还可以包括比图示更多或更少的其他硬件或者软件，或者不同的部件布置。Those skilled in the art should understand that the structure of the terminal shown in FIG. 3 does not constitute a limitation of the embodiments of the present application; it may be a bus-type structure or a star structure, and the terminal 3 may also include more or less hardware or software than shown, or a different arrangement of components.
在一些实施例中，所述终端3是一种能够按照事先设定或存储的指令，自动进行数值计算和/或信息处理的终端，其硬件包括但不限于微处理器、专用集成电路、可编程门阵列、数字处理器及嵌入式设备等。所述终端3还可包括客户设备，所述客户设备包括但不限于任何一种可与客户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互的电子产品，例如，个人计算机、平板电脑、智能手机、数码相机等。In some embodiments, the terminal 3 is a terminal capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits, programmable gate arrays, digital processors, embedded devices, and the like. The terminal 3 may also include a client device, which includes, but is not limited to, any electronic product that can interact with a client through a keyboard, a mouse, a remote control, a touch panel, a voice control device, or the like, for example, a personal computer, a tablet computer, a smart phone, or a digital camera.
需要说明的是,所述终端3仅为举例,其他现有的或今后可能出现的电子产品如可适应于本申请,也应包含在本申请的保护范围以内,并以引用方式包含于此。It should be noted that the terminal 3 is only an example. If other existing or future electronic products can be adapted to this application, they should also be included in the protection scope of this application and included here by reference.
在一些实施例中，所述存储器31用于存储计算机可读指令和各种数据，例如安装在所述终端3中的装置，并在终端3的运行过程中实现高速、自动地完成程序或数据的存取。所述存储器31包括易失性和非易失性存储器，例如随机存取存储器（Random Access Memory，RAM）、只读存储器（Read-Only Memory，ROM）、可编程只读存储器（Programmable Read-Only Memory，PROM）、可擦除可编程只读存储器（Erasable Programmable Read-Only Memory，EPROM）、一次可编程只读存储器（One-time Programmable Read-Only Memory，OTPROM）、电子擦除式可复写只读存储器（Electrically-Erasable Programmable Read-Only Memory，EEPROM）、只读光盘（Compact Disc Read-Only Memory，CD-ROM）或其他光盘存储器、磁盘存储器、磁带存储器、或者能够用于携带或存储数据的计算机可读的存储介质。所述计算机可读存储介质可以是非易失性，也可以是易失性的。In some embodiments, the memory 31 is configured to store computer-readable instructions and various data, such as the apparatus installed in the terminal 3, and to provide high-speed, automatic access to programs or data during the operation of the terminal 3. The memory 31 includes volatile and non-volatile memory, such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable storage medium that can be used to carry or store data. The computer-readable storage medium may be non-volatile or volatile.
在一些实施例中，所述至少一个处理器32可以由集成电路组成，例如可以由单个封装的集成电路所组成，也可以是由多个相同功能或不同功能封装的集成电路所组成，包括一个或者多个中央处理器（Central Processing unit，CPU）、微处理器、数字处理芯片、图形处理器及各种控制芯片的组合等。所述至少一个处理器32是所述终端3的控制核心（Control Unit），利用各种接口和线路连接整个终端3的各个部件，通过运行或执行存储在所述存储器31内的程序或者模块，以及调用存储在所述存储器31内的数据，以执行终端3的各种功能和处理数据。In some embodiments, the at least one processor 32 may be composed of integrated circuits, for example, a single packaged integrated circuit, or multiple packaged integrated circuits with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The at least one processor 32 is the control unit of the terminal 3; it connects the components of the entire terminal 3 through various interfaces and lines, and executes the various functions of the terminal 3 and processes data by running or executing the programs or modules stored in the memory 31 and calling the data stored in the memory 31.
在一些实施例中,所述至少一条通信总线33被设置为实现所述存储器31以及所述至少一个处理器32等之间的连接通信。In some embodiments, the at least one communication bus 33 is configured to implement connection and communication between the memory 31 and the at least one processor 32 and the like.
尽管未示出，所述终端3还可以包括给各个部件供电的电源（比如电池），优选的，电源可以通过电源管理装置与所述至少一个处理器32逻辑相连，从而通过电源管理装置实现管理充电、放电、以及功耗管理等功能。电源还可以包括一个或一个以上的直流或交流电源、再充电装置、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。所述终端3还可以包括多种传感器、蓝牙模块、Wi-Fi模块等，在此不再赘述。Although not shown, the terminal 3 may also include a power supply (such as a battery) for supplying power to the components. Preferably, the power supply may be logically connected to the at least one processor 32 through a power management device, so that functions such as charging, discharging, and power-consumption management are implemented through the power management device. The power supply may also include any components such as one or more DC or AC power supplies, recharging devices, power-failure detection circuits, power converters or inverters, and power status indicators. The terminal 3 may also include various sensors, a Bluetooth module, a Wi-Fi module, and the like, which will not be repeated here.
应该了解，所述实施例仅为说明之用，在专利申请范围上并不受此结构的限制。It should be understood that the described embodiment is for illustration only, and the scope of the patent application is not limited by this structure.
上述以软件功能模块的形式实现的集成的单元，可以存储在一个计算机可读取存储介质中。上述软件功能模块存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机，终端，或者网络设备等）或处理器（processor）执行本申请各个实施例所述方法的部分。The integrated unit implemented in the form of a software functional module as described above may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a terminal, a network device, or the like) or a processor to execute part of the methods described in the embodiments of the present application.
在进一步的实施例中,结合图2,所述至少一个处理器32可执行所述终端3的操作装置以及安装的各类应用程序、计算机可读指令等,例如,上述的各个模块。In a further embodiment, with reference to FIG. 2, the at least one processor 32 can execute the operating device of the terminal 3 and various installed applications, computer-readable instructions, etc., such as the above-mentioned modules.
所述存储器31中存储有计算机可读指令,且所述至少一个处理器32可调用所述存储器31中存储的计算机可读指令以执行相关的功能。例如,图2中所述的各个模块是存储在所述存储器31中的计算机可读指令,并由所述至少一个处理器32所执行,从而实现所述各个模块的功能。The memory 31 stores computer-readable instructions, and the at least one processor 32 can call the computer-readable instructions stored in the memory 31 to perform related functions. For example, the various modules described in FIG. 2 are computer-readable instructions stored in the memory 31 and executed by the at least one processor 32, so as to realize the functions of the various modules.
在本申请的一个实施例中,所述存储器31存储多个指令,所述多个指令被所述至少一个处理器32所执行以实现本申请所述的方法中的全部或者部分步骤。In an embodiment of the present application, the memory 31 stores multiple instructions, and the multiple instructions are executed by the at least one processor 32 to implement all or part of the steps in the method described in the present application.
具体地，所述至少一个处理器32对上述指令的具体实现方法可参考图1对应实施例中相关步骤的描述，在此不赘述。Specifically, for the specific implementation of the foregoing instructions by the at least one processor 32, reference may be made to the description of the relevant steps in the embodiment corresponding to FIG. 1, which will not be repeated here.
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。In the several embodiments provided in this application, it should be understood that the disclosed device and method can be implemented in other ways. For example, the device embodiments described above are only illustrative. For example, the division of the modules is only a logical function division, and there may be other division methods in actual implementation.
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
另外,在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。In addition, the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.
对于本领域技术人员而言，显然本申请不限于上述示范性实施例的细节，而且在不背离本申请的精神或基本特征的情况下，能够以其他的具体形式实现本申请。因此，无论从哪一点来看，均应将实施例看作是示范性的，而且是非限制性的，本申请的范围由所附权利要求而不是上述说明限定，因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附图标记视为限制所涉及的权利要求。此外，显然"包括"一词不排除其他单元或步骤，单数不排除复数。装置权利要求中陈述的多个单元或装置也可以由一个单元或装置通过软件或者硬件来实现。第一，第二等词语用来表示名称，而并不表示任何特定的顺序。For those skilled in the art, it is obvious that the present application is not limited to the details of the foregoing exemplary embodiments, and the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the application. Therefore, from whichever point of view, the embodiments should be regarded as exemplary and non-limiting; the scope of this application is defined by the appended claims rather than the above description, and all changes falling within the meaning and scope of equivalents of the claims are therefore intended to be embraced by this application. Any reference sign in the claims should not be construed as limiting the claim concerned. In addition, it is obvious that the word "including" does not exclude other elements or steps, and the singular does not exclude the plural. Multiple units or devices recited in the device claims may also be implemented by one unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
最后应说明的是，以上实施例仅用以说明本申请的技术方案而非限制，尽管参照较佳实施例对本申请进行了详细说明，本领域的普通技术人员应当理解，可以对本申请的技术方案进行修改或等同替换，而不脱离本申请技术方案的精神和范围。Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present application.

Claims (20)

  1. 一种基于图像识别的城市宠物活动轨迹监测方法,其中,所述方法包括:An image recognition-based urban pet activity track monitoring method, wherein the method includes:
    初始化每一个宠物类别的类别概率;Initialize the category probability of each pet category;
    获取图像采集设备采集的宠物图像及采集信息,所述采集信息包括所述图像采集设备的地理位置、设备标识号及采集时间;Acquiring pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
    识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储;Identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information;
    判断任意两个宠物图像的采集信息是否相同;Determine whether the collected information of any two pet images is the same;
    当所述地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行修正得到第一类别概率;When the geographic location, device identification number, and collection time are all the same, use the first correction model to correct the category probability to obtain the first category probability;
    当所述地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行修正得到第二类别概率;When the geographic location and the device identification number are the same, but the collection time is different, the second correction model is used to correct the category probability to obtain the second category probability;
    当所述采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行修正得到第三类别概率;When the collection time is the same, but the geographic location and the device identification number are not the same, the third correction model is used to correct the category probability to obtain the third category probability;
    基于修正后的类别概率确定所述宠物的活动轨迹。The activity trajectory of the pet is determined based on the corrected category probability.
  2. 根据权利要求1所述的基于图像识别的城市宠物活动轨迹监测方法,其中,所述采用第一修正模型对所述类别概率进行修正得到第一类别概率包括:The method for monitoring the activity trajectory of urban pets based on image recognition according to claim 1, wherein said adopting a first correction model to correct said category probability to obtain a first category probability comprises:
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100001
    采用如下公式对修正后的类别概率进行归一化后得到第一类别概率；Use the following formula to normalize the corrected category probability to obtain the first category probability;
    Figure PCTCN2020111880-appb-100002
    其中，γ为修正因子系数，Figure PCTCN2020111880-appb-100003 与 Figure PCTCN2020111880-appb-100004 分别为同一个图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100005 为所述第一类别概率。Here, γ is the correction factor coefficient, Figure PCTCN2020111880-appb-100003 and Figure PCTCN2020111880-appb-100004 are respectively the initial category probabilities of two pets captured by the same image acquisition device at the same moment, and Figure PCTCN2020111880-appb-100005 is the first category probability.
  3. 根据权利要求1所述的基于图像识别的城市宠物活动轨迹监测方法,其中,所述采用第二修正模型对所述类别概率进行修正得到第二类别概率包括:The method for monitoring the activity trajectory of urban pets based on image recognition according to claim 1, wherein said adopting a second correction model to correct said category probability to obtain a second category probability comprises:
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100006
    采用如下公式对修正后的类别概率进行归一化后得到第二类别概率；Use the following formula to normalize the corrected category probability to obtain the second category probability;
    Figure PCTCN2020111880-appb-100007
    其中，Figure PCTCN2020111880-appb-100008 为处罚因子，γ为修正因子系数，t为时间，Figure PCTCN2020111880-appb-100009 与 Figure PCTCN2020111880-appb-100010 分别为同一个图像采集设备在不同时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100011 为所述第二类别概率。Here, Figure PCTCN2020111880-appb-100008 is the penalty factor, γ is the correction factor coefficient, t is the time, Figure PCTCN2020111880-appb-100009 and Figure PCTCN2020111880-appb-100010 are respectively the initial category probabilities of two pets captured by the same image acquisition device at different moments, and Figure PCTCN2020111880-appb-100011 is the second category probability.
  4. 根据权利要求1所述的基于图像识别的城市宠物活动轨迹监测方法,其中,所述采用第三修正模型对所述类别概率进行修正得到第三类别概率包括:The method for monitoring the activity trajectory of urban pets based on image recognition according to claim 1, wherein said adopting a third correction model to correct said category probability to obtain a third category probability comprises:
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100012
    采用如下公式对修正后的类别概率进行归一化后得到第三类别概率；Use the following formula to normalize the corrected category probability to obtain the third category probability;
    Figure PCTCN2020111880-appb-100013
    其中，Figure PCTCN2020111880-appb-100014 为处罚因子，γ为修正因子系数，Figure PCTCN2020111880-appb-100015，l为距离，Figure PCTCN2020111880-appb-100016 与 Figure PCTCN2020111880-appb-100017 分别为不同图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100018 与 Figure PCTCN2020111880-appb-100019 分别为不同图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100020 为所述第三类别概率。Here, Figure PCTCN2020111880-appb-100014 is the penalty factor, γ is the correction factor coefficient, Figure PCTCN2020111880-appb-100015, l is the distance, Figure PCTCN2020111880-appb-100016 and Figure PCTCN2020111880-appb-100017 are respectively the initial category probabilities of two pets captured by different image acquisition devices at the same moment, Figure PCTCN2020111880-appb-100018 and Figure PCTCN2020111880-appb-100019 are respectively the initial category probabilities of two pets captured by different image acquisition devices at the same moment, and Figure PCTCN2020111880-appb-100020 is the third category probability.
  5. 根据权利要求1至4中任意一项所述的基于图像识别的城市宠物活动轨迹监测方法，其中，所述识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储包括：The method for monitoring the activity trajectory of urban pets based on image recognition according to any one of claims 1 to 4, wherein the identifying of the identification information of the pet in the pet image and the storing of the identification information in association with the pet image and the collected information comprises:
    将所述宠物图像输入预先训练好的宠物标识识别模型中;Input the pet image into a pre-trained pet identification recognition model;
    获取所述宠物标识识别模型的识别结果;Acquiring the recognition result of the pet identification recognition model;
    根据所述识别结果确定所述宠物的标识信息。The identification information of the pet is determined according to the recognition result.
  6. 根据权利要求1至4中任意一项所述的基于图像识别的城市宠物活动轨迹监测方法,其中,所述将所述宠物图像输入预先训练好的宠物标识识别模型中包括:The method for monitoring urban pet activity tracks based on image recognition according to any one of claims 1 to 4, wherein said inputting said pet image into a pre-trained pet identification recognition model comprises:
    检测出所述宠物图像中的目标区域;Detecting the target area in the pet image;
    对所述宠物图像中的所述目标区域进行裁剪;Crop the target area in the pet image;
    将裁剪出的所述目标区域作为输入图像输入预先训练好的姿态识别模型中。The cropped target area is input, as the input image, into a pre-trained posture recognition model.
  7. 根据权利要求1至4中任意一项所述的基于图像识别的城市宠物活动轨迹监测方法,其中,所述基于修正后的类别概率确定所述宠物的活动轨迹包括:The method for monitoring the activity trajectory of urban pets based on image recognition according to any one of claims 1 to 4, wherein the determining the activity trajectory of the pet based on the corrected category probability comprises:
    获取每个宠物图像对应的所有修正后的类别概率;Obtain all the corrected category probabilities corresponding to each pet image;
    从所述所有修正后的类别概率中筛选出最大的类别概率作为所述宠物图像的目标类别概率;Selecting the largest category probability from all the corrected category probabilities as the target category probability of the pet image;
    获取具有相同目标类别概率的宠物图像对应的采集信息;Obtain collection information corresponding to pet images with the same target category probability;
    根据所述采集信息确定所述宠物的活动轨迹。The activity track of the pet is determined according to the collected information.
  8. 一种基于图像识别的城市宠物活动轨迹监测装置,其中,所述装置包括:An image recognition-based urban pet activity track monitoring device, wherein the device includes:
    概率初始模块,用于初始化每一个宠物类别的类别概率;The probability initial module is used to initialize the category probability of each pet category;
    信息获取模块,用于获取图像采集设备采集的宠物图像及采集信息,所述采集信息包括所述图像采集设备的地理位置、设备标识号及采集时间;An information acquisition module for acquiring pet images and collection information collected by an image collection device, the collection information including the geographic location, device identification number, and collection time of the image collection device;
    标识识别模块,用于识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储;An identification recognition module for identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information;
    信息判断模块,用于判断任意两个宠物图像的采集信息是否相同;The information judgment module is used to judge whether the collected information of any two pet images is the same;
    第一修正模块,用于当所述地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行修正得到第一类别概率;The first correction module is configured to use the first correction model to correct the category probability to obtain the first category probability when the geographic location, device identification number, and collection time are all the same;
    第二修正模块,用于当所述地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行修正得到第二类别概率;The second correction module is configured to use a second correction model to correct the category probability to obtain the second category probability when the geographic location and the device identification number are the same but the collection time is different;
    第三修正模块,用于当所述采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行修正得到第三类别概率;The third correction module is configured to use the third correction model to correct the category probability to obtain the third category probability when the collection time is the same but the geographic location and the device identification number are different;
    轨迹确定模块,用于基于修正后的类别概率确定所述宠物的活动轨迹。The trajectory determination module is used to determine the activity trajectory of the pet based on the corrected category probability.
  9. 一种终端,其中,所述终端包括处理器,所述处理器用于执行存储器中存储的计算机可读指令以实现以下步骤:A terminal, wherein the terminal includes a processor configured to execute computer-readable instructions stored in a memory to implement the following steps:
    初始化每一个宠物类别的类别概率;Initialize the category probability of each pet category;
    获取图像采集设备采集的宠物图像及采集信息,所述采集信息包括所述图像采集设备的地理位置、设备标识号及采集时间;Acquiring pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
    识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储;Identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information;
    判断任意两个宠物图像的采集信息是否相同;Determine whether the collected information of any two pet images is the same;
    当所述地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行修正得到第一类别概率;When the geographic location, device identification number, and collection time are all the same, use the first correction model to correct the category probability to obtain the first category probability;
    当所述地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行修正得到第二类别概率;When the geographic location and the device identification number are the same, but the collection time is different, the second correction model is used to correct the category probability to obtain the second category probability;
    当所述采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行修正得到第三类别概率;When the collection time is the same, but the geographic location and the device identification number are not the same, the third correction model is used to correct the category probability to obtain the third category probability;
    基于修正后的类别概率确定所述宠物的活动轨迹。The activity trajectory of the pet is determined based on the corrected category probability.
  10. 根据权利要求9所述的终端,其中,所述处理器执行所述计算机可读指令以实现采用第一修正模型对所述类别概率进行修正得到第一类别概率时,具体包括:The terminal according to claim 9, wherein when the processor executes the computer-readable instructions to implement the first correction model to correct the category probability to obtain the first category probability, the specific steps include:
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100021
    采用如下公式对修正后的类别概率进行归一化后得到第一类别概率；Use the following formula to normalize the corrected category probability to obtain the first category probability;
    Figure PCTCN2020111880-appb-100022
    其中，γ为修正因子系数，Figure PCTCN2020111880-appb-100023 与 Figure PCTCN2020111880-appb-100024 分别为同一个图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100025 为所述第一类别概率。Here, γ is the correction factor coefficient, Figure PCTCN2020111880-appb-100023 and Figure PCTCN2020111880-appb-100024 are respectively the initial category probabilities of two pets captured by the same image acquisition device at the same moment, and Figure PCTCN2020111880-appb-100025 is the first category probability.
  11. 根据权利要求9所述的终端,其中,所述处理器执行所述计算机可读指令以实现采用第二修正模型对所述类别概率进行修正得到第二类别概率时,具体包括:The terminal according to claim 9, wherein, when the processor executes the computer-readable instruction to implement the second correction model to correct the category probability to obtain the second category probability, it specifically includes:
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100026
    采用如下公式对修正后的类别概率进行归一化后得到第二类别概率；Use the following formula to normalize the corrected category probability to obtain the second category probability;
    Figure PCTCN2020111880-appb-100027
    其中，Figure PCTCN2020111880-appb-100028 为处罚因子，γ为修正因子系数，t为时间，Figure PCTCN2020111880-appb-100029 与 Figure PCTCN2020111880-appb-100030 分别为同一个图像采集设备在不同时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100031 为所述第二类别概率。Here, Figure PCTCN2020111880-appb-100028 is the penalty factor, γ is the correction factor coefficient, t is the time, Figure PCTCN2020111880-appb-100029 and Figure PCTCN2020111880-appb-100030 are respectively the initial category probabilities of two pets captured by the same image acquisition device at different moments, and Figure PCTCN2020111880-appb-100031 is the second category probability.
  12. 根据权利要求9所述的终端,其中,所述处理器执行所述计算机可读指令以实现采用第三修正模型对所述类别概率进行修正得到第三类别概率时,具体包括:The terminal according to claim 9, wherein, when the processor executes the computer-readable instructions to implement a third correction model to correct the category probability to obtain the third category probability, the specific steps specifically include:
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100032
    采用如下公式对修正后的类别概率进行归一化后得到第三类别概率；Use the following formula to normalize the corrected category probability to obtain the third category probability;
    Figure PCTCN2020111880-appb-100033
    其中，Figure PCTCN2020111880-appb-100034 为处罚因子，γ为修正因子系数，Figure PCTCN2020111880-appb-100035，l为距离，Figure PCTCN2020111880-appb-100036 与 Figure PCTCN2020111880-appb-100037 分别为不同图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100038 与 Figure PCTCN2020111880-appb-100039 分别为不同图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100040 为所述第三类别概率。Here, Figure PCTCN2020111880-appb-100034 is the penalty factor, γ is the correction factor coefficient, Figure PCTCN2020111880-appb-100035, l is the distance, Figure PCTCN2020111880-appb-100036 and Figure PCTCN2020111880-appb-100037 are respectively the initial category probabilities of two pets captured by different image acquisition devices at the same moment, Figure PCTCN2020111880-appb-100038 and Figure PCTCN2020111880-appb-100039 are respectively the initial category probabilities of two pets captured by different image acquisition devices at the same moment, and Figure PCTCN2020111880-appb-100040 is the third category probability.
  13. 根据权利要求9至12中任意一项所述的终端，其中，所述处理器执行所述计算机可读指令以实现识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储时，具体包括：The terminal according to any one of claims 9 to 12, wherein when the processor executes the computer-readable instructions to identify the identification information of the pet in the pet image and to store the identification information in association with the pet image and the collected information, the execution specifically includes:
    将所述宠物图像输入预先训练好的宠物标识识别模型中;Input the pet image into a pre-trained pet identification recognition model;
    获取所述宠物标识识别模型的识别结果;Acquiring the recognition result of the pet identification recognition model;
    根据所述识别结果确定所述宠物的标识信息。The identification information of the pet is determined according to the recognition result.
  14. 根据权利要求9至12中任意一项所述的终端,其中,所述处理器执行所述计算机可读指令以实现将所述宠物图像输入预先训练好的宠物标识识别模型中时,具体包括:The terminal according to any one of claims 9 to 12, wherein when the processor executes the computer-readable instructions to input the pet image into a pre-trained pet identification recognition model, it specifically includes:
    检测出所述宠物图像中的目标区域;Detecting the target area in the pet image;
    对所述宠物图像中的所述目标区域进行裁剪;Crop the target area in the pet image;
    将裁剪出的所述目标区域作为输入图像输入预先训练好的姿态识别模型中。The cropped target area is input, as the input image, into a pre-trained posture recognition model.
  15. 根据权利要求9至12中任意一项所述的终端,其中,所述处理器执行所述计算机可读指令以实现基于修正后的类别概率确定所述宠物的活动轨迹时,具体包括:The terminal according to any one of claims 9 to 12, wherein when the processor executes the computer-readable instructions to determine the pet's activity trajectory based on the corrected category probability, it specifically includes:
    获取每个宠物图像对应的所有修正后的类别概率;Obtain all the corrected category probabilities corresponding to each pet image;
    从所述所有修正后的类别概率中筛选出最大的类别概率作为所述宠物图像的目标类别概率;Selecting the largest category probability from all the corrected category probabilities as the target category probability of the pet image;
    获取具有相同目标类别概率的宠物图像对应的采集信息;Obtain collection information corresponding to pet images with the same target category probability;
    根据所述采集信息确定所述宠物的活动轨迹。The activity track of the pet is determined according to the collected information.
  16. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机可读指令,其中,所述计算机可读指令被处理器执行时实现以下步骤:A computer-readable storage medium having computer-readable instructions stored thereon, wherein the computer-readable instructions implement the following steps when executed by a processor:
    初始化每一个宠物类别的类别概率;Initialize the category probability of each pet category;
    获取图像采集设备采集的宠物图像及采集信息,所述采集信息包括所述图像采集设备的地理位置、设备标识号及采集时间;Acquiring pet images and collection information collected by an image collection device, where the collection information includes the geographic location, device identification number, and collection time of the image collection device;
    识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储;Identifying identification information of the pet in the pet image and storing the identification information in association with the pet image and the collected information;
    判断任意两个宠物图像的采集信息是否相同;Determine whether the collected information of any two pet images is the same;
    当所述地理位置、设备标识号及采集时间均相同时,采用第一修正模型对所述类别概率进行修正得到第一类别概率;When the geographic location, device identification number, and collection time are all the same, use the first correction model to correct the category probability to obtain the first category probability;
    当所述地理位置及设备标识号相同,但采集时间不同时,采用第二修正模型对所述类别概率进行修正得到第二类别概率;When the geographic location and the device identification number are the same, but the collection time is different, the second correction model is used to correct the category probability to obtain the second category probability;
    当所述采集时间相同,但地理位置及设备标识号均不相同时,采用第三修正模型对所述类别概率进行修正得到第三类别概率;When the collection time is the same, but the geographic location and the device identification number are not the same, the third correction model is used to correct the category probability to obtain the third category probability;
    基于修正后的类别概率确定所述宠物的活动轨迹。The activity trajectory of the pet is determined based on the corrected category probability.
  17. 根据权利要求16所述的计算机可读存储介质,其中,所述计算机可读指令被所述处理器执行以实现采用第一修正模型对所述类别概率进行修正得到第一类别概率时,具体包括:The computer-readable storage medium according to claim 16, wherein the computer-readable instructions are executed by the processor to implement the first correction model to correct the category probability to obtain the first category probability, specifically comprising :
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100041
    采用如下公式对修正后的类别概率进行归一化后得到第一类别概率；Use the following formula to normalize the corrected category probability to obtain the first category probability;
    Figure PCTCN2020111880-appb-100042
    其中，γ为修正因子系数，Figure PCTCN2020111880-appb-100043 与 Figure PCTCN2020111880-appb-100044 分别为同一个图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100045 为所述第一类别概率。Here, γ is the correction factor coefficient, Figure PCTCN2020111880-appb-100043 and Figure PCTCN2020111880-appb-100044 are respectively the initial category probabilities of two pets captured by the same image acquisition device at the same moment, and Figure PCTCN2020111880-appb-100045 is the first category probability.
  18. 根据权利要求16所述的计算机可读存储介质,其中,所述计算机可读指令被所述处理器执行以实现采用第二修正模型对所述类别概率进行修正得到第二类别概率时,具体包括:The computer-readable storage medium according to claim 16, wherein the computer-readable instructions are executed by the processor to implement the second correction model to correct the category probability to obtain the second category probability, which specifically includes :
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100046
    采用如下公式对修正后的类别概率进行归一化后得到第二类别概率；Use the following formula to normalize the corrected category probability to obtain the second category probability;
    Figure PCTCN2020111880-appb-100047
    其中，Figure PCTCN2020111880-appb-100048 为处罚因子，γ为修正因子系数，t为时间，Figure PCTCN2020111880-appb-100049 与 Figure PCTCN2020111880-appb-100050 分别为同一个图像采集设备在不同时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100051 为所述第二类别概率。Here, Figure PCTCN2020111880-appb-100048 is the penalty factor, γ is the correction factor coefficient, t is the time, Figure PCTCN2020111880-appb-100049 and Figure PCTCN2020111880-appb-100050 are respectively the initial category probabilities of two pets captured by the same image acquisition device at different moments, and Figure PCTCN2020111880-appb-100051 is the second category probability.
  19. 根据权利要求16所述的计算机可读存储介质,其中,所述计算机可读指令被所述处理器执行以实现采用第三修正模型对所述类别概率进行修正得到第三类别概率时,具体包括:The computer-readable storage medium according to claim 16, wherein when the computer-readable instructions are executed by the processor to implement a third correction model to correct the category probability to obtain the third category probability, it specifically includes :
    采用如下公式对所述类别概率进行修正；Use the following formula to correct the category probability;
    Figure PCTCN2020111880-appb-100052
    采用如下公式对修正后的类别概率进行归一化后得到第三类别概率；Use the following formula to normalize the corrected category probability to obtain the third category probability;
    Figure PCTCN2020111880-appb-100053
    其中，Figure PCTCN2020111880-appb-100054 为处罚因子，γ为修正因子系数，Figure PCTCN2020111880-appb-100055，l为距离，Figure PCTCN2020111880-appb-100056 与 Figure PCTCN2020111880-appb-100057 分别为不同图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100058 与 Figure PCTCN2020111880-appb-100059 分别为不同图像采集设备在同一时刻采集到的两个宠物的初始类别概率，Figure PCTCN2020111880-appb-100060 为所述第三类别概率。Here, Figure PCTCN2020111880-appb-100054 is the penalty factor, γ is the correction factor coefficient, Figure PCTCN2020111880-appb-100055, l is the distance, Figure PCTCN2020111880-appb-100056 and Figure PCTCN2020111880-appb-100057 are respectively the initial category probabilities of two pets captured by different image acquisition devices at the same moment, Figure PCTCN2020111880-appb-100058 and Figure PCTCN2020111880-appb-100059 are respectively the initial category probabilities of two pets captured by different image acquisition devices at the same moment, and Figure PCTCN2020111880-appb-100060 is the third category probability.
  20. 根据权利要求16至19中任意一项所述的计算机可读存储介质，其中，所述计算机可读指令被所述处理器执行以实现识别所述宠物图像中的宠物的标识信息并将所述标识信息与所述宠物图像及所述采集信息关联存储时，具体包括：The computer-readable storage medium according to any one of claims 16 to 19, wherein when the computer-readable instructions are executed by the processor to identify the identification information of the pet in the pet image and to store the identification information in association with the pet image and the collected information, the execution specifically includes:
    将所述宠物图像输入预先训练好的宠物标识识别模型中;Input the pet image into a pre-trained pet identification recognition model;
    获取所述宠物标识识别模型的识别结果;Acquiring the recognition result of the pet identification recognition model;
    根据所述识别结果确定所述宠物的标识信息。The identification information of the pet is determined according to the recognition result.
PCT/CN2020/111880 2019-09-03 2020-08-27 Urban pet motion trajectory monitoring method based on image recognition, and related devices WO2021043074A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910829499.XA CN110751675B (en) 2019-09-03 2019-09-03 Urban pet activity track monitoring method based on image recognition and related equipment
CN201910829499.X 2019-09-03

Publications (1)

Publication Number Publication Date
WO2021043074A1 true WO2021043074A1 (en) 2021-03-11

Family

ID=69276012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111880 WO2021043074A1 (en) 2019-09-03 2020-08-27 Urban pet motion trajectory monitoring method based on image recognition, and related devices

Country Status (2)

Country Link
CN (1) CN110751675B (en)
WO (1) WO2021043074A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550490A (en) * 2022-02-22 2022-05-27 北京信路威科技股份有限公司 Parking space statistical method and system for parking lot, computer equipment and storage medium
CN117692767A (en) * 2024-02-02 2024-03-12 深圳市积加创新技术有限公司 Low-power consumption monitoring system based on scene self-adaptive dynamic time-sharing strategy

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751675B (en) * 2019-09-03 2023-08-11 平安科技(深圳)有限公司 Urban pet activity track monitoring method based on image recognition and related equipment
CN111354024B (en) * 2020-04-10 2023-04-21 深圳市五元科技有限公司 Behavior prediction method of key target, AI server and storage medium
CN112529020B (en) * 2020-12-24 2024-05-24 携程旅游信息技术(上海)有限公司 Animal identification method, system, equipment and storage medium based on neural network
CN112904778B (en) * 2021-02-02 2022-04-15 东北林业大学 Wild animal intelligent monitoring method based on multi-dimensional information fusion

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9852363B1 (en) * 2012-09-27 2017-12-26 Google Inc. Generating labeled images
CN109934176A (en) * 2019-03-15 2019-06-25 艾特城信息科技有限公司 Pedestrian's identifying system, recognition methods and computer readable storage medium
CN109934293A (en) * 2019-03-15 2019-06-25 苏州大学 Image-recognizing method, device, medium and obscure perception convolutional neural networks
CN110751675A (en) * 2019-09-03 2020-02-04 平安科技(深圳)有限公司 Urban pet activity track monitoring method based on image recognition and related equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10713500B2 (en) * 2016-09-12 2020-07-14 Kennesaw State University Research And Service Foundation, Inc. Identification and classification of traffic conflicts using live video images
CN109376786A (en) * 2018-10-31 2019-02-22 中国科学院深圳先进技术研究院 A kind of image classification method, device, terminal device and readable storage medium storing program for executing
CN110163301A (en) * 2019-05-31 2019-08-23 北京金山云网络技术有限公司 A kind of classification method and device of image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9852363B1 (en) * 2012-09-27 2017-12-26 Google Inc. Generating labeled images
CN109934176A (en) * 2019-03-15 2019-06-25 艾特城信息科技有限公司 Pedestrian's identifying system, recognition methods and computer readable storage medium
CN109934293A (en) * 2019-03-15 2019-06-25 苏州大学 Image-recognizing method, device, medium and obscure perception convolutional neural networks
CN110751675A (en) * 2019-09-03 2020-02-04 平安科技(深圳)有限公司 Urban pet activity track monitoring method based on image recognition and related equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550490A (en) * 2022-02-22 2022-05-27 北京信路威科技股份有限公司 Parking space statistical method and system for parking lot, computer equipment and storage medium
CN114550490B (en) * 2022-02-22 2023-12-22 北京信路威科技股份有限公司 Parking space statistics method, system, computer equipment and storage medium of parking lot
CN117692767A (en) * 2024-02-02 2024-03-12 深圳市积加创新技术有限公司 Low-power consumption monitoring system based on scene self-adaptive dynamic time-sharing strategy
CN117692767B (en) * 2024-02-02 2024-06-11 深圳市积加创新技术有限公司 Low-power consumption monitoring system based on scene self-adaptive dynamic time-sharing strategy

Also Published As

Publication number Publication date
CN110751675B (en) 2023-08-11
CN110751675A (en) 2020-02-04

Similar Documents

Publication Publication Date Title
WO2021043074A1 (en) Urban pet motion trajectory monitoring method based on image recognition, and related devices
WO2021043073A1 (en) Urban pet movement trajectory monitoring method based on image recognition and related devices
US11232327B2 (en) Smart video surveillance system using a neural network engine
JP6488083B2 (en) Hybrid method and system of video and vision based access control for parking space occupancy determination
CN111507989A (en) Training generation method of semantic segmentation model, and vehicle appearance detection method and device
US20200125923A1 (en) System and Method for Detecting Anomalies in Video using a Similarity Function Trained by Machine Learning
US11475671B2 (en) Multiple robots assisted surveillance system
CN106791710A (en) Object detection method, device and electronic equipment
US20210056312A1 (en) Video blocking region selection method and apparatus, electronic device, and system
Guzhva et al. Now you see me: Convolutional neural network based tracker for dairy cows
US20230060211A1 (en) System and Method for Tracking Moving Objects by Video Data
CN105844659A (en) Moving part tracking method and device
US20190096066A1 (en) System and Method for Segmenting Out Multiple Body Parts
CN112836683B (en) License plate recognition method, device, equipment and medium for portable camera equipment
CN111985452B (en) Automatic generation method and system for personnel movement track and foot drop point
CN114360261B (en) Vehicle reverse running identification method and device, big data analysis platform and medium
Ng et al. Outdoor illegal parking detection system using convolutional neural network on Raspberry Pi
CN113689475A (en) Cross-border head trajectory tracking method, equipment and storage medium
JP2021106330A (en) Information processing apparatus, information processing method, and program
CN115661521A (en) Fire hydrant water leakage detection method and system, electronic equipment and storage medium
CN114038040A (en) Machine room inspection monitoring method, device and equipment
CN113012223A (en) Target flow monitoring method and device, computer equipment and storage medium
CN112153341A (en) Task supervision method, device and system, electronic equipment and storage medium
Ouseph et al. Machine Learning Based Smart Parking Management for Intelligent Transportation Systems
CN112989892A (en) Animal monitoring method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20860659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20860659

Country of ref document: EP

Kind code of ref document: A1