CN115424211A - Civilized dog raising terminal operation method and device based on big data and terminal

Civilized dog raising terminal operation method and device based on big data and terminal

Info

Publication number
CN115424211A
Authority
CN
China
Prior art keywords
dog
target
abnormal behavior
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211213842.6A
Other languages
Chinese (zh)
Other versions
CN115424211B (en)
Inventor
宋程
刘保国
胡金有
吴浩
梁开岩
郭玮鹏
李海
巩京京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingchong Kingdom Beijing Technology Co ltd
Original Assignee
Xingchong Kingdom Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingchong Kingdom Beijing Technology Co ltd filed Critical Xingchong Kingdom Beijing Technology Co ltd
Priority to CN202211213842.6A priority Critical patent/CN115424211B/en
Publication of CN115424211A publication Critical patent/CN115424211A/en
Application granted granted Critical
Publication of CN115424211B publication Critical patent/CN115424211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of dog management, in particular to a civilized dog raising supervision method, and specifically to a big-data-based civilized dog raising terminal operation method, device and terminal. The method comprises: acquiring video images within an area; detecting dogs in the acquired images; identifying abnormal behavior of the detected dogs; matching the dog exhibiting the abnormal behavior against a target dog image in a configured dog database; obtaining the corresponding user information from the matched target dog image; and sending the abnormal behavior information to the corresponding user terminal, thereby realizing management of abnormal dog behavior.

Description

Civilized dog raising terminal operation method and device based on big data and terminal
Technical Field
The application relates to the technical field of dog management, in particular to a civilized dog raising supervision method, and specifically to a civilized dog raising terminal operation method, device and terminal based on big data.
Background
In recent years, with rising living standards, more and more urban residents have begun to keep pet dogs to relieve the pressures of city life and to ease loneliness through their companionship. However, China's population density is high, and pet dogs often share public areas with residents; walking dogs off-leash, abandoning pet dogs so that they become strays, and failing to vaccinate pet dogs are frequent examples of non-compliant dog-keeping behavior. As a result, dog attacks, disputes between residents caused by dogs, and related security incidents occur continually, and rabies can even threaten residents' lives. A scientific, standardized system for guiding residents in keeping pet dogs is therefore urgently needed.
In the prior art, dog management is mainly performed through external devices fitted to the dog. For example, a heart-rate acquisition device and motion sensors at key joints are configured to collect real-time physiological data of the dog, and abnormal behavior is identified by comparing the collected data. In practice, however, few dogs wear such peripheral devices, and the devices are easily damaged in the natural environment, which reduces their effectiveness. Moreover, even when an external device does capture abnormal behavior, it cannot by itself manage that behavior, issue reminders, or handle the incident.
Disclosure of Invention
In order to solve the above technical problems, the application provides a big-data-based civilized dog raising terminal operation method, device and terminal, which identify abnormal dog behavior in outdoor environments using existing external information acquisition devices, and send corresponding information and manage the dog based on the identified abnormal behavior.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
In a first aspect, the terminal communicates with a dog database, a community property management terminal, a user terminal and a video acquisition terminal; the dog database is configured with dog basic information, which comprises basic images and associated information of the corresponding dogs. The operation method comprises the following steps: acquiring a video image in a target area based on the video acquisition terminal; identifying the video image based on a preset detection frame to obtain images containing dogs; acquiring a video to be detected containing the dog images and a plurality of dog images; performing abnormal behavior detection on the dogs in the video to be detected based on a preset dog abnormal behavior detection model to obtain abnormal behaviors of the dogs; comparing the dog images of the abnormally behaving dog with the plurality of basic images in the dog database to obtain a target dog image; and determining the dog owner based on the associated information corresponding to the target dog image, sending reminder information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information in chronological order into the storage space of the corresponding dog in the dog database.
In a first implementation manner of the first aspect, identifying the video image based on a preset detection frame to obtain an image containing a dog comprises the following steps: dividing a current frame of the video image into a plurality of regions, identifying and anchoring each region respectively, and obtaining a plurality of detection frames in each region, wherein each detection frame corresponds to a probability value and a central point, and each central point corresponds to an object type; selecting all quasi-target central points corresponding to the target object types to be retained in the current frame, and performing frame selection again according to each quasi-target central point and the preset length and width of the corresponding object type to obtain a plurality of quasi-target frames; and de-duplicating the plurality of quasi-target frames to obtain the target object frame containing the dog.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, acquiring the video to be detected containing the dog images and a plurality of dog images comprises: labeling the target object frame, and acquiring the video to be detected containing the target object frame from the video image based on the labeling information; extracting video frames from the video to be detected to obtain a plurality of video frames to be processed containing a plurality of target detection frames; and extracting the images within the target detection frames in the video frames to be processed to obtain a plurality of dog images.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, state detection is performed on the dog in the plurality of dog images based on a preset dog abnormal behavior detection model to obtain the abnormal behaviors of the dog. The dog abnormal behavior model comprises an abnormal behavior recognition network meeting the network convergence requirement and a classifier; the abnormal behavior recognition network is used for acquiring abnormal behavior features in the plurality of dog images, and the classifier is used for classifying the abnormal behavior features to determine the abnormal behavior, specifically comprising: identifying the plurality of dog images with the abnormal behavior recognition network meeting the network convergence requirement to obtain the corresponding abnormal behavior features in the plurality of dog images; and classifying the abnormal behavior features with a classifier meeting the training result requirement to obtain a classification label, and determining the abnormal behavior based on the classification label.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner, the training method of the classifier comprises: configuring a pre-trained initial classifier; obtaining a plurality of dog sample images and labeling each sample image with a classification label; performing feature extraction on the plurality of sample images based on the abnormal behavior recognition network meeting the network convergence requirement to obtain a feature value for each sample image; normalizing the classification labels and feature values and converting them into a uniform format to obtain training data; and training the initial classifier based on the training data, the trained classifier being obtained once its output precision reaches a preset precision.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner, comparing the plurality of dog images of the abnormally behaving dog with the plurality of basic images in the dog database to obtain the target dog image comprises the following steps: processing the plurality of dog images by binarization to obtain a plurality of preprocessed image contours, obtaining the area values of the preprocessed image contours, and comparing the area values to determine an optimal image; acquiring a target feature map of the optimal image; acquiring a plurality of target detection points in the target feature map; acquiring coordinate parameters of the plurality of target detection points, and obtaining relative distances between the plurality of target detection points based on the plurality of coordinate parameters; and comparing the relative distances of the plurality of target detection points with the relative distances of the plurality of target detection points in the dog database to obtain the target dog image.
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner, the method for obtaining the target dog image based on comparing the relative distances between the multiple target detection points and the relative distances between the multiple target detection points in the dog database specifically includes: comparing the similarity of the relative distance of any target detection point with the relative distance of the corresponding target detection point in the dog database to obtain the similarity of the relative distances of the plurality of target detection points; fusing the similarity of the relative distances of the plurality of target detection points to obtain final similarity; and determining the corresponding target dog image based on the final similarity.
With reference to the third possible implementation manner of the first aspect, in a seventh possible implementation manner, sending a reminding message to a corresponding user terminal based on a determined dog owner, includes the following steps: and determining corresponding reminding information based on the label of the abnormal behavior, determining a reminding mode based on the reminding type, and sending the reminding information to the corresponding user terminal based on the determined reminding mode.
In a second aspect, a civilized dog raising terminal operation device based on big data comprises: a video image acquisition module, used for acquiring a video image in a target area based on the video acquisition terminal; an image acquisition module, used for identifying the video image based on a preset detection frame to obtain an image containing a dog; an information acquisition module, used for acquiring the video to be detected containing the dog images and a plurality of dog images; an abnormal behavior acquisition module, used for performing abnormal behavior detection on the dogs in the video to be detected based on a preset dog abnormal behavior detection model to obtain the abnormal behaviors of the dogs; a target dog image acquisition module, used for comparing the dog images of the abnormally behaving dog with the plurality of basic images in the dog database to obtain a target dog image; and an information processing module, used for determining the dog owner from the associated information corresponding to the target dog image, sending reminder information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information in chronological order into the storage space of the corresponding dog in the dog database.
In a third aspect, a terminal device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to any one of the above when executing the computer program.
In the technical solution provided by the embodiments of the present application, video images within an area are acquired, dogs are detected in the acquired images, abnormal behavior of the detected dogs is identified, the abnormally behaving dog is matched against the target dog image in the configured dog database, the corresponding user information is obtained from the matched target dog image, and the abnormal behavior of the dog is sent to the corresponding user terminal, thereby realizing management of abnormal dog behavior. In this embodiment, the result can be accurately determined through the configured model and comparison method, and management of the dog can be realized based on the configured information reminding method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
The methods, systems, and/or processes of the figures are further described in accordance with the exemplary embodiments, which will be described in detail with reference to the drawings. These exemplary embodiments are non-limiting, and like reference numerals represent similar structures throughout the several views of the drawings.
Fig. 1 is a schematic structural diagram of a system provided in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Fig. 3 is a flowchart of a method for operating a big data-based civilized dog terminal according to some embodiments of the present disclosure.
Fig. 4 is a block diagram of a big data-based civilized dog terminal operating device according to some embodiments of the present application.
Detailed Description
In order to better understand the technical solutions of the present application, detailed descriptions are provided below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions of the present application rather than limitations thereof, and the technical features in the embodiments and examples may be combined with each other in the absence of conflict.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant guidance. It will be apparent, however, to one skilled in the art that the present application may be practiced without these specific details. In other instances, well-known methods, procedures, systems, compositions, and/or circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present application.
Flowcharts are used herein to illustrate the operations performed by systems according to embodiments of the present application. It should be expressly understood that the operations in the flowcharts need not be performed in the order shown; they may instead be performed in reverse order or simultaneously. In addition, at least one other operation may be added to a flowchart, and one or more operations may be removed from it.
Before the embodiments of the present invention are described in further detail, the terms and expressions used in the embodiments are explained; these terms and expressions are subject to the following explanations.
(1) In response to: indicates the condition or state on which the performed operation depends; when that condition or state is satisfied, the one or more performed operations may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
(2) Based on: indicates the condition or state on which the performed operation depends; when that condition or state is satisfied, the one or more performed operations may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
(3) A convolutional neural network is a mathematical or computational model that mimics the structure and function of a biological neural network (the central nervous system of an animal, particularly the brain) and is used to estimate or approximate functions.
(4) A classifier is a general term for methods that classify samples in data mining, including algorithms such as decision trees, logistic regression, naive Bayes and neural networks; in this embodiment it refers to a neural network algorithm.
The abnormal behavior identification method provided by the embodiments of the present invention relates to the field of Artificial Intelligence (AI). Artificial intelligence technology is a comprehensive discipline covering a wide range of fields and involving both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
The main application scenario of the technical solution provided by the embodiments of the present application is the identification of abnormal dog behavior in a target area; the target area currently configured is a residential community. That is, the technical solution of this embodiment is mainly used to identify abnormal dog behavior within a residential community and, for each identified abnormal behavior, send a reminder message to the corresponding user in a corresponding reminder mode, thereby realizing dog management. The abnormal behaviors of a user's dog are recorded and stored, and the dog is graded at set time intervals, the grading being based on the number of abnormal behaviors. The abnormal behaviors and the grade of the dog are used to manage the dog and its owner; management measures may include raising property-management fees and imposing corresponding fines. For fines, the abnormal behavior of the dog is sent to the urban management department, which has the authority to fine the person concerned.
In this embodiment, the identification of abnormal dog behavior is based on video images, which may be acquired by the video monitoring devices already configured in a residential community; abnormal behavior can therefore be identified without configuring additional devices and without increasing cost. The identification of abnormal behavior mainly comprises the following process: extracting the video frames containing dogs from the video images, identifying the dogs in those frames to obtain an identification result, and classifying the abnormal behavior based on the identification result to obtain a classification result of the abnormal behavior.
Referring to fig. 1, based on the above technical background, the embodiment of the present application provides a dog operation system 10, which comprises a terminal device 200 that mainly implements the operation management of dogs, and further comprises a dog database 100, a community property management terminal 300, a user terminal 400 and a video acquisition terminal 500 that communicate with the terminal device. In this embodiment, the dog database is configured with the basic information of dogs within the community, the basic information comprising basic images of the corresponding dogs and associated information, where the associated information is information used for understanding the situation of the dogs, such as the corresponding user information, vaccination information and age information. The community property management terminal is used by community property managers. The user terminal is used for receiving information sent by the terminal device, in particular the abnormal behavior information of the dog corresponding to that user terminal, and can also receive information acquisition commands and other push information from the community property management terminal. The video acquisition terminal is used for acquiring video images within the community and sending them to the terminal device for identifying abnormal behavior. In this embodiment, the system may further comprise a public security dog registration and record terminal 600, an urban management audit and law enforcement management terminal 700, and a pet hospital management terminal 800; these terminals implement the corresponding management responsibilities and functions, and the terminal device can obtain the information made available by them through a communication interface, thereby realizing the management of dogs.
In this embodiment, information is sent over a network, and an association relationship needs to be established between the terminal device, the user terminal and the other terminals before the application is used; in particular, the association between the terminal device and a user terminal is established by registering the user terminal. The terminal device can serve a plurality of user terminals, and the user terminals communicate with the terminal device using passwords and other encryption methods.
Based on the above technical background and referring to fig. 2, a terminal device 200 provided in an embodiment of the present application comprises a memory 210, a processor 220, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor performs the civilized dog raising terminal operation method and can realize the identification and management of abnormal dog behavior.
In this embodiment, the terminal may be a server; in terms of physical structure, the server comprises a memory, a processor and a communication unit. The memory, processor and communication unit are electrically connected to one another, directly or indirectly, to enable the transfer and interaction of data; for example, the components may be electrically connected via one or more communication buses or signal lines. The memory is used for storing specific information and programs, and the communication unit is used for sending the processed information to the corresponding user side.
In this embodiment, the memory is divided into two storage areas: a program storage unit and a data storage unit. The program storage unit is equivalent to a firmware area; its read-write permission is set to read-only, and the data stored there cannot be erased or changed. The data in the data storage unit can be erased and rewritten, and when the data storage area is full, newly written data overwrites the oldest historical data.
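The overwrite-oldest behavior of the data storage unit described above can be illustrated with a short Python sketch; the class name, capacity value and record format below are illustrative assumptions, not part of the embodiment.

```python
from collections import deque

class DataStorageUnit:
    """Illustrative data storage unit: once capacity is reached,
    newly written records overwrite the oldest history."""

    def __init__(self, capacity: int = 1000):   # capacity is an assumed value
        self._records = deque(maxlen=capacity)  # deque drops the oldest item when full

    def write(self, record: dict) -> None:
        self._records.append(record)            # overwrites the oldest record once full

    def read_all(self) -> list:
        return list(self._records)              # oldest-first history

storage = DataStorageUnit(capacity=3)
for i in range(5):
    storage.write({"event": i})
print(storage.read_all())                       # keeps only events 2, 3, 4
```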
The memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor may be an integrated circuit chip having signal processing capabilities. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 3, this embodiment provides a big-data-based civilized dog raising terminal operation method, which comprises the following steps:
and S310, acquiring a video image in the target area based on the video acquisition terminal.
In this embodiment, the identification of abnormal dog behavior is mainly based on video images, which are acquired by the video acquisition terminal. The video acquisition terminal, however, only needs to be the surveillance video equipment already configured in the community; no additional video acquisition device needs to be installed, and identification is performed on the daily real-time video images captured by the video acquisition terminal.
And S320, identifying the video image based on a preset detection frame to obtain an image containing the dog.
In this embodiment, the basis for identifying abnormal dog behavior is acquiring the images containing dogs from the video image. In this method, the video image is mainly identified based on the detection frame to obtain the target image containing the dog. To this end, the method mainly comprises the following steps:
Dividing a current frame of the video image into a plurality of regions, identifying and anchoring each region respectively, and obtaining a plurality of detection frames in each region; each detection frame corresponds to a probability value and a central point, and each central point corresponds to an object type. The objects in the acquired video images are classified into people, dogs, numbers, buildings and the like. Firstly, the current frame of the video image is divided into N×M regions by rows and columns; each region is identified and anchored respectively, yielding A anchor frames of different sizes in each region. Each frame corresponds to a probability value and a central point, each central point corresponds to an object type, and there are B types in total.
All quasi-target central points corresponding to the target object types to be retained in the current frame are then selected, and frame selection is performed again according to each quasi-target central point and the preset length and width of the corresponding object type to obtain a plurality of quasi-target frames. In this step, all quasi-target central points corresponding to dogs are selected, and frame selection is performed again according to each quasi-target central point and the preset length and width for dogs; the length and width of a quasi-target frame may exactly match the height and width of the dog, or may exceed them.
The plurality of quasi-target frames are then de-duplicated to obtain the target object frame containing the dog. In this step, the selected quasi-target frames of the dog are de-duplicated to obtain the target object frame. In the invention, by adding category screening before the de-duplication step, uninteresting categories are filtered out, which reduces the computation of non-maximum suppression (NMS), speeds up data processing and saves time (a sketch of this screening-then-NMS step is given after this paragraph).
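The following Python sketch illustrates the idea of screening for the dog category before de-duplication and then applying IoU-based NMS; the box format, thresholds and IoU criterion are common-practice assumptions rather than the exact procedure of the embodiment.

```python
import numpy as np

def filter_and_nms(boxes, scores, classes, keep_class, iou_thr=0.5):
    """boxes: (N, 4) arrays of [x1, y1, x2, y2]; keep only `keep_class`, then NMS."""
    mask = classes == keep_class                 # category screening before NMS
    boxes, scores = boxes[mask], scores[mask]
    order = scores.argsort()[::-1]               # highest-probability frames first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # IoU of the kept frame against the remaining quasi-target frames
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = order[1:][iou < iou_thr]         # drop duplicates of the kept frame
    return boxes[keep], scores[keep]
```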
And S330, acquiring the video to be detected containing the dog images and a plurality of dog images.
In this embodiment, the video segment containing the dog is acquired from the video image according to the target object frame, and a plurality of dog images are extracted from it. This step mainly serves the subsequent identification of abnormal dog behavior and the comparison against the basic dog images in the dog database, from which the target dog is obtained.
Specifically, in this embodiment, this process comprises the following steps: labeling the target object frame, and acquiring the video to be detected containing the target object frame from the video image based on the labeling information; extracting video frames from the video to be detected to obtain a plurality of video frames to be processed containing a plurality of target detection frames; and extracting the images within the target detection frames in the video frames to be processed to obtain a plurality of dog images.
In this embodiment, specific information is labeled onto the target detection frame; the specific information may be a preset keyword, where the keyword may be an easily recognized word such as "dog" or "target", or a special text symbol. The label is used to subsequently obtain the specific images containing the dog from the whole video image: because the video images containing the dog are determined by the target detection frame, once the target detection frame has been labeled, the video containing the dog can be quickly retrieved from the video image (see the extraction sketch below).
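A minimal Python/OpenCV sketch of this extraction step, assuming a hypothetical per-frame annotation structure carrying the label and the target detection frame coordinates; the function name and annotation format are assumptions for illustration only.

```python
import cv2

def extract_dog_crops(video_path, annotations, keyword="dog"):
    """annotations: assumed dict {frame_index: (label, (x1, y1, x2, y2))}
    produced from the labeled target object frames; returns cropped dog images."""
    cap = cv2.VideoCapture(video_path)
    crops, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in annotations:
            label, (x1, y1, x2, y2) = annotations[idx]
            if keyword in label:                  # keep only frames labeled as dog
                crops.append(frame[y1:y2, x1:x2].copy())
        idx += 1
    cap.release()
    return crops
```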
And step S340, carrying out abnormal behavior detection on the dogs in the plurality of videos to be detected based on a preset abnormal behavior detection model of the dogs to obtain the abnormal behaviors of the dogs.
In this embodiment, the abnormal behaviors of the dog are obtained through a preset dog abnormal behavior detection model, which comprises an abnormal behavior recognition network meeting network convergence requirements and a classifier; the abnormal behavior recognition network is used for obtaining abnormal behavior features in a plurality of dog images, and the classifier is used for classifying the abnormal behavior features to determine the abnormal behavior.
The following procedure is included for this step:
and identifying the plurality of images of the dogs according to an abnormal behavior identification network meeting the network convergence requirement to obtain the corresponding abnormal behavior characteristics in the plurality of images of the dogs.
And classifying the abnormal behavior features according to a classifier meeting the training result requirement to obtain a classification label, and determining the abnormal behavior based on the classification label.
In this embodiment, the abnormal behavior recognition network satisfying the network convergence requirement is obtained through model training. The training method is mainly implemented by configuring an initial abnormal behavior recognition network and adjusting its weight values, and specifically comprises the following steps:
the method comprises the steps of configuring an initial abnormal behavior recognition network, wherein the initial abnormal behavior recognition network is a convolutional neural network in the embodiment, the convolutional neural network in the embodiment comprises a data input layer, a convolutional layer and a pooling layer, a feature mapping layer mainly comprises a full-connection layer and an output layer, and the canine facial features are extracted from low to high by using 4 convolutional layers. In the network, two different pooling layers are provided, namely an integrated pooling layer and a spatial pyramid pooling layer, wherein the integrated pooling layers are respectively placed behind the first two convolution layers, an SPP layer in the pyramid pooling layer is placed in front of the last convolution layer and the convolution layer, and comprises 1x1,2x2 and 3x3 pooling windows, and a splicing layer is also arranged behind the SPP layer and is used for splicing the output of 3 pooling windows; the classification network is composed of 3 fully-connected layers, mainly maps a feature space to a plurality of discrete labels, and has two output layers, namely a processing layer for performing normalization processing and loss function processing and an Accuracy layer, wherein the normalization processing and the processing based on the loss function are used for calculating loss in the network and for back propagation, the Accuracy layer is used for calculating the Accuracy of a verification set, and the activation functions of the convolutional neural network all use a PReLU function in the embodiment.
To meet the network convergence requirement, the weight values of the initial abnormal behavior recognition network are adjusted, which comprises the following steps:
and setting an initial learning rate, iterating the initial detection model based on the initial learning rate until a loss function is in a convergence state, and updating a weight value in the initial detection model based on random gradient descent to obtain a target abnormal behavior identification network.
In this embodiment, the initial learning rate is set to 0.0001 and is multiplied by 0.1 every ten thousand iterations; the weight values in the network are updated using stochastic gradient descent, and the batch size of the convolutional neural network is set to 128 according to the size of the video memory, i.e. 128 samples are taken from the training set for each training step. A training-loop sketch of this schedule follows.
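A minimal PyTorch training-loop sketch of the schedule just described (SGD, initial learning rate 0.0001, learning rate multiplied by 0.1 every ten thousand iterations, batch size 128); the cross-entropy loss, iteration budget and dataset interface are assumptions.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, iterations=100_000, device="cpu"):
    """Trains the recognition network with the schedule described above."""
    loader = DataLoader(dataset, batch_size=128, shuffle=True)   # 128 samples per step
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)     # initial learning rate 0.0001
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.1)
    criterion = torch.nn.CrossEntropyLoss()                      # assumed loss function
    model.to(device).train()
    step = 0
    while step < iterations:
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()                 # back-propagation
            optimizer.step()                # stochastic gradient descent update
            scheduler.step()                # multiplies lr by 0.1 every 10,000 iterations
            step += 1
            if step >= iterations:
                break
    return model
```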
In this embodiment, the classifier is likewise obtained by training, and the training method comprises the following steps:
A pre-trained initial classifier is configured; a plurality of dog sample images are obtained and each sample image is labeled with a classification label; feature extraction is performed on the plurality of sample images based on the abnormal behavior recognition network meeting the network convergence requirement to obtain a feature value for each sample image; the classification labels and feature values are normalized and converted into a uniform format to obtain the training data; and the initial classifier is trained on the training data, the trained classifier being obtained once its output precision reaches the preset precision (a sketch of this procedure is given below).
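A Python sketch of this classifier-training procedure, reusing the hypothetical AbnormalBehaviorNet from the earlier sketch as the converged feature extractor; the single-linear-layer classifier, the z-score normalization and the epoch count are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_classifier(recognition_net, sample_images, labels, epochs=50):
    """recognition_net: the converged AbnormalBehaviorNet sketch above;
    sample_images: (N, 3, H, W) tensor; labels: (N,) long tensor of class labels."""
    recognition_net.eval()
    with torch.no_grad():
        feats = recognition_net.features(sample_images)                       # feature extraction
        feats = torch.cat([p(feats).flatten(1) for p in recognition_net.spp], dim=1)
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)                   # normalize to a uniform format
    classifier = nn.Linear(feats.shape[1], int(labels.max()) + 1)             # assumed classifier form
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                                                    # in practice: until preset precision
        opt.zero_grad()
        loss = loss_fn(classifier(feats), labels)
        loss.backward()
        opt.step()
    return classifier
```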
And S350, comparing the dog image in the abnormal-behavior dog with the plurality of basic images in the dog database to obtain a target dog image.
In this embodiment, the dog images obtained in step S330 are a plurality of images, and an optimal image must be selected from them, where the optimal image is the dog image that best represents the dog. The optimal image is determined based on the contour area of the dog image, and the contour area is obtained from the grayscale map of the dog image. Specifically, the plurality of dog images are processed by binarization to obtain a plurality of preprocessed image contours, the area values of the contours are obtained, and the area values are compared to determine the optimal image (see the sketch below).
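A Python/OpenCV sketch of the binarization-and-contour-area comparison used to select the optimal image; the Otsu thresholding choice is an assumption, since the embodiment only specifies a binarization method.

```python
import cv2

def select_optimal_image(dog_images):
    """Binarize each dog image, take its largest contour area, and keep the
    image with the greatest contour area as the optimal image."""
    best_img, best_area = None, -1.0
    for img in dog_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)             # grayscale map
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        area = max(cv2.contourArea(c) for c in contours)         # contour area value
        if area > best_area:
            best_img, best_area = img, area
    return best_img
```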
A target feature map is then acquired for the optimal image, a plurality of target detection points are acquired in the target feature map, the coordinate parameters of the plurality of target detection points are acquired, the relative distances between the target detection points are obtained based on the coordinate parameters, and these relative distances are compared with the relative distances of the corresponding target detection points in the dog database to obtain the target dog image.
The above comparison comprises the following processes:
and comparing the similarity of the relative distance of any one target detection point with the relative distance of the corresponding target detection point in the canine database to obtain the similarity of the relative distances of the plurality of target detection points.
And fusing the similarity of the relative distances of the plurality of target detection points to obtain the final similarity.
And determining the corresponding target dog image based on the final similarity.
In this embodiment, the main process is to obtain the corresponding target dog image by comparing the similarity.
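The relative-distance comparison and similarity fusion described above can be sketched as follows in Python; the pairwise Euclidean distances, the per-distance similarity measure and the averaging fusion rule are assumptions used for illustration.

```python
import numpy as np

def match_target_dog(query_points, database_points):
    """query_points: (K, 2) coordinates of the target detection points in the optimal
    image; database_points: assumed dict {dog_id: (K, 2)} of points stored in the
    dog database. Returns the dog with the highest fused similarity."""
    def pairwise_distances(pts):
        diff = pts[:, None, :] - pts[None, :, :]
        return np.linalg.norm(diff, axis=-1)                     # relative distances

    q = pairwise_distances(np.asarray(query_points, dtype=float))
    best_id, best_sim = None, -1.0
    for dog_id, pts in database_points.items():
        d = pairwise_distances(np.asarray(pts, dtype=float))
        sim = 1.0 / (1.0 + np.abs(q - d))                        # per-distance similarity
        final = float(sim.mean())                                # fuse the similarities
        if final > best_sim:
            best_id, best_sim = dog_id, final
    return best_id, best_sim                                     # corresponding target dog
```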
And S360, determining a dog owner based on the associated information corresponding to the target dog image, sending reminding information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information into a storage space of the corresponding dog in the dog database based on a time sequence mode.
In this method, the corresponding reminder information is mainly determined based on the label of the abnormal behavior, the reminder mode is determined based on the reminder type, and the reminder information is sent to the corresponding user terminal using the determined reminder mode. In this embodiment, the abnormality level of a dog's abnormal behavior can be determined from the evaluation tag configured for the acquired abnormal behavior, and the way the reminder information is sent is determined based on that level. For example, if a dog is found excreting in a public place, the grade tag configured for that behavior is level three, and for level three the reminder mode can be determined as sending a message to the user terminal (an illustrative mapping is sketched below).
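For illustration, a hypothetical mapping from abnormal-behavior labels to grade tags and reminder modes might look like the following Python sketch; all labels, grades and modes shown are assumptions, not the configured values of the embodiment.

```python
# Hypothetical mapping from abnormal-behavior label to grade tag and reminder mode.
REMINDER_RULES = {
    "public_excretion": {"grade": 3, "mode": "app_message"},
    "off_leash":        {"grade": 2, "mode": "app_message_and_property_notice"},
    "attack":           {"grade": 1, "mode": "app_message_and_urban_management"},
}

def build_reminder(behavior_label: str) -> dict:
    """Determine the reminder information and mode from the abnormal-behavior label."""
    rule = REMINDER_RULES.get(behavior_label, {"grade": 3, "mode": "app_message"})
    return {
        "grade": rule["grade"],
        "mode": rule["mode"],
        "text": f"Your dog was detected performing abnormal behavior: {behavior_label}.",
    }

# build_reminder("public_excretion") -> grade-3 reminder sent via the user terminal
```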
The abnormal behavior is stored in the storage subspace assigned to the corresponding target dog; the corresponding dog can be graded as a whole over a set time interval, and subsequent management actions are carried out based on the staged grading result.
As a configuration of this method, a device configuration is also provided that virtualizes the processes implemented above into modules. Referring to fig. 4, this embodiment provides a big-data-based civilized dog raising terminal operation device 400, comprising: a video image acquisition module 410, used for acquiring a video image in a target area based on the video acquisition terminal; an image acquisition module 420, used for identifying the video image based on a preset detection frame to obtain an image containing a dog; an information acquisition module 430, used for acquiring the video to be detected containing the dog images and a plurality of dog images; an abnormal behavior acquisition module 440, used for performing abnormal behavior detection on the dogs in the video to be detected based on a preset dog abnormal behavior detection model to obtain the abnormal behaviors of the dogs; a target dog image acquisition module 450, used for comparing the dog image of the abnormally behaving dog with the plurality of basic images in the dog database to obtain the target dog image; and an information processing module 460, used for determining the dog owner from the associated information corresponding to the target dog image, sending reminder information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information in chronological order into the storage space of the corresponding dog in the dog database.
Based on the system, method and device provided by the embodiments, video images within an area are acquired, dogs are detected in the acquired images, abnormal behavior of the detected dogs is identified, the abnormally behaving dog is matched against the target dog image in the configured dog database, the corresponding user information is obtained from the matched target dog image, and the abnormal behavior of the dog is sent to the corresponding user terminal, thereby realizing management of abnormal dog behavior. In this embodiment, the result can be accurately determined through the configured model and comparison method, and management of the dog can be realized based on the configured information reminding method.
In addition to the configuration form described above, the apparatus provided in this embodiment may also be implemented as a computer program product embodied on at least one computer-readable medium containing computer-readable program code.
A computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable signal medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the execution of aspects of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural programming languages such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service such as Software as a Service (SaaS).
It should be understood that technical terms not explicitly defined above are not limited beyond the meanings that those skilled in the art can clearly determine from the above disclosure.
Based on the above disclosure, the skilled person can determine without doubt certain preset, reference, predetermined, set and preferred labels of technical features/technical terms, such as thresholds, threshold intervals and threshold ranges. For technical feature terms that are not explained, the skilled person can clearly and completely implement the technical solution by reasonably and unambiguously deriving it from the logical relationships in the preceding and following paragraphs. Prefixes of unexplained technical feature terms, such as "first", "second", "example" and "target", can be unambiguously derived and determined from the context, as can suffixes such as "set" and "list".
The above disclosure of the embodiments of the present application will be apparent to those skilled in the art. It should be understood that the process by which the skilled person derives and analyzes unexplained technical terms is based on the contents described in the present application, and the above therefore does not constitute an inventive assessment of the overall solution.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered as illustrative and not restrictive of the application. Various modifications, adaptations, and alternatives may occur to one skilled in the art, though not explicitly described herein. Such alterations, modifications, and improvements are intended to be suggested herein and are intended to be within the spirit and scope of the exemplary embodiments of this application.
Also, this application uses specific terminology to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of at least one embodiment of the present application may be combined as appropriate.
In addition, those skilled in the art will recognize that the various aspects of the application may be illustrated and described in terms of several patentable species or contexts, including any new and useful combination of procedures, machines, articles, or materials, or any new and useful modifications thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "unit", "component", or "system".
Additionally, the order of the process elements and sequences described herein, the use of numerical letters, or other designations are not intended to limit the order of the processes and methods unless otherwise indicated in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it should be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware means, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
It should also be appreciated that in the foregoing description of embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of at least one embodiment of the invention. This method of disclosure, however, is not to be interpreted as requiring more features than are expressly recited in the claims. Indeed, the claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.

Claims (10)

1. A civilized dog raising terminal operation method based on big data is characterized in that the terminal is communicated with a dog database, a community property management terminal, a user terminal and a video acquisition terminal, the dog database is configured with dog basic information, and the dog basic information comprises basic images and associated information of corresponding dogs; the operation method comprises the following steps:
acquiring a video image in a target area based on the video acquisition terminal;
identifying the video image based on a preset detection frame to obtain an image containing dogs;
acquiring a video to be detected containing the dog images and a plurality of dog images;
carrying out abnormal behavior detection on the dogs in the multiple videos to be detected based on a preset abnormal behavior detection model of the dogs to obtain abnormal behaviors of the dogs;
comparing the dog images in the abnormal-behavior dogs with the plurality of basic images in the dog database to obtain target dog images;
determining a dog owner based on the associated information corresponding to the target dog image, sending reminding information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information into a storage space of the corresponding dog in the dog database based on a time sequence mode.
2. The operating method of the civilized canine terminal based on big data of claim 1, wherein the video image is identified based on a preset detection frame to obtain an image containing a canine, and the operating method comprises the following steps:
dividing a current frame in the video image into a plurality of areas, respectively identifying and anchoring each area, and obtaining a plurality of detection frames in each area; each detection frame corresponds to a probability value and a central point, and each central point corresponds to an object type;
selecting all quasi-target central points corresponding to target object types needing to be reserved in the current frame, and performing frame selection again according to each quasi-target central point and the preset length and width of the corresponding object type to obtain a plurality of quasi-target frames;
and deduplicating the plurality of quasi-target frames to obtain a target object frame containing the dog.
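One common way to realize the deduplication of quasi-target frames in claim 2 is score-ordered overlap suppression. The sketch below is an assumed illustration only; the preset per-class box size and the 0.5 overlap threshold are not taken from the application.

```python
import numpy as np

# Assumed per-class preset box size (width, height) in pixels; the claim only
# refers to "the preset length and width of the corresponding object type".
PRESET_SIZE = {"dog": (96, 64)}

def centers_to_boxes(centers, scores, cls="dog"):
    """Turn quasi-target center points into (x1, y1, x2, y2) quasi-target frames."""
    w, h = PRESET_SIZE[cls]
    centers = np.asarray(centers, dtype=float)
    boxes = np.stack([centers[:, 0] - w / 2, centers[:, 1] - h / 2,
                      centers[:, 0] + w / 2, centers[:, 1] + h / 2], axis=1)
    return boxes, np.asarray(scores, dtype=float)

def deduplicate(boxes, scores, iou_thr=0.5):
    """Keep only the highest-scoring frame among heavily overlapping ones."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou < iou_thr]
    return boxes[keep], scores[keep]
```

This mirrors standard non-maximum suppression: among quasi-target frames that overlap heavily, only the highest-probability one is kept as the target object frame.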
3. The operation method of the big-data-based civilized dog raising terminal according to claim 2, wherein acquiring the to-be-detected video containing the dog images and the plurality of dog images comprises the following steps:
labeling the target object frame, and acquiring a to-be-detected video containing the target object frame from the video image based on labeling information;
extracting video frames in the video to be detected to obtain a plurality of video frames to be processed containing a plurality of target detection frames;
and extracting the images within the target detection frames in the plurality of to-be-processed video frames to obtain a plurality of dog images.
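As an assumed illustration of claim 3, the sketch below samples video frames from the labeled segment and crops each target detection frame into a separate dog image; the sampling stride of 5 is an arbitrary choice, not a value from the application.

```python
import numpy as np

def sample_frames(frames, stride=5):
    """Keep every `stride`-th frame of the labeled segment (stride is an assumption)."""
    return frames[::stride]

def crop_detections(frame, boxes):
    """Crop each (x1, y1, x2, y2) target detection frame out of an HxWxC frame array."""
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), w), min(int(y2), h)
        if x2 > x1 and y2 > y1:
            crops.append(frame[y1:y2, x1:x2].copy())
    return crops
```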
4. The operation method of the big-data-based civilized dog raising terminal according to claim 3, wherein the abnormal behavior of the dogs is obtained by performing state detection on the dogs in the plurality of images based on a preset dog abnormal behavior detection model; the dog abnormal behavior detection model comprises an abnormal behavior recognition network meeting a network convergence requirement and a classifier, the abnormal behavior recognition network being used for acquiring abnormal behavior features in the plurality of dog images, and the classifier being used for classifying the abnormal behavior features to determine the abnormal behavior; the method specifically comprises the following steps:
identifying the plurality of dog images according to the abnormal behavior recognition network meeting the network convergence requirement to obtain the corresponding abnormal behavior features in the plurality of dog images;
and classifying the abnormal behavior features according to a classifier meeting the training result requirement to obtain a classification label, and determining the abnormal behavior based on the classification label.
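The two-stage inference of claim 4 (feature extraction followed by classification) can be sketched as below. The label set, the majority vote over the image sequence, and the callable feature_net and classifier objects are all assumptions for illustration, not definitions from the application.

```python
import numpy as np

# Assumed label set; the application does not enumerate concrete behavior classes.
ABNORMAL_LABELS = {0: "none", 1: "attacking", 2: "fouling", 3: "off_leash"}

def detect_abnormal_behavior(dog_images, feature_net, classifier):
    """Run the converged recognition network to get behavior features, then let
    the classifier map them to a label; feature_net and classifier are
    placeholders for trained models."""
    feats = np.stack([feature_net(img) for img in dog_images])   # (N, D) feature matrix
    scores = classifier(feats)                                   # (N, num_labels) class scores
    label_ids = scores.argmax(axis=1)
    # Majority vote across the image sequence gives one clip-level label.
    clip_label = np.bincount(label_ids, minlength=len(ABNORMAL_LABELS)).argmax()
    return ABNORMAL_LABELS[int(clip_label)]
```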
5. The operation method of the big-data-based civilized dog raising terminal according to claim 4, wherein the training method of the classifier comprises the following steps:
configuring a pre-trained initial classifier; obtaining a plurality of dog sample images, labeling each sample image and assigning a classification label; extracting features of the sample images based on the abnormal behavior recognition network meeting the network convergence requirement to obtain a feature value of each sample image; normalizing the classification labels and the feature values and converting them into a uniform format to obtain training data; and training the initial classifier based on the training data, the trained classifier being obtained after the output precision of the classifier reaches a preset precision.
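A minimal sketch of the training procedure in claim 5, assuming a frozen recognition network used as a feature extractor, z-score normalization as the "uniform format", scikit-learn's SGDClassifier as a stand-in for the unspecified classifier, and a 0.95 target precision; none of these choices come from the application.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier  # assumed stand-in classifier

def train_classifier(sample_images, labels, feature_net, target_acc=0.95, max_epochs=100):
    """Train on features from the (frozen) abnormal-behavior recognition network
    until the output precision reaches a preset threshold (assumed: 0.95)."""
    X = np.stack([feature_net(img) for img in sample_images]).astype(np.float32)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)      # normalize the feature values
    y = np.asarray(labels)                                  # integer classification labels
    clf = SGDClassifier()
    classes = np.unique(y)
    for _ in range(max_epochs):
        clf.partial_fit(X, y, classes=classes)
        if (clf.predict(X) == y).mean() >= target_acc:      # preset precision reached
            break
    return clf
```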
6. The operation method of the big-data-based civilized dog raising terminal according to claim 5, wherein comparing the plurality of dog images of the abnormally behaving dog with the plurality of basic images in the dog database to obtain the target dog image comprises the following steps:
processing the plurality of dog images based on a binarization method to obtain a plurality of preprocessed image contours, obtaining area values of the preprocessed image contours, and comparing the area values to determine an optimal image;
acquiring a target characteristic diagram of the optimal image;
acquiring a plurality of target detection points in the target feature map;
acquiring coordinate parameters of a plurality of target detection points, and acquiring relative distances of the plurality of target detection points based on the plurality of coordinate parameters;
and comparing the relative distance of the plurality of target detection points with the relative distance of the plurality of target detection points in the dog database to obtain the target dog image.
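Claim 6 can be illustrated, under stated assumptions, in two pieces: selecting the "optimal image" as the binarized silhouette with the largest area, and turning the coordinates of the target detection points into a scale-normalized relative-distance descriptor. The binarization threshold and the normalization scheme are assumptions, not values from the application.

```python
import numpy as np

def pick_optimal_image(gray_images, thresh=128):
    """Binarize each grayscale image and keep the one whose foreground
    silhouette covers the largest area (threshold value is an assumption)."""
    areas = [(np.asarray(img) > thresh).sum() for img in gray_images]
    return gray_images[int(np.argmax(areas))]

def relative_distances(points):
    """Pairwise distances between target detection points, scaled so the
    descriptor does not depend on the absolute image size."""
    pts = np.asarray(points, dtype=float)          # (K, 2) detection-point coordinates
    diff = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(pts), k=1)            # upper triangle: each pair once
    vec = dists[iu]
    return vec / (vec.max() + 1e-8)                # normalized relative distances
```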
7. The operation method of the big-data-based civilized dog raising terminal according to claim 6, wherein obtaining the target dog image by comparing the relative distances of the plurality of target detection points with the relative distances of the plurality of target detection points in the dog database comprises:
comparing the similarity of the relative distance of any one target detection point with the relative distance of the corresponding target detection point in the dog database to obtain the similarity of the relative distances of the plurality of target detection points;
fusing the similarity of the relative distances of the plurality of target detection points to obtain final similarity;
and determining the corresponding target dog image based on the final similarity.
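A sketch of the comparison and fusion in claim 7, assuming an element-wise distance-ratio similarity, uniform fusion weights, and a 0.8 acceptance threshold, none of which are specified by the application.

```python
import numpy as np

def match_dog(query_distances, database, fuse_weights=None, min_similarity=0.8):
    """Compare the query's relative-distance descriptor against every dog in the
    database, fuse the per-element similarities (uniform weights by default),
    and return the best match if its final similarity clears the threshold."""
    q = np.asarray(query_distances, dtype=float)
    best_id, best_score = None, -1.0
    for dog_id, ref in database.items():           # database: {dog_id: descriptor}
        r = np.asarray(ref, dtype=float)
        per_point = 1.0 - np.abs(q - r) / (np.maximum(np.abs(q), np.abs(r)) + 1e-8)
        w = np.ones_like(per_point) if fuse_weights is None else np.asarray(fuse_weights)
        fused = float((per_point * w).sum() / w.sum())   # final (fused) similarity
        if fused > best_score:
            best_id, best_score = dog_id, fused
    return (best_id, best_score) if best_score >= min_similarity else (None, best_score)
```

The best match is reported only when the fused similarity clears the threshold; otherwise no target dog image is returned.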
8. The operation method of the big-data-based civilized dog raising terminal according to claim 4, wherein the method of sending the reminding information to the corresponding user terminal based on the determined dog owner comprises:
determining corresponding reminding information and a reminding type based on the label of the abnormal behavior, determining a reminding mode based on the reminding type, and sending the reminding information to the corresponding user terminal based on the determined reminding mode.
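Claim 8 amounts to a lookup from the abnormal-behavior label to a reminder text and delivery mode. The sketch below is purely hypothetical; the labels, messages, and channels are invented for illustration and do not appear in the application.

```python
# Hypothetical mapping from abnormal-behavior label to reminder text and channel.
REMINDERS = {
    "off_leash": ("Please leash your dog in public areas.", "app_push"),
    "fouling":   ("Please clean up after your dog.", "sms"),
    "attacking": ("Your dog is showing aggressive behavior; please intervene.", "phone_call"),
}

def send_reminder(owner_contact, behavior_label, send_fn):
    """Pick the message and delivery mode for the detected behavior and hand
    them to a transport callback (send_fn is a placeholder)."""
    message, channel = REMINDERS.get(
        behavior_label, ("Please observe the community dog-raising rules.", "app_push"))
    send_fn(owner_contact, message, channel)
```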
9. A civilized dog raising terminal operation device based on big data, characterized by comprising:
the video image acquisition module is used for acquiring a video image in a target area based on the video acquisition terminal;
the image acquisition module is used for identifying the video image based on a preset detection frame to obtain an image containing a dog;
the information acquisition module is used for acquiring a video to be detected containing the dog images and a plurality of dog images;
the abnormal behavior acquisition module is used for detecting the abnormal behavior of the dogs in the multiple videos to be detected based on a preset dog abnormal behavior detection model to obtain the abnormal behavior of the dogs;
the target dog image acquisition module is used for comparing the plurality of dog images of the abnormally behaving dogs with the plurality of basic images in the dog database to obtain a target dog image;
and the information processing module is used for determining a dog owner according to the associated information corresponding to the target dog image, sending reminding information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information into the storage space of the corresponding dog in the dog database in time-sequence order.
10. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the computer program.
CN202211213842.6A 2022-09-30 2022-09-30 Civilized dog raising terminal operation method and device based on big data and terminal Active CN115424211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211213842.6A CN115424211B (en) 2022-09-30 2022-09-30 Civilized dog raising terminal operation method and device based on big data and terminal

Publications (2)

Publication Number Publication Date
CN115424211A true CN115424211A (en) 2022-12-02
CN115424211B CN115424211B (en) 2023-05-23

Family

ID=84206899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211213842.6A Active CN115424211B (en) 2022-09-30 2022-09-30 Civilized dog raising terminal operation method and device based on big data and terminal

Country Status (1)

Country Link
CN (1) CN115424211B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120314064A1 (en) * 2011-06-13 2012-12-13 Sony Corporation Abnormal behavior detecting apparatus and method thereof, and video monitoring system
CN111275014A (en) * 2020-02-28 2020-06-12 恒大智慧科技有限公司 Community pet management method, community server and storage medium
CN111447410A (en) * 2020-03-24 2020-07-24 安徽工程大学 Dog state identification monitoring system and method

Also Published As

Publication number Publication date
CN115424211B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN109801260B (en) Livestock number identification method and device, control device and readable storage medium
CN112740196A (en) Recognition model in artificial intelligence system based on knowledge management
CN109816200B (en) Task pushing method, device, computer equipment and storage medium
US11275970B2 (en) Systems and methods for distributed data analytics
CN111813997A (en) Intrusion analysis method, device, equipment and storage medium
CN107133629B (en) Picture classification method and device and mobile terminal
US20240087368A1 (en) Companion animal life management system and method therefor
CN111539317A (en) Vehicle illegal driving detection method and device, computer equipment and storage medium
CN111582219B (en) Intelligent pet management system
CN114648680A (en) Training method, device, equipment, medium and program product of image recognition model
CN113591512A (en) Method, device and equipment for hair identification
CN113536946B (en) Self-supervision pedestrian re-identification method based on camera relationship
CN114120090A (en) Image processing method, device, equipment and storage medium
KR102230559B1 (en) Method and Apparatus for Creating Labeling Model with Data Programming
CN115424211B (en) Civilized dog raising terminal operation method and device based on big data and terminal
KR102342495B1 (en) Method and Apparatus for Creating Labeling Model with Data Programming
CN109686110A (en) Parking stall sky expires condition discrimination method and apparatus
CN111259832B (en) Method, device, machine-readable medium and system for identifying dogs
CN109993191B (en) Information processing method and device, electronic device and storage medium
Ogawa et al. Identifying Parking Lot Occupancy with YOLOv5
KR102655958B1 (en) System for feeding multiple dogs using machine learning and method therefor
CN116863298B (en) Training and early warning sending method, system, device, equipment and medium
CN115240230A (en) Canine face detection model training method and device, and detection method and device
CN117788877A (en) Data processing method and device
CN115546830A (en) Missing dog searching method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant