CN115424211B - Civilized dog raising terminal operation method and device based on big data and terminal - Google Patents

Civilized dog raising terminal operation method and device based on big data and terminal

Info

Publication number
CN115424211B
CN115424211B CN202211213842.6A CN202211213842A
Authority
CN
China
Prior art keywords
canine
target
dogs
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211213842.6A
Other languages
Chinese (zh)
Other versions
CN115424211A (en
Inventor
宋程
刘保国
胡金有
吴浩
梁开岩
郭玮鹏
李海
巩京京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingchong Kingdom Beijing Technology Co ltd
Original Assignee
Xingchong Kingdom Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingchong Kingdom Beijing Technology Co ltd filed Critical Xingchong Kingdom Beijing Technology Co ltd
Priority to CN202211213842.6A priority Critical patent/CN115424211B/en
Publication of CN115424211A publication Critical patent/CN115424211A/en
Application granted granted Critical
Publication of CN115424211B publication Critical patent/CN115424211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Abstract

The application relates to the technical field of canine management, in particular to a civilized canine-keeping supervision method, and specifically to a civilized canine-keeping terminal operation method, device and terminal based on big data. The method acquires video images of an area, detects the dogs in the acquired images, identifies the abnormal behaviors of the detected dogs, matches the dogs exhibiting abnormal behavior against a configured canine database to determine the target dog, obtains the corresponding user information from the determined target dog, and sends the dog's abnormal behavior to the corresponding user terminal, thereby realizing management of abnormal canine behavior.

Description

Civilized dog raising terminal operation method and device based on big data and terminal
Technical Field
The application relates to the technical field of canine management, in particular to a civilized canine keeping supervision method, and specifically relates to a civilized canine keeping terminal operation method, device and terminal based on big data.
Background
In recent years, with rising living standards, urban residents have increasingly kept pet dogs to ease the pressure of city life and to relieve loneliness through their companionship. However, China's population density is high and pet dogs often share public areas with residents, so unleashed dogs, abandoned dogs left to stray, unvaccinated pets and other irregular dog-keeping behaviors occur frequently. As a result, dog bites, dog-related disputes between residents and security incidents occur continually, and rabies can even threaten residents' lives. Guiding residents to keep pet dogs in a systematic, scientific and standardized way has therefore become an urgent need.
In the prior art, dogs are managed mainly through peripheral devices fitted to the dog. For example, real-time physiological data are obtained through a heart-rate acquisition device and motion sensors on key joints, and abnormal behaviors are identified by comparing the acquired data. In practical use, however, few dogs wear such peripherals, and the devices are easily damaged in natural environments, which reduces their effectiveness. Moreover, even when a peripheral device does capture abnormal canine behavior, it cannot manage that behavior, so prompting and follow-up handling cannot be achieved.
Disclosure of Invention
In order to solve the above technical problems, the application provides a civilized dog-keeping terminal operation method, device and terminal based on big data, which identify abnormal canine behaviors outdoors using the external information acquisition devices already deployed in the outdoor environment, send corresponding information, and manage the dogs based on the identified abnormal behaviors.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
according to the method for operating the civilized canine-keeping terminal based on big data, the terminal is communicated with a canine database, a community property management terminal, a user terminal and a video acquisition terminal, wherein canine basic information is configured in the canine database, and the canine basic information comprises basic images and associated information corresponding to dogs; the operation method comprises the following steps: collecting video images in a target area based on the video collecting terminal; identifying the video image based on a preset detection frame to obtain an image containing dogs; acquiring a video to be detected containing the canine images and a plurality of canine images; detecting abnormal behaviors of dogs in the videos to be detected based on a preset abnormal behavior detection model of the dogs, so as to obtain abnormal behaviors of the dogs; comparing the canine images in the dogs with abnormal behaviors with a plurality of basic images in the canine database to obtain target canine images; and determining a dog owner based on the associated information corresponding to the target dog image, sending reminding information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information into a storage space of the corresponding dog in the dog database based on a time sequence mode.
In a first implementation manner of the first aspect, identifying the video image based on a preset detection frame to obtain an image containing dogs comprises the following steps: dividing the current frame of the video image into a plurality of regions, identifying and anchoring each region, and obtaining several detection frames in each region, each detection frame corresponding to a probability value and a center point, and each center point corresponding to an object type; keeping, in the current frame, all quasi-target center points corresponding to the target object type, and re-framing each quasi-target center point according to the preset length and width of its object type to obtain a plurality of quasi-target frames; and de-duplicating the plurality of quasi-target frames to obtain a target object frame containing a dog.
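The re-framing step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the detection format (center x, center y, class, probability), the class id, and the preset box size are all assumptions.

```python
# Hypothetical preset length/width for the dog class; the patent
# leaves the exact values unspecified.
PRESET_W, PRESET_H = 80, 60
DOG_CLASS = 1  # assumed class id for "dog"

def reframe_quasi_targets(detections, target_class=DOG_CLASS,
                          w=PRESET_W, h=PRESET_H):
    """Keep only center points of the target object type and re-frame
    each one with the preset width/height, yielding quasi-target frames."""
    boxes = []
    for cx, cy, cls, prob in detections:
        if cls != target_class:
            continue  # discard center points of other object types
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, prob))
    return boxes

detections = [(100, 120, 1, 0.9), (300, 40, 0, 0.8), (105, 118, 1, 0.7)]
quasi_frames = reframe_quasi_targets(detections)
```

The subsequent de-duplication of overlapping quasi-target frames is typically done with non-maximum suppression, as the detailed description later notes.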
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, acquiring the to-be-detected video containing the canine images and the plurality of canine images comprises the following steps: labeling the target object frame, and extracting from the video image, based on the labeling information, a to-be-detected video containing the target object frame; extracting video frames from the to-be-detected video to obtain a plurality of to-be-processed video frames containing a plurality of target detection frames; and extracting the images inside the target detection frames of the to-be-processed video frames to obtain the plurality of canine images.
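A minimal sketch of the last step, cropping the canine image out of each detection frame. Representing frames as nested pixel lists and boxes as (x1, y1, x2, y2) pixel coordinates is an assumption for illustration.

```python
def crop_canine_images(frames, boxes_per_frame):
    """Extract the patch inside every target detection frame of every
    to-be-processed video frame, yielding the canine images."""
    crops = []
    for frame, boxes in zip(frames, boxes_per_frame):
        for x1, y1, x2, y2 in boxes:
            # slice rows y1:y2, then columns x1:x2 of each row
            crops.append([row[x1:x2] for row in frame[y1:y2]])
    return crops

frame = [[r * 10 + c for c in range(6)] for r in range(4)]  # 4x6 "image"
crops = crop_canine_images([frame], [[(1, 1, 4, 3)]])
```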
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, state detection is performed on the dogs in the plurality of canine images based on a preset canine abnormal-behavior detection model to obtain the abnormal behaviors of the dogs. The canine abnormal-behavior model comprises an abnormal-behavior recognition network that meets the network convergence requirement and a classifier; the recognition network extracts abnormal-behavior features from the plurality of canine images, and the classifier classifies those features to determine the abnormal behavior. Specifically: identifying the plurality of canine images with the converged abnormal-behavior recognition network to obtain the abnormal-behavior features of the corresponding canine images; and classifying the abnormal-behavior features with the trained classifier to obtain a classification label, the abnormal behavior being determined from that label.
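The two-stage structure (recognition network producing features, classifier mapping features to a label) can be illustrated with stand-in functions. The mean-pixel "feature", the thresholds, and the label set below are placeholders, not the model the patent trains:

```python
# Stand-in for the converged abnormal-behavior recognition network:
# here it just returns the mean pixel value as a 1-D "feature".
def extract_features(image):
    return [sum(map(sum, image)) / (len(image) * len(image[0]))]

LABELS = {0: "normal", 1: "off-leash", 2: "aggressive"}  # assumed labels

def classify(features):
    # Stand-in classifier: thresholds on the single feature value.
    v = features[0]
    return 0 if v < 50 else (1 if v < 150 else 2)

def detect_abnormal_behaviors(images):
    """Run every canine image through the recognition network, then
    map the resulting features to a classification label."""
    return [LABELS[classify(extract_features(img))] for img in images]

labels = detect_abnormal_behaviors(
    [[[10, 20], [30, 40]], [[200, 200], [200, 200]]])
```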
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner, the training method of the classifier comprises: configuring a pre-trained initial classifier; obtaining a plurality of sample images of dogs, labeling each sample image and assigning classification labels; extracting features from the sample images with the converged abnormal-behavior recognition network to obtain a feature value for each sample image; normalizing the classification labels and the feature values and converting both into a unified format to obtain training data; and training the initial classifier on the training data until its output precision reaches the preset precision, yielding the trained classifier.
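The normalization and format-unification steps can be sketched as below; treating each sample's feature as a single scalar and labels as small integers is a simplifying assumption.

```python
def build_training_data(feature_values, class_labels):
    """Normalize feature values to [0, 1], normalize integer class
    labels by the largest label, and pack both into one uniform
    (feature, label) format, as the training method describes."""
    fmin, fmax = min(feature_values), max(feature_values)
    span = (fmax - fmin) or 1.0  # avoid division by zero
    lmax = max(class_labels) or 1
    return [((f - fmin) / span, l / lmax)
            for f, l in zip(feature_values, class_labels)]

data = build_training_data([10.0, 20.0, 30.0], [0, 1, 2])
```

The resulting uniform pairs would then feed the training loop that iterates until the classifier's output precision reaches the preset precision.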
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner, comparing the plurality of canine images of a dog exhibiting abnormal behavior with the plurality of basic images in the canine database to obtain the target canine image comprises the following method: binarizing the plurality of canine images to obtain a plurality of preprocessed image contours, obtaining the area value of each contour, and comparing the area values to determine an optimal image; acquiring a target feature map of the optimal image; acquiring a plurality of target detection points in the target feature map; acquiring the coordinate parameters of the target detection points and deriving their relative distances from those coordinates; and comparing the relative distances of the target detection points with the relative distances of the corresponding detection points in the canine database to obtain the target canine image.
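A minimal sketch of these steps, assuming grayscale images as nested lists. The foreground pixel count stands in for the contour area value, and pairwise point distances stand in for the relative distances; the patent fixes neither representation.

```python
import math

def foreground_area(image, thresh=128):
    """Binarize the image and count foreground pixels, a stand-in for
    the area value of the preprocessed image contour."""
    return sum(1 for row in image for px in row if px >= thresh)

def best_image(images):
    """Pick the optimal image as the one with the largest area value."""
    return max(images, key=foreground_area)

def relative_distances(points):
    """Pairwise distances between target detection points; the patent
    does not specify which point pairs are compared."""
    return [math.dist(p, q)
            for i, p in enumerate(points) for q in points[i + 1:]]

imgs = [[[0, 200], [0, 0]], [[200, 200], [200, 0]]]
dists = relative_distances([(0, 0), (3, 4)])
```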
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner, obtaining the target canine image by comparing the relative distances of the plurality of target detection points with those in the canine database specifically comprises: comparing the relative distance of each target detection point with the relative distance of the corresponding detection point in the canine database to obtain a similarity for each of the plurality of target detection points; fusing these similarities to obtain a final similarity; and determining the corresponding target canine image based on the final similarity.
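One way to realize the fusion, assuming a reciprocal-distance similarity per detection point and a simple mean as the fusion rule; the patent fixes neither choice.

```python
def point_similarity(d_query, d_ref):
    """Similarity of one relative distance, in (0, 1]."""
    return 1.0 / (1.0 + abs(d_query - d_ref))

def fused_similarity(query_dists, ref_dists):
    """Fuse the per-point similarities into the final similarity."""
    sims = [point_similarity(a, b) for a, b in zip(query_dists, ref_dists)]
    return sum(sims) / len(sims)

def match_target_canine(query_dists, database):
    """Return the database entry with the highest final similarity."""
    return max(database, key=lambda e: fused_similarity(query_dists, e[1]))

db = [("dog_A", [5.0, 8.0]), ("dog_B", [5.1, 7.9])]
best = match_target_canine([5.1, 7.9], db)
```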
With reference to the third possible implementation manner of the first aspect, in a seventh possible implementation manner, sending reminder information to the corresponding user terminal based on the determined dog owner comprises: determining the corresponding reminder information based on the label of the abnormal behavior, determining a reminder mode based on the reminder type, and sending the reminder information to the corresponding user terminal in the determined reminder mode.
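A sketch of label-driven reminder dispatch. The behavior labels and reminder channels below are assumptions, since the patent only states that the mode is derived from the reminder type:

```python
# Assumed mapping from abnormal-behavior label to reminder mode.
REMINDER_MODES = {
    "off-leash": "app_push",
    "aggressive": "sms",
    "stray": "phone_call",
}

def build_reminder(behavior_label):
    """Pick the reminder mode for an abnormal-behavior label and build
    the reminder message sent to the user terminal."""
    mode = REMINDER_MODES.get(behavior_label, "app_push")
    message = f"Abnormal canine behavior detected: {behavior_label}"
    return mode, message

mode, message = build_reminder("aggressive")
```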
In a second aspect, a civilized canine-keeping terminal operation device based on big data comprises: a video image acquisition module for collecting video images in a target area based on the video acquisition terminal; an image acquisition module for identifying the video images based on a preset detection frame to obtain images containing dogs; an information acquisition module for acquiring a to-be-detected video containing the canine images and a plurality of canine images; an abnormal-behavior acquisition module for detecting abnormal behaviors of the dogs in the to-be-detected video based on a preset canine abnormal-behavior detection model; a target canine image acquisition module for comparing the images of dogs exhibiting abnormal behavior with the plurality of basic images in the canine database to obtain the target canine image; and an information processing module for determining the dog owner from the associated information corresponding to the target canine image, sending reminder information to the corresponding user terminal based on the determined owner, and storing the abnormal-behavior information, in time-sequence order, into the storage space of the corresponding dog in the canine database.
In a third aspect, a terminal device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing a method as described in any of the preceding aspects when executing the computer program.
According to the technical scheme, video images of the area are acquired, the dogs in the acquired images are detected, the abnormal behaviors of the detected dogs are identified, the dogs exhibiting abnormal behavior are matched against the target dogs in the configured canine database, the corresponding user information is obtained from the determined target dog, and the abnormal behavior is sent to the corresponding user side, thereby realizing management of abnormal canine behavior. With the configured model and comparison method, the result can be judged accurately, and the dogs are managed through the configured information reminding method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
The methods, systems, and/or programs in the accompanying drawings will be further described in terms of exemplary embodiments, which are described in detail with reference to the drawings. These exemplary embodiments are non-limiting; in them, like numerals represent like structures throughout the several views of the drawings.
Fig. 1 is a schematic structural diagram of a system provided in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Fig. 3 is a flow chart of a method of operation of a big-data-based civilized canine keeping terminal shown in some embodiments of the present application.
Fig. 4 is a block diagram of a civilized canine keeping terminal operation device based on big data, as shown in some embodiments of the present application.
Detailed Description
In order to better understand the technical solutions described above, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features of the embodiments of the present application are detailed illustrations of the technical solutions, not limitations on them, and the technical features of the various embodiments may be combined with each other without conflict.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it will be apparent to one skilled in the art that the present application may be practiced without these details. In other instances, well-known methods, procedures, systems, components, and/or circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present application.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be clearly understood that the operations need not be performed exactly in order; they may instead be performed in reverse order or concurrently. Moreover, at least one other operation may be added to a flowchart, and one or more operations may be removed from it.
Before describing embodiments of the present invention in further detail, the terms and terminology involved in the embodiments of the present invention will be described, and the terms and terminology involved in the embodiments of the present invention will be used in the following explanation.
(1) In response to: indicates the condition or state on which a performed operation depends; when that condition or state is satisfied, the operation or operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the execution order of multiple operations performed.
(2) Based on: indicates the condition or state on which a performed operation depends; when that condition or state is satisfied, the operation or operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the execution order of multiple operations performed.
(3) Convolutional neural networks, a mathematical or computational model that mimics the structure and function of biological neural networks (the central nervous system of animals, particularly the brain), are used to estimate or approximate functions.
(4) Classifier, which is a generic term of a method for classifying samples in data mining, including algorithms such as decision tree, logistic regression, naive bayes, neural network, etc., and in this embodiment, refers to neural network algorithm.
The abnormal-behavior recognition method provided by the embodiment of the invention relates to the field of artificial intelligence (Artificial Intelligence, AI). Artificial intelligence is a comprehensive discipline spanning both hardware-level and software-level technologies. Its infrastructure technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. Its software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
The main application scenario of the technical solution provided by the embodiment of the application is recognizing abnormal canine behaviors in a target area; the target area currently configured is a residential community, i.e., the solution mainly recognizes abnormal behaviors of dogs in the community and sends the corresponding abnormal behavior to the corresponding user in a corresponding reminder mode, thereby realizing canine management. The abnormal behaviors of a user's dog are recorded and stored, and the dog is rated over a set time period, the rating being based on the number of its abnormal behaviors. Based on the abnormal behaviors and the ratings, the dogs and their corresponding users are managed, for example by raising property fees or imposing fines. For fines, the dog's abnormal behavior is reported to the city management unit, and the corresponding person is fined by the city management unit holding the legal authority.
In this embodiment, abnormal canine behaviors are identified from video images acquired by the video monitoring devices already configured in the residential community, without configuring additional equipment. The recognition mainly comprises the following process: extracting the video frames containing dogs from the video image, identifying the behaviors of the dogs in those frames to obtain a recognition result, and classifying the abnormal behaviors based on the recognition result to obtain the abnormal-behavior classification result.
Referring to fig. 1, based on the above technical background, an embodiment of the present application provides a canine operation system 10 comprising a terminal device 200, which mainly implements canine operation management, together with a canine database 100, a community property management terminal 300, a user terminal 400 and a video acquisition terminal 500 in communication with the terminal device. In this embodiment, the canine database is configured with the basic information of the dogs in the residential community, the basic information comprising the basic image corresponding to each dog and associated information, i.e., information describing the dog's situation, such as the corresponding user information, vaccination information and age information. The community property management terminal is used by the community property manager. The user terminal receives the information sent by the terminal device, namely the abnormal-behavior information of the corresponding dog, and can also receive information acquisition commands and other pushed information from the community property management terminal. The video acquisition terminal collects video images in the community and sends them to the terminal device for abnormal-behavior recognition.
In this embodiment, the system may further be configured with a public security dog registration terminal 600, a city management inspection and law enforcement terminal 700 and a pet hospital management terminal 800, which fulfil the corresponding management responsibilities and functions; the terminal device may obtain the openly shared information of these terminals through a communication interface, thereby realizing canine management.
In this embodiment, information is sent over a network, and an association must be established between the terminal device, the user terminal and the other terminals before use; specifically, the association between the terminal device and a user terminal is established through registration of the user terminal. The terminal device may serve a plurality of user terminals, which communicate with it using passwords and other encryption.
Based on the above technical background, referring to fig. 2, the terminal device 200 provided in the embodiment of the present application includes a memory 210, a processor 220, and a computer program stored in the memory and capable of running on the processor, where the processor executes the method for operating a canine keeping terminal, so as to identify abnormal behaviors of a canine and manage the canine.
In this embodiment, the terminal may be a server, and includes a memory, a processor, and a communication unit for the physical structure of the server. The memory, the processor and the communication unit are electrically connected with each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory is used for storing specific information and programs, and the communication unit is used for sending the processed information to the corresponding user side.
In this embodiment, the storage module is divided into two storage areas: a program storage unit and a data storage unit. The program storage unit is equivalent to a firmware area; its read-write permission is set to read-only, and the data stored there cannot be erased or changed. The data in the data storage unit can be erased or read and written, and when the data storage area is full, newly written data overwrites the earliest historical data.
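The overwrite-oldest policy of the data storage unit behaves like a fixed-capacity ring buffer, which `collections.deque` models directly; the capacity of 3 below is purely illustrative.

```python
from collections import deque

class DataStorageUnit:
    """Readable/writable store: when capacity is full, newly written
    data overwrites the earliest historical data."""
    def __init__(self, capacity):
        self.records = deque(maxlen=capacity)

    def write(self, record):
        self.records.append(record)  # oldest entry drops off when full

    def read_all(self):
        return list(self.records)

store = DataStorageUnit(capacity=3)
for r in ["rec1", "rec2", "rec3", "rec4"]:
    store.write(r)
```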
The Memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
The processor may be an integrated circuit chip having signal processing capabilities. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Referring to fig. 3, this embodiment provides a method for operating a civilized canine-keeping terminal based on big data, comprising the following steps:
and S310, acquiring video images in a target area based on the video acquisition terminal.
In this embodiment, abnormal canine behaviors are identified mainly from video images, which are collected by the video acquisition terminal. The video acquisition terminal need only be the monitoring video equipment already configured in the existing residential community; no additional video acquisition equipment has to be installed, and recognition is performed on the daily real-time video image information that the terminal collects.
And S320, identifying the video image based on a preset detection frame to obtain an image containing dogs.
In this embodiment, the basis for recognizing abnormal canine behavior is to obtain images containing dogs from the video images. In this method, the video images are identified mainly by means of preset detection frames to obtain target images containing dogs. The method mainly comprises the following steps:
dividing a current frame in the video image into a plurality of areas, identifying and anchoring each area separately, and obtaining a plurality of detection frames in each area; each detection frame corresponds to a probability value and a center point, and each center point corresponds to an object type. The video image contains object types such as people, dogs and buildings. First, the current frame in the video image is divided into N×M regions by rows and columns; each region is identified and anchored separately to obtain A anchor frames of different sizes, where each frame corresponds to a probability value and a center point, and each center point corresponds to one of B object types in total.
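The N×M grid-and-anchor scheme described above can be sketched as follows. The image size, grid dimensions and anchor sizes here are assumed illustrative values, not figures from the patent:

```python
# Illustrative sketch of the N x M grid-and-anchor scheme: each grid cell
# receives A anchor frames of different preset sizes, centred in the cell.
# Each anchor would later receive a probability value and an object type.

def make_anchors(img_w, img_h, n_cols, n_rows, sizes):
    """Return one anchor box (cx, cy, w, h) per size, centred in each grid cell."""
    cell_w, cell_h = img_w / n_cols, img_h / n_rows
    anchors = []
    for r in range(n_rows):
        for c in range(n_cols):
            cx = (c + 0.5) * cell_w   # centre point of the cell
            cy = (r + 0.5) * cell_h
            for (w, h) in sizes:
                anchors.append((cx, cy, w, h))
    return anchors

# 4 x 3 cells, A = 3 anchor sizes -> 36 candidate detection frames
anchors = make_anchors(640, 480, n_cols=4, n_rows=3,
                       sizes=[(32, 32), (64, 64), (128, 64)])
```

In a real detector the per-anchor probability and class would come from the network head; here only the geometric layout is shown.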
And selecting, in the current frame, all quasi-target center points corresponding to the target object type for retention, and re-framing each quasi-target center point according to the preset length and width of its corresponding object type to obtain a plurality of quasi-target frames. For this step, all quasi-target center points corresponding to dogs are retained in the current frame, and each is re-framed according to the preset length and width for dogs to obtain a plurality of quasi-target frames; the length and width of these quasi-target frames may exactly match the height and width of the dog, or may exceed them.
And performing de-duplication on the multiple quasi-target frames to obtain a target object frame containing a dog. For this step, the multiple quasi-target frames of the dog are de-duplicated to obtain the target object frame. In the invention, class (breed) screening is added before the de-duplication step, so that object types of no interest are filtered out first; this reduces the computation required by non-maximum suppression (NMS), improves data processing speed, and saves time.
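The screen-then-deduplicate idea above can be sketched with a plain greedy NMS. The box format `(x1, y1, x2, y2, score, cls)` and the IoU threshold are assumed conventions, not prescribed by the patent:

```python
# Class screening before NMS: boxes whose object type is not the target class
# are dropped first, so the pairwise IoU comparisons run on fewer boxes.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, ...) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms_for_class(boxes, target_cls, iou_thr=0.5):
    # 1. breed/class screening: keep only the class of interest
    cand = sorted((b for b in boxes if b[5] == target_cls),
                  key=lambda b: b[4], reverse=True)
    kept = []
    # 2. standard greedy NMS on the reduced candidate set
    for b in cand:
        if all(iou(b, k) < iou_thr for k in kept):
            kept.append(b)
    return kept

boxes = [(0, 0, 10, 10, 0.9, "dog"), (1, 1, 10, 10, 0.8, "dog"),
         (50, 50, 60, 60, 0.7, "person"), (50, 50, 60, 60, 0.6, "dog")]
kept = nms_for_class(boxes, "dog")   # the "person" box never enters NMS
```

Because the screening step shrinks the candidate list, the quadratic-in-the-worst-case NMS loop touches fewer boxes, which is the time saving the passage refers to.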
S330, obtaining a video to be detected containing the canine images and a plurality of canine images.
In this embodiment, the target detection frame is used to locate the segments of the video image that contain dogs, and canine images are then acquired from those segments. This step mainly serves the subsequent recognition of abnormal canine behavior and the comparison with the basic canine images in the canine database, from which the target dog is determined.
Specifically, in the present embodiment, this process includes the following steps: labeling the target object frame, and acquiring from the video image a video to be detected containing the target object frame based on the labeling information; extracting video frames from the video to be detected to obtain a plurality of to-be-processed video frames containing a plurality of target detection frames; and extracting the images inside the target detection frames in the plurality of to-be-processed video frames to obtain a plurality of canine images.
In this embodiment, the target detection frame is labeled with specific information, which may be preset keywords. The keywords may be easy-to-identify words such as "dogs" or "targets", or special text symbols. The labeling is used to locate, within the whole video image, the specific images containing dogs; since the video segments containing dogs are determined based on the target detection frame, the video images containing dogs can be retrieved rapidly once the target detection frame has been labeled.
And S340, detecting abnormal behaviors of dogs in the videos to be detected based on a preset abnormal behavior detection model of the dogs, and obtaining abnormal behaviors of the dogs.
In this embodiment, the abnormal behavior of the dog is obtained through an abnormal canine behavior detection model, which includes an abnormal behavior recognition network and a classifier. The abnormal behavior recognition network is used to extract abnormal behavior features from the plurality of canine images, and the classifier is used to classify those features to determine the abnormal behavior.
The method comprises the following steps:
and identifying the plurality of canine images according to an abnormal behavior identification network meeting the network convergence requirement, and obtaining the abnormal behavior characteristics in the corresponding plurality of canine images.
And classifying the abnormal behavior features according to the classifier meeting the training result requirement to obtain a classification label, and determining the abnormal behavior based on the classification label.
In this embodiment, the abnormal behavior recognition network that meets the network convergence requirement is obtained through model training: a configured initial abnormal behavior recognition network has its weight values adjusted until convergence. This specifically includes the following steps:
An initial abnormal behavior recognition network is configured. In this embodiment it is a convolutional neural network comprising a data input layer, convolutional layers and pooling layers; the feature mapping portion mainly comprises fully-connected layers and an output layer, and four convolutional layers extract canine facial features from low level to high level. In the network, two different pooling layers are placed after the first two convolutional layers: a comprehensive pooling layer and a spatial pyramid pooling layer. The SPP layer of the pyramid pooling is arranged before the last convolutional layer and contains pooling windows of 1×1, 2×2 and 3×3; a splicing layer follows the SPP layer and splices the outputs of the three pooling windows. The classification network consists of three fully-connected layers, which mainly map the feature space to a plurality of discrete labels. There are two output layers in total: a processing layer, which performs normalization and loss-function processing (the loss is used to compute the network loss and for back-propagation), and an Accuracy layer, which computes the accuracy on the validation set. In this embodiment, the PReLU function is used as the activation function of the convolutional neural network.
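The splicing of the 1×1, 2×2 and 3×3 pooling windows in the SPP layer can be sketched in pure Python for a single-channel feature map. This is an illustration of the pooling arithmetic only, not the patent's actual network implementation:

```python
# Minimal single-channel sketch of spatial pyramid pooling: the feature map
# is max-pooled into 1x1, 2x2 and 3x3 grids of windows and the three outputs
# are spliced into one fixed-length vector (1 + 4 + 9 = 14 values), which is
# why SPP yields a fixed-size output regardless of input resolution.

def spp(feat):  # feat: H x W list of lists
    h, w = len(feat), len(feat[0])
    out = []
    for bins in (1, 2, 3):            # the three pooling windows
        for by in range(bins):
            for bx in range(bins):
                y0, y1 = by * h // bins, (by + 1) * h // bins
                x0, x1 = bx * w // bins, (bx + 1) * w // bins
                out.append(max(feat[y][x] for y in range(y0, y1)
                                          for x in range(x0, x1)))
    return out

vec = spp([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]])   # always 14 values, here from a 3x3 map
```

The fixed 14-element output is what lets the subsequent fully-connected layers accept images of varying size.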
To meet the network convergence requirement, the weight values of the initial abnormal behavior recognition network are adjusted by the following method:
setting an initial learning rate, iterating the initial detection model based on the initial learning rate until the loss function converges, and updating the weight values in the initial detection model by stochastic gradient descent, to obtain the target abnormal behavior recognition network.
In this embodiment, the initial learning rate is set to 0.0001 and is multiplied by 0.1 every ten thousand iterations; the weight values in the network are updated using stochastic gradient descent, and the batch size of the convolutional neural network is set to 128 according to the available video memory, i.e., 128 samples are drawn from the training set for each training step.
In this embodiment, the classifier is also obtained through training, and the training method for the classifier is as follows:
configuring a pre-trained initial classifier; obtaining a plurality of canine sample images; labeling each sample image and assigning classification labels; extracting features from the plurality of sample images using the abnormal behavior recognition network that meets the network convergence requirement, to obtain a feature value for each sample image; normalizing the classification labels and the feature values and converting them into a unified format to obtain training data; and training the initial classifier on the training data until the output precision of the classifier reaches a preset precision, thereby obtaining the trained classifier.
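The normalize-and-unify step above can be sketched as follows. The min-max normalization, the scalar feature values and the label names are all assumptions for illustration; the patent does not specify them:

```python
# Sketch of the training-data preparation: feature values from the recognition
# network and their class labels are normalised and packed into one uniform
# (feature, label_id) format ready for classifier training.

def build_training_data(features, labels, label_names):
    lo, hi = min(features), max(features)
    span = hi - lo or 1.0
    norm = [(f - lo) / span for f in features]              # min-max normalisation
    idx = {name: i for i, name in enumerate(label_names)}   # label -> integer id
    return [(x, idx[y]) for x, y in zip(norm, labels)]

data = build_training_data([2.0, 4.0, 6.0],
                           ["bark", "bite", "bark"],   # hypothetical labels
                           ["bark", "bite"])
```

Each `(feature, label_id)` pair is one training example in the unified format the passage describes.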
S350, comparing the canine images in the dogs with abnormal behaviors with a plurality of basic images in the canine database to obtain target canine images.
In this embodiment, the canine images obtained in step S330 are multiple images, from which an optimal image must be selected; the optimal image is the canine image that best represents the dog's appearance. The determination of the optimal image is based on the area of the contour in each canine image, and the contour area is in turn obtained from the grayscale map of the canine image. Specifically, the plurality of canine images are binarized to obtain a plurality of preprocessed image contours, the area value of each preprocessed image contour is obtained, and the area values are compared to determine the optimal image.
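The binarize-and-compare-areas selection can be sketched in a few lines. Using the foreground pixel count as the area value, and the threshold of 128, are simplifying assumptions; real contour extraction would trace connected boundaries:

```python
# Optimal-image selection sketch: each grayscale canine image is binarised
# against a threshold and the image with the largest foreground area wins.

def contour_area(gray, threshold=128):
    """Binarise a grayscale image (list of rows) and return the foreground
    pixel count as a stand-in for the contour area value."""
    return sum(1 for row in gray for px in row if px >= threshold)

def optimal_image(gray_images):
    """Index of the image whose binarised contour covers the largest area."""
    return max(range(len(gray_images)), key=lambda i: contour_area(gray_images[i]))

imgs = [[[0, 200], [0, 0]],         # area 1 after binarisation
        [[200, 200], [0, 255]]]     # area 3 after binarisation
best = optimal_image(imgs)
```

The image with the largest area presumably shows the dog most fully, which is why it "best illustrates" the animal for the subsequent feature-point comparison.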
For the obtained optimal image, a target feature map is acquired; a plurality of target detection points in the target feature map are obtained along with their coordinate parameters; the relative distances among the target detection points are computed from these coordinate parameters; and these relative distances are compared with the relative distances of the corresponding target detection points in the canine database to obtain the target canine image.
With respect to the above method, the method comprises the following steps:
and comparing the similarity of the relative distance of any target detection point with the relative distance of the corresponding target detection point in the canine database, and obtaining the similarity of the relative distances of a plurality of target detection points.
And fusing the similarity of the relative distances of the plurality of target detection points to obtain final similarity.
And determining the corresponding target canine image based on the final similarity.
In this embodiment, the main process is to obtain the corresponding target canine image through similarity comparison.
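The distance comparison and similarity fusion described in these steps can be sketched as follows. The per-distance similarity formula and the averaging used for fusion are illustrative assumptions; the patent specifies only that per-point similarities are computed and then fused:

```python
# Sketch of target-dog matching: pairwise relative distances between feature
# points are compared to each database entry, per-distance similarities are
# fused (here by averaging -- an assumption), and the best entry is returned.
import math

def rel_distances(points):
    """All pairwise distances between (x, y) feature points."""
    return [math.dist(points[i], points[j])
            for i in range(len(points)) for j in range(i + 1, len(points))]

def similarity(d1, d2):
    """Per-pair similarity in (0, 1]; this particular formula is illustrative."""
    return 1.0 / (1.0 + abs(d1 - d2))

def match(points, database):       # database: {dog_name: [feature points]}
    query = rel_distances(points)
    def fused(name):
        sims = [similarity(a, b)
                for a, b in zip(query, rel_distances(database[name]))]
        return sum(sims) / len(sims)   # fusion step: final similarity
    return max(database, key=fused)

db = {"rex":  [(0, 0), (3, 0), (0, 4)],    # hypothetical database entries
      "spot": [(0, 0), (9, 0), (0, 1)]}
winner = match([(0, 0), (3, 0), (0, 4)], db)
```

Using relative distances rather than raw coordinates makes the comparison insensitive to where the dog appears in the frame.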
S360, determining a dog owner based on the associated information corresponding to the target dog image, sending reminding information to the corresponding user terminal based on the determined dog owner, and storing the abnormal behavior information into a storage space of the corresponding dog in the dog database based on a time sequence mode.
In this method, the corresponding reminder information is determined mainly based on the label of the abnormal behavior, the reminder mode is determined based on the reminder type, and the reminder information is sent to the corresponding user terminal in the determined mode. In this embodiment, an abnormality grade is assigned to the detected abnormal canine behavior through configured evaluation tags, and the delivery mode of the reminder is determined by that grade. For example, if indiscriminate excretion by a dog is detected and the configured grade label for that behavior is grade three, it can be determined from the grade-three label that the reminder is to be sent as a message within the user terminal.
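The grade-to-delivery-mode lookup in the excretion example can be sketched as a simple table. The grade numbers and channel names other than the grade-three in-app message are assumptions, not defined by the patent:

```python
# Illustrative mapping from the configured abnormality grade to a reminder
# delivery mode; only grade three -> in-app message is taken from the text.

REMINDER_BY_GRADE = {
    1: "property_management_notice",   # assumed: most severe behaviors
    2: "sms",                          # assumed intermediate channel
    3: "in_app_message",               # e.g. indiscriminate excretion
}

def reminder_for(grade):
    """Delivery mode for a given abnormality grade (in-app message by default)."""
    return REMINDER_BY_GRADE.get(grade, "in_app_message")
```

In the device, the grade would come from the evaluation tag attached by the classifier, and the chosen channel would drive the message sent to the dog owner's terminal.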
In addition, the abnormal behavior is saved to the storage subspace set up for the corresponding target dog; the dog can be scored as a whole over a set time interval, and subsequent management actions can be taken based on the score for that period.
Referring to fig. 4, this embodiment provides a big-data-based civilized dog raising operation device 470, which includes: the video image acquisition module 410, configured to collect video images in the target area based on the video acquisition terminal; the image obtaining module 420, configured to identify the video images based on preset detection frames to obtain images containing dogs; the information obtaining module 430, configured to obtain a video to be detected containing the canine images and a plurality of canine images; the abnormal behavior acquisition module 440, configured to detect abnormal canine behavior in the videos to be detected based on the preset abnormal canine behavior detection model, to obtain the abnormal behavior of the dog; the target canine image acquisition module 450, configured to compare the images of dogs exhibiting abnormal behavior with a plurality of basic images in the canine database to obtain the target canine image; and the information processing module 460, configured to determine the dog owner from the associated information corresponding to the target canine image, send reminder information to the corresponding user terminal based on the determined owner, and store the abnormal behavior information, in time-sequence order, in the storage space of the corresponding dog in the canine database.
Based on the system, method and device provided by this embodiment, video images in the area are collected, dogs are identified in the collected images, abnormal behavior recognition is performed on the identified dogs, the target dog corresponding to the abnormal behavior is determined, the corresponding user information is obtained from the determined target dog, and the abnormal behavior is sent to the corresponding user terminal, thereby realizing management of abnormal canine behavior. With the configured models and comparison method, results can be judged accurately, and dogs are managed through the configured information reminder method.
The apparatus provided in this embodiment, in addition to the configuration described above, may also be embodied as a computer program product on one or more computer readable media, where the product contains computer readable program code.
The computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer readable signal medium may be propagated through any suitable medium including radio, electrical, fiber optic, RF, or the like, or any combination of the foregoing.
Computer program code required to execute aspects of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python; conventional procedural languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP; dynamic programming languages such as Python, Ruby and Groovy; or other programming languages. The program code may execute entirely on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service such as software as a service (SaaS).
It is to be understood that terms not explicitly defined in the foregoing description are not thereby ambiguous, as a person skilled in the art can unambiguously derive their meaning from the foregoing disclosure.
A person skilled in the art can without doubt determine technical features/terms labeled "preset", "reference", "predetermined", "set" and "preferred", such as threshold values, threshold intervals and threshold ranges, from the above disclosure. For technical feature terms that are not explained, a person skilled in the art can reasonably and unambiguously derive their meaning from the logical relations of the context, so that the technical solution can be implemented clearly and completely. Prefixes of unexplained technical feature terms, such as "first", "second", "example" and "target", can likewise be unambiguously deduced from the context, as can suffixes such as "set" and "list".
The above disclosure of the embodiments of the present application will thus be clear and complete to a person skilled in the art. It should be appreciated that the process by which a skilled person derives and analyzes technical terms not explained above is based on what is described in the present application, and the above is therefore not an inventive judgment of the overall solution.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements and adaptations may occur to one skilled in the art. Such modifications, improvements and adaptations are suggested in this application and therefore remain within the spirit and scope of its exemplary embodiments.
Meanwhile, the present application uses specific terminology to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of at least one embodiment of the present application may be combined as suitable.
In addition, those of ordinary skill in the art will understand that the various aspects of the present application may be illustrated and described in terms of several patentable categories or cases, including any novel and useful processes, machines, products, or combinations of materials, or any novel and useful improvements thereto. Accordingly, aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "unit," component, "or" system.
Furthermore, the order in which the processing elements and sequences are described, the use of numerical letters, or other designations are used is not intended to limit the order in which the processes and methods of the present application are performed, unless specifically indicated in the claims. While in the foregoing disclosure there has been discussed, by way of various examples, some embodiments of the invention which are presently considered to be useful, it is to be understood that this detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments of this application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
It should also be appreciated that in the foregoing description of the embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of at least one of the embodiments of the invention. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are recited in the claims. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.

Claims (4)

1. The civilized canine-keeping terminal operation method based on big data is characterized in that the terminal is communicated with a canine database, a community property management terminal, a user terminal and a video acquisition terminal, wherein the canine database is configured with canine basic information, and the canine basic information comprises basic images and associated information corresponding to dogs; the operation method comprises the following steps:
collecting video images in a target area based on the video collecting terminal;
identifying the video image based on a preset detection frame to obtain an image containing dogs;
acquiring a video to be detected containing the canine images and a plurality of canine images;
detecting abnormal behaviors of dogs in the videos to be detected based on a preset abnormal behavior detection model of the dogs, so as to obtain abnormal behaviors of the dogs;
comparing the canine images in the dogs with abnormal behaviors with a plurality of basic images in the canine database to obtain target canine images;
determining a canine owner based on the associated information corresponding to the target canine image, sending reminding information to the corresponding user terminal based on the determined canine owner, and storing the abnormal behavior information into a storage space of the corresponding canine in the canine database based on a time sequence mode;
Dividing a current frame in the video image into a plurality of areas, respectively identifying and anchoring each area, and obtaining a plurality of detection frames in each area; each detection frame corresponds to a probability value and a center point, and each center point corresponds to an object type;
selecting all quasi-target center points corresponding to target object types to be reserved in the current frame, and carrying out frame selection again according to preset length and width of each quasi-target center point and corresponding object types to obtain a plurality of quasi-target frames;
performing de-duplication on the multiple quasi-target frames to obtain a target object frame containing dogs;
labeling the target object frame, and acquiring a video to be detected containing the target object frame from the video image based on labeling information;
extracting video frames in the video to be detected to obtain a plurality of video frames to be processed containing a plurality of target object frames;
extracting images in target object frames in a plurality of video frames to be processed to obtain a plurality of canine images;
identifying a plurality of canine images according to an abnormal behavior identification network meeting network convergence requirements, and obtaining abnormal behavior characteristics in the corresponding canine images;
Classifying the abnormal behavior features according to a classifier meeting training result requirements to obtain a classification label, and determining abnormal behaviors based on the classification label;
configuring a pre-trained initial classifier, obtaining a plurality of sample images of a derivative canine, labeling each sample image, distributing classification labels, carrying out feature extraction on the plurality of sample images based on an abnormal behavior recognition network meeting network convergence requirements to obtain a feature value of each sample image, carrying out normalization processing on the classification labels and the feature values, converting the normalization processing and the feature values into a unified format to obtain training data, training the initial classifier based on the training data until the output precision of the classifier reaches the preset precision, and obtaining a trained classifier;
processing a plurality of canine images on the basis of a binary method to obtain a plurality of preprocessed image contours, obtaining area values of the preprocessed image contours, and comparing the area values to determine an optimal image;
acquiring a target feature map of the optimal image;
acquiring a plurality of target object points in the target feature map;
acquiring coordinate parameters of a plurality of target object points, and acquiring the relative distances of the plurality of target object points based on the plurality of coordinate parameters;
Comparing the relative distances of the plurality of target object points with the relative distances of the plurality of target object points in the canine database, to obtain a target canine image;
comparing the similarity of the relative distance of any target object point with the relative distance of the corresponding target object point in the canine database to obtain the similarity of the relative distances of a plurality of target object points;
fusing the similarity of the relative distances of a plurality of target object points to obtain final similarity;
and determining the corresponding target canine image based on the final similarity.
2. The big data-based civilized dog raising terminal operation method according to claim 1, wherein sending reminding information to the corresponding user terminal based on the determined canine owner comprises the following steps:
and determining corresponding reminding information based on the labels of the abnormal behaviors, determining a reminding mode based on the reminding information, and sending the reminding information to the corresponding user terminal based on the determined reminding mode.
3. Civilized dog raising terminal operation device based on big data, characterized by comprising:
the video image acquisition module is used for acquiring video images in the target area based on the video acquisition terminal;
The image acquisition module is used for identifying the video image based on a preset detection frame to obtain an image containing dogs;
the information acquisition module is used for acquiring a video to be detected containing the canine images and a plurality of canine images;
the abnormal behavior acquisition module is used for detecting abnormal behaviors of dogs in the videos to be detected based on a preset abnormal behavior detection model of the dogs to obtain abnormal behaviors of the dogs;
the target canine image acquisition module is used for comparing canine images in the dogs with abnormal behaviors with a plurality of basic images in the canine database to obtain target canine images;
the information processing module is used for determining a canine owner from the associated information corresponding to the target canine image, sending reminding information to a corresponding user terminal based on the determined canine owner, and storing the abnormal behavior information into a storage space of a corresponding canine in the canine database based on a time sequence mode; dividing a current frame in the video image into a plurality of areas, respectively identifying and anchoring each area, and obtaining a plurality of detection frames in each area; each detection frame corresponds to a probability value and a center point, and each center point corresponds to an object type;
Selecting all quasi-target center points corresponding to target object types to be reserved in the current frame, and carrying out frame selection again according to preset length and width of each quasi-target center point and corresponding object types to obtain a plurality of quasi-target frames;
performing de-duplication on the multiple quasi-target frames to obtain a target object frame containing dogs;
labeling the target object frame, and acquiring a video to be detected containing the target object frame from the video image based on labeling information;
extracting video frames in the video to be detected to obtain a plurality of video frames to be processed containing a plurality of target object frames;
extracting images in target object frames in a plurality of video frames to be processed to obtain a plurality of canine images;
identifying a plurality of canine images according to an abnormal behavior identification network meeting network convergence requirements, and obtaining abnormal behavior characteristics in the corresponding canine images;
classifying the abnormal behavior features according to a classifier meeting training result requirements to obtain a classification label, and determining abnormal behaviors based on the classification label;
configuring a pre-trained initial classifier, obtaining a plurality of sample images of a derivative canine, labeling each sample image, distributing classification labels, carrying out feature extraction on the plurality of sample images based on an abnormal behavior recognition network meeting network convergence requirements to obtain a feature value of each sample image, carrying out normalization processing on the classification labels and the feature values, converting the normalization processing and the feature values into a unified format to obtain training data, training the initial classifier based on the training data until the output precision of the classifier reaches the preset precision, and obtaining a trained classifier;
Processing a plurality of canine images on the basis of a binary method to obtain a plurality of preprocessed image contours, obtaining area values of the preprocessed image contours, and comparing the area values to determine an optimal image;
acquiring a target feature map of the optimal image;
acquiring a plurality of target object points in the target feature map;
acquiring coordinate parameters of a plurality of target object points, and acquiring the relative distances of the plurality of target object points based on the plurality of coordinate parameters;
comparing the relative distances of the plurality of target object points with the relative distances of the corresponding target object points in a canine database to determine a target canine image;
comparing the similarity of the relative distance of any target object point with the relative distance of the corresponding target object point in the canine database to obtain the similarity of the relative distances of a plurality of target object points;
fusing the similarity of the relative distances of a plurality of target object points to obtain final similarity;
and determining the corresponding target canine image based on the final similarity.
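The similarity comparison and fusion described above might be sketched as follows; the per-pair similarity formula and the mean as the fusion rule are assumptions, since the claim only states that the similarities are fused into a final similarity:

```python
import numpy as np

def pair_similarity(d_query, d_ref, eps=1e-9):
    """Similarity in [0, 1] for each relative distance; 1 means identical."""
    q, r = np.asarray(d_query, float), np.asarray(d_ref, float)
    return 1.0 - np.abs(q - r) / np.maximum(np.maximum(q, r), eps)

def best_match(d_query, database):
    """Fuse the per-point similarities (here, by mean) into a final
    similarity and return the closest database entry."""
    scores = {k: float(pair_similarity(d_query, v).mean())
              for k, v in database.items()}
    return max(scores, key=scores.get), scores

db = {"dog_a": [5.0, 4.0, 3.0], "dog_b": [10.0, 1.0, 1.0]}
name, scores = best_match([5.0, 4.0, 3.0], db)
```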
4. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to claim 1 or 2 when executing the computer program.
CN202211213842.6A 2022-09-30 2022-09-30 Civilized dog raising terminal operation method and device based on big data and terminal Active CN115424211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211213842.6A CN115424211B (en) 2022-09-30 2022-09-30 Civilized dog raising terminal operation method and device based on big data and terminal


Publications (2)

Publication Number Publication Date
CN115424211A CN115424211A (en) 2022-12-02
CN115424211B true CN115424211B (en) 2023-05-23

Family

ID=84206899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211213842.6A Active CN115424211B (en) 2022-09-30 2022-09-30 Civilized dog raising terminal operation method and device based on big data and terminal

Country Status (1)

Country Link
CN (1) CN115424211B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831442A (en) * 2011-06-13 2012-12-19 索尼公司 Abnormal behavior detection method and equipment and method and equipment for generating abnormal behavior detection equipment
CN111275014A (en) * 2020-02-28 2020-06-12 恒大智慧科技有限公司 Community pet management method, community server and storage medium
CN111447410A (en) * 2020-03-24 2020-07-24 安徽工程大学 Dog state identification monitoring system and method


Similar Documents

Publication Publication Date Title
CN109816200B (en) Task pushing method, device, computer equipment and storage medium
CN104239386A Method and system for prioritization of facial recognition matches
CN106372572A (en) Monitoring method and apparatus
CN110751675B (en) Urban pet activity track monitoring method based on image recognition and related equipment
CN105989174B (en) Region-of-interest extraction element and region-of-interest extracting method
US11429820B2 (en) Methods for inter-camera recognition of individuals and their properties
CN112200081A (en) Abnormal behavior identification method and device, electronic equipment and storage medium
US11275970B2 (en) Systems and methods for distributed data analytics
CN111539317A (en) Vehicle illegal driving detection method and device, computer equipment and storage medium
CN110458214B (en) Driver replacement recognition method and device
CN112989334A (en) Data detection method for machine learning and related equipment
CN110533094B (en) Evaluation method and system for driver
CN113591512A (en) Method, device and equipment for hair identification
Jung et al. An AIoT Monitoring System for Multi-Object Tracking and Alerting.
CN115424211B (en) Civilized dog raising terminal operation method and device based on big data and terminal
KR102342495B1 (en) Method and Apparatus for Creating Labeling Model with Data Programming
CN115719428A (en) Face image clustering method, device, equipment and medium based on classification model
CN111259832B (en) Method, device, machine-readable medium and system for identifying dogs
CN114519900A (en) Riding method and device, electronic equipment and storage medium
CN113592902A (en) Target tracking method and device, computer equipment and storage medium
CN114359783A (en) Abnormal event detection method, device and equipment
CN111191066A (en) Image recognition-based pet identity recognition method and device
US20240153275A1 (en) Determining incorrect predictions by, and generating explanations for, machine learning models
CN115587896B (en) Method, device and equipment for processing canine medical insurance data
US20230360402A1 (en) Video-based public safety incident prediction system and method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant