CN109784208B - Image-based pet behavior detection method - Google Patents

Image-based pet behavior detection method

Info

Publication number
CN109784208B
Authority
CN
China
Prior art keywords
pet
image
detection model
posture
training
Prior art date
Legal status
Active
Application number
CN201811601328.3A
Other languages
Chinese (zh)
Other versions
CN109784208A (en)
Inventor
刘军
孙思琪
刘洋
Current Assignee
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date
Filing date
Publication date
Application filed by Wuhan Institute of Technology
Priority to CN201811601328.3A
Publication of CN109784208A
Application granted
Publication of CN109784208B
Legal status: Active (current)
Anticipated expiration


Abstract

The invention relates to an image-based pet behavior detection method comprising the following steps: first, capturing a plurality of image samples of each posture of a pet with a camera module, and labeling and classifying the pet posture in each image sample to obtain an image data set containing each posture of the pet; establishing a detection model and setting the error rate and the number of training iterations; importing the image data set into the detection model and training it for the number of iterations preset in S2 to obtain a detection model for detecting pet behavior; when the behavior of the pet is to be detected, capturing a scene image of the pet with the camera module and inputting it into the detection model, which extracts the information in the scene image and outputs the probability that the scene image belongs to one of the pet's postures. The invention thus provides an image-based pet behavior detection method that helps the owner keep track of the pet's state at any time.

Description

Image-based pet behavior detection method
Technical Field
The invention relates to the field of artificial intelligence in computer science, and more particularly to an image-based pet behavior detection method.
Background
People's quality of life has improved greatly: the concern is no longer only with basic food and clothing but also with a better standard of living. More and more households now keep pets such as cats and dogs, often regarding them as family members, and owners commonly record their pets' many amusing behaviors with monitoring cameras for later viewing. Beyond such playful interaction, owners also wish to understand their pets' daily behavior habits, to describe the pets' various postures in human terms, and thereby to estimate the pets' state.
Disclosure of Invention
The invention aims to provide an image-based pet behavior detection method that addresses the needs described above.
To achieve these objects and other advantages in accordance with the purpose of the invention, there is provided an image-based pet behavior detection method, comprising the steps of:
S1, capturing a plurality of image samples of each posture of a pet with a camera module, and labeling and classifying the pet posture in each image sample to obtain an image data set containing each posture of the pet;
S2, establishing a detection model, and setting the error rate and the number of training iterations;
S3, importing the image data set into the detection model, and training the detection model for the number of iterations preset in S2 to obtain the detection model finally used for detecting pet behavior;
S4, when the behavior of the pet is to be detected, capturing a scene image of the pet with the camera module and inputting the captured scene image into the detection model, which extracts the information in the scene image and outputs the probability that the scene image belongs to one of the pet's postures.
Further, in S1, a specific process of obtaining the image data set is as follows:
the method comprises the steps of shooting 1200 image samples containing each posture of the pet through a camera module, labeling each image sample by using lableimg, wherein labeled information is the characteristic information of the posture, color, texture and outline of the pet in each image sample, and classifying according to the labeled characteristic information in each image sample to obtain an image data set containing each posture of the pet.
Further, the pets used for creating the image data set in S1 are any one or more of a Husky, a Golden Retriever, a white cat and an orange cat, and the labeled pet postures are sitting, lying and standing.
Further, the detection model in S2 is a model combining MobileNet and SSD (MobileNet-SSD).
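The patent does not name an implementation framework for the MobileNet-SSD model. As a non-authoritative sketch, the snippet below uses PyTorch and torchvision's SSDLite detector with a MobileNetV3 backbone as a stand-in for a MobileNet+SSD model; the class count (four breeds times three postures, plus background) is an assumption:

```python
# Sketch: instantiate a MobileNet-backbone SSD detector (stand-in for MobileNet-SSD).
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

NUM_CLASSES = 1 + 12   # background + (4 breeds x 3 postures); an assumption

model = ssdlite320_mobilenet_v3_large(
    weights=None, weights_backbone=None, num_classes=NUM_CLASSES
)
model.train()

# torchvision detection models take a list of image tensors and a list of
# per-image target dicts (boxes in xyxy pixel coordinates, integer labels).
images = [torch.rand(3, 320, 320)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 200.0, 260.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)   # classification + box regression losses
print({k: float(v) for k, v in loss_dict.items()})
```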
Further, the error rate in S2 is 0.00001 and the number of training iterations is 40000.
Further, the specific process of obtaining the detection model finally used for detecting pet behavior in S3 is as follows:
step a, randomly dividing the image data set from S1 into a training set and a test set;
step b, importing the training set into the detection model, training the detection model for the preset number of iterations, and then testing the trained detection model with the test set;
if the accuracy reaches the expected value, taking the trained detection model as the detection model for detecting pet behavior;
if the accuracy does not reach the expected value, swapping the training set and the test set from step a, inputting the test set into the detection model, training for the preset number of iterations, and then testing the trained detection model with the training set;
if the accuracy in step b still does not reach the expected value, repeating step a and step b until the accuracy of the trained detection model reaches the expected value.
Further, the camera module in S1 is a video camera.
Further, the scene image in S4 is a single photo or video.
The invention has the following beneficial effects: through the detection model, the various behaviors of a pet can be detected in pictures or videos taken by the owner, helping the owner keep track of the pet's state at any time, making interaction between owner and pet more engaging, supporting the bond between pet and owner, and increasing both the enjoyment of keeping a pet and the owner's quality of life. In addition, the detection model offers fast detection and accurate results.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of a pet behavior detection method based on images according to the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawing so that those skilled in the art can implement it by referring to the description.
It is to be noted that, unless otherwise specified, the experimental methods described in the following embodiments are conventional methods, and the reagents and materials described therein are commercially available. In the description of the present invention, the terms "lateral", "longitudinal", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer" and the like indicate orientations or positional relationships based on those shown in the drawings; they are used merely for convenience and simplicity of description, do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present invention.
As shown in FIG. 1, an embodiment of the invention provides an image-based pet behavior detection method comprising the following steps:
s1, shooting a plurality of image samples of each posture of a pet through a camera module, marking and classifying the pet posture in each image sample to obtain an image data set containing each posture of the pet; the method comprises the following specific steps: the method comprises the steps of shooting 100 image samples of each behavior of sitting, lying and running of four kinds of pets including Husky, golden hair, white cat and orange cat through a camera module, shooting 1200 image samples in total, carrying out sequential chaos on the 1200 shot image samples before carrying out information labeling on each image sample so as to avoid the influence of personal subjective cognitive attitude on subsequent labeling of information in each image sample, then carrying out labeling on characteristic information of the pet such as attitude, color, texture and outline in each image sample by using labelImg, finally classifying the 1200 image samples according to the characteristic information labeled on each image sample, and storing the classified result into an xml format so as to obtain an image data set containing each posture of the pet.
S2, establishing a detection model, and setting the error rate and the number of training iterations. The lower the error rate and the larger the number of training iterations, the higher the accuracy of the detection model in detecting pet behavior. In this embodiment the error rate is 0.00001 and the number of training iterations is 40000, which is significantly lower than that of comparable existing detection models and helps improve the CPU utilization of the detection device.
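Treating the preset error rate as a stopping threshold on the training loss is one possible reading of this step; the patent does not spell out the optimizer or learning rate, so those are assumptions in the sketch below, which simply runs up to 40000 iterations and stops early if the loss drops below 0.00001. The `model` and `data_loader` objects are assumed to come from the sketches above, with the data loader yielding lists of image tensors and target dicts:

```python
# Sketch of the training schedule: at most 40000 iterations, with an optional
# early stop once the loss falls below the preset 0.00001 threshold.
# Optimizer choice and learning rate are assumptions.
import itertools
import torch

MAX_ITERATIONS = 40000
LOSS_THRESHOLD = 1e-5

def train(model, data_loader, device="cpu"):
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    for step, (images, targets) in enumerate(itertools.cycle(data_loader)):
        if step >= MAX_ITERATIONS:
            break
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss = sum(model(images, targets).values())   # sum of SSD loss terms
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if loss.item() < LOSS_THRESHOLD:              # preset "error rate"
            break
    return model
```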
S3, importing the image data set into the detection model, and training the detection model for the number of iterations preset in S2 to obtain the detection model for detecting pet behavior. The specific operation is as follows: 600 image samples randomly drawn from the 1200 samples in S1 are taken as a first training set and the remaining 600 as a first test set; the first training set is imported into the detection model and trained for 40000 iterations, and the trained detection model is then tested with the first test set. If the accuracy reaches the expected value, the trained detection model is taken as the detection model for detecting pet behavior; otherwise, the first training set and the first test set are swapped, the first test set is input into the detection model and trained for 40000 iterations, and the trained detection model is tested with the first training set. If the result still does not reach the expected value, 700 image samples randomly drawn from the 1200 samples in S1 are taken as a second training set and the remaining 500 as a second test set, and the training and testing above are repeated; if the result again falls short, the second training set and the second test set are swapped and the procedure repeated. This continues until the accuracy of the detection model reaches the expected value, giving the detection model finally used for detecting pet behavior. In this embodiment the expected value is set to 0.95; the trained detection model reaches an accuracy of 0.9563, with the loss value stabilized below 0.005.
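For illustration only, the split/swap/re-split procedure above can be expressed as the following loop; the `train_fn` and `eval_fn` callables are hypothetical placeholders, and measuring accuracy as the fraction of test images whose top-scoring detection matches the annotated posture is an assumption, since the patent only states that accuracy is compared with an expected value of 0.95:

```python
# Sketch of the split-train-test-swap procedure from S3.
import random

EXPECTED_ACCURACY = 0.95

def split(samples, n_train):
    """Randomly divide the sample list into a training and a test portion."""
    shuffled = random.sample(samples, len(samples))
    return shuffled[:n_train], shuffled[n_train:]

def find_acceptable_model(samples, train_fn, eval_fn, split_sizes=(600, 700)):
    for n_train in split_sizes:                  # 600/600 first, then 700/500
        train_set, test_set = split(samples, n_train)
        for _ in range(2):                       # second pass swaps the two sets
            model = train_fn(train_set)
            accuracy = eval_fn(model, test_set)
            if accuracy >= EXPECTED_ACCURACY:
                return model, accuracy
            train_set, test_set = test_set, train_set
    raise RuntimeError("No split reached the expected accuracy; re-split and retry.")
```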
S4, when the behavior of the pet needs to be detected, the image data set is first loaded into the detection model; the scene image of the pet captured by the camera module is then input into the detection model, which extracts the information in the scene image, matches it against the information annotated in the image data set, and outputs the probability that the scene image shows a particular posture of a particular breed of pet in the data set. The scene image captured by the camera module may be a single picture or a video. If it is a single picture, the picture is input into the detection model, which extracts the posture, color, texture and contour features of the pet in the scene image, matches them against the posture, color, texture and contour information annotated in the image data set, and determines the probability that the scene image shows a particular posture of a particular breed in the data set, thereby detecting the behavior of the pet in the scene image. Likewise, if the scene image is a video, the video is input into the detection model, which extracts the posture, color, texture and contour features of the pet in each frame, performs the same matching against the annotated information, and detects the behavior of the pet in the scene frame by frame. In this embodiment the detection model can therefore handle both single pictures and every frame of a video, giving it strong practicality and a wide range of applications. In addition, the trained model has a small memory footprint and is lightweight, so with a suitable runtime environment it can be ported to a mobile phone, allowing the owner to detect the pet's behavior at any time.
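As a rough, non-authoritative sketch of this inference step (continuing the PyTorch/torchvision assumption from above, with OpenCV used only for decoding; the class-name mapping and score threshold are illustrative), single pictures and video frames can be handled with the same per-image routine:

```python
# Sketch: run the trained detector on a single photo or on every video frame
# and report the detected posture classes with their scores.
import cv2
import torch

CLASS_NAMES = {1: "husky_sitting", 2: "husky_lying"}   # illustrative mapping only

@torch.no_grad()
def detect_frame(model, frame_bgr, score_threshold=0.5):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    output = model.eval()([tensor])[0]                  # boxes, labels, scores
    return [(CLASS_NAMES.get(int(l), "unknown"), float(s))
            for l, s in zip(output["labels"], output["scores"])
            if s >= score_threshold]

def detect_image(model, path):
    return detect_frame(model, cv2.imread(path))

def detect_video(model, path):
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield detect_frame(model, frame)                # one result list per frame
    cap.release()
```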
Preferably, the camera module in S1 is a video camera.
In this embodiment, the video camera provides higher-quality pictures. While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it can be applied in various other fields, and further modifications will readily occur to those skilled in the art. The invention is therefore not limited to the details given herein or to the embodiments shown and described, without departing from the general concept defined by the claims and their equivalents.

Claims (7)

1. An image-based pet behavior detection method is characterized by comprising the following steps:
S1, capturing a plurality of image samples of each posture of a pet with a camera module, and labeling and classifying the pet posture in each image sample to obtain an image data set containing each posture of the pet;
S2, establishing a detection model, and setting the error rate and the number of training iterations;
S3, importing the image data set into the detection model, and training the detection model for the number of iterations preset in S2 to obtain a detection model for detecting pet behavior;
S4, when the behavior of the pet is to be detected, capturing a scene image of the pet with the camera module and inputting the captured scene image into the detection model, which extracts the information in the scene image and outputs the probability that the scene image belongs to one of the pet's postures;
the specific process of obtaining the detection model finally used for detecting the pet behavior in the S3 is as follows:
step a, randomly dividing the image data set in the step S1 into a training set and a testing set;
step b, importing the training set into a detection model, training the detection model according to preset iterative training times, and then testing the trained detection model by adopting the test set;
if the accuracy reaches the expected value, taking the trained detection model as a detection model for detecting pet behaviors;
if the accuracy does not reach the expected value, exchanging the training set and the test set in the step a, inputting the test set into the detection model, training according to the preset iterative training times, and then testing the trained detection model by adopting the training set;
and if the accuracy in the step b does not reach an expected value, repeating the step a and the step b until the accuracy of the trained detection model reaches the expected value.
2. The image-based pet behavior detection method according to claim 1, wherein in S1 the specific process of obtaining the image data set is as follows:
1200 image samples covering each posture of the pet are captured with the camera module; each image sample is annotated with labelImg, the annotated information being the posture, color, texture and contour features of the pet in that sample; the samples are then classified according to the annotated feature information, yielding an image data set containing each posture of the pet.
3. The image-based pet behavior detection method according to claim 2, wherein the pets used for creating the image data set in S1 are any one or more of a Husky, a Golden Retriever, a white cat and an orange cat, and the labeled pet postures are sitting, lying and standing.
4. The image-based pet behavior detection method according to claim 1, wherein the detection model in S2 is a model combining MobileNet and SSD.
5. The image-based pet behavior detection method according to claim 1, wherein the error rate in S2 is 0.00001 and the number of training iterations is 40000.
6. The image-based pet behavior detection method according to claim 1, wherein the camera module in S1 is a video camera.
7. The image-based pet behavior detection method of claim 1, wherein the scene image in S4 is a single photo or video.
CN201811601328.3A (priority date 2018-12-26, filing date 2018-12-26): Image-based pet behavior detection method. Status: Active. Granted as CN109784208B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811601328.3A (CN109784208B (en)) | 2018-12-26 | 2018-12-26 | Image-based pet behavior detection method


Publications (2)

Publication Number | Publication Date
CN109784208A (en) | 2019-05-21
CN109784208B (en) | 2023-04-18

Family

ID=66498468

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201811601328.3A | Active | CN109784208B (en) | 2018-12-26 | 2018-12-26 | Image-based pet behavior detection method

Country Status (1)

Country Link
CN (1) CN109784208B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785681B (en) * 2019-11-07 2024-03-08 杭州睿琪软件有限公司 Method and device for generating 3D image of pet
CN111914657B (en) * 2020-07-06 2023-04-07 浙江大华技术股份有限公司 Pet behavior detection method and device, electronic equipment and storage medium
CN111964227B (en) * 2020-07-27 2021-12-21 青岛海尔空调器有限总公司 Air circulation control method, system and device
CN112016537B (en) * 2020-10-27 2021-01-08 成都考拉悠然科技有限公司 Comprehensive mouse detection method based on computer vision
CN112162459B (en) * 2020-11-06 2021-04-27 广州悦享软件科技有限公司 Double-layer water mist projection method and device based on split type intelligent wearable equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6678413B1 (en) * 2000-11-24 2004-01-13 Yiqing Liang System and method for object identification and behavior characterization using video analysis
CN106407711A (en) * 2016-10-10 2017-02-15 重庆科技学院 Recommendation method and recommendation system of pet feeding based on cloud data
CN108681752A (en) * 2018-05-28 2018-10-19 电子科技大学 A kind of image scene mask method based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7643655B2 (en) * 2000-11-24 2010-01-05 Clever Sys, Inc. System and method for animal seizure detection and classification using video analysis
CN100541523C (en) * 2007-09-29 2009-09-16 华为技术有限公司 A kind of object video recognition methods and system based on support vector machine
CN102509085A (en) * 2011-11-19 2012-06-20 江苏大学 Pig walking posture identification system and method based on outline invariant moment features
US20150359201A1 (en) * 2014-06-11 2015-12-17 Chris Kong Methods and Apparatus for Tracking and Analyzing Animal Behaviors
CN104969875A (en) * 2015-07-23 2015-10-14 中山大学深圳研究院 Pet behavior detection system based on image change
CN109064012A (en) * 2018-07-30 2018-12-21 合肥东恒锐电子科技有限公司 A kind of pet continues tracing management monitoring method and system


Also Published As

Publication number Publication date
CN109784208A (en) 2019-05-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant