CN112699842A - Pet identification method, device, equipment and computer readable storage medium - Google Patents

Pet identification method, device, equipment and computer readable storage medium

Info

Publication number
CN112699842A
CN112699842A
Authority
CN
China
Prior art keywords
image
pet
training
target
yolo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110043107.4A
Other languages
Chinese (zh)
Inventor
皮人伟
李静涛
刘超
马绍秋
麦刘伟
唐嘉良
江宁
张胜利
袁华
杨柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jinghe Technology Co ltd
Original Assignee
Shanghai Jinghe Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jinghe Technology Co ltd
Priority to CN202110043107.4A
Publication of CN112699842A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a pet identification method, device, equipment and computer-readable storage medium. The method comprises: enhancing labeled pet images with a generative adversarial network to obtain enhanced training images; training on the enhanced training images to obtain a lightweight target detection model YOLO and an image classification model based on the adder network AdderNet; detecting pets in a captured image with the YOLO model to obtain target images containing only a pet; and inputting the target images into the AdderNet-based image classification model to obtain the category of the pet in each target image. The method reduces the training cost of the neural network models, lowers hardware cost and accelerates inference through YOLO and AdderNet, and thereby ensures real-time, accurate pet identification in smart communities.

Description

Pet identification method, device, equipment and computer readable storage medium
Technical Field
The invention relates to the field of image processing, and in particular to a pet identification method, device, equipment and computer-readable storage medium.
Background
In recent years, deep neural networks have driven the rapid development of artificial intelligence and achieved great success in fields such as image classification, face recognition and target detection. The accuracy of these techniques meets the requirements of practical applications, but state-of-the-art neural network models must be trained on large numbers of images to achieve satisfactory performance, and the trained models carry so many parameters and such a high computational cost that they place heavy demands on device configuration.
At present, smart communities mainly identify pets such as cats and dogs through smart cameras distributed across community areas. These devices have limited memory and computing capacity and cannot run the existing state-of-the-art models; meanwhile, because cats and dogs come in many varieties (dogs alone comprise more than 120 breeds), collecting enough pet images as training data is costly and difficult, making it hard to train a neural network model that meets the requirements of a smart community.
Disclosure of Invention
The invention aims to provide a pet identification method, device, equipment and computer-readable storage medium that reduce the training cost of the neural network models, lower hardware cost and accelerate inference through YOLO and AdderNet, and ensure real-time, accurate pet identification in smart communities.
The technical scheme for solving the above technical problem is as follows: a pet identification method, comprising:
enhancing labeled pet images with a generative adversarial network to obtain enhanced training images;
training on the enhanced training images to obtain a lightweight target detection model YOLO and an image classification model based on the adder network AdderNet;
detecting the pet in a captured image with the YOLO model to obtain a target image containing only the pet;
and inputting the target image into the AdderNet-based image classification model to obtain the category to which the pet in the target image belongs.
The invention has the following beneficial effects: an image enhancement technique based on a generative adversarial network produces enhanced training images from only a small number of labeled pet images, and these enhanced training images are used to train the neural network models for pet detection and identification; the lightweight target detection model YOLO, which requires less computing and memory resources and infers faster than other target detection models, detects the pet in the captured image to obtain an image containing only the pet; and an image classification model based on the lightweight convolutional neural network AdderNet, with fewer parameters, lower computational cost and high accuracy, identifies the detected image, so the method can run on devices such as community smart cameras.
On the basis of the technical scheme, the invention can be further improved as follows:
further, the enhancing the pet marked image according to the generated confrontation network to obtain an enhanced training image comprises:
randomly generating a group of noise vectors and inputting the noise vectors into a generator for generating a countermeasure network to obtain an initial generated image;
inputting the initial generated image and the pet mark image into a discriminator for generating a countermeasure network to discriminate true and false so as to train the generation countermeasure network;
and after the training is finished, taking the target generation image and the pet mark image generated by the generator for generating the confrontation network as the enhanced training image.
The beneficial effect of this further scheme is: a generative adversarial network is used as a data enhancement technique to expand a limited set of labeled pet images into a large amount of training data, and this GAN-based image enhancement reduces the training cost of neural networks in practical applications.
Further, the method further comprises:
performing data enhancement processing on the labeled pet images to obtain enhanced processed images, the data enhancement processing including flipping, rotation, grayscale conversion and cropping;
and mixing the enhanced processed images, the target generated images and the labeled pet images to obtain the enhanced training images.
The beneficial effect of this further scheme is: data enhancement such as flipping, rotation, grayscale conversion and cropping increases the diversity of the enhanced training images and improves the accuracy and robustness of subsequent model training.
Further, the captured image detected by the YOLO model is obtained by:
taking an image shot by the current camera as the captured image; or taking an image acquired from a server as the captured image.
The beneficial effect of this further scheme is: acquiring images from either a camera or a server supports a wider range of usage scenarios.
Further, detecting the captured image with the YOLO model to obtain the target image containing only the pet comprises:
inputting the captured image into the YOLO model and outputting coordinate values and a confidence for the pet in the captured image;
and when the confidence is greater than a preset confidence, cropping the captured image according to the coordinate values of the pet corresponding to that confidence to obtain the target image.
The beneficial effect of this further scheme is: detecting the captured image with YOLO reduces the computing and memory demands on the camera and accelerates detection, enabling real-time detection, and the additional confidence check improves the reliability of the obtained target image.
Further, obtaining the category to which the pet in the target image belongs comprises:
training on the enhanced training images to obtain an image recognition model MobileNet;
identifying the category of the pet in the target image with the MobileNet to obtain a first recognition result;
inputting the target image into the AdderNet-based image classification model to obtain a second recognition result;
and when the first recognition result and the second recognition result are the same, determining the category to which the pet in the target image belongs.
The beneficial effect of this further scheme is: performing image recognition with both MobileNet and the AdderNet-based image classification model, and combining the two recognition results, improves the accuracy and reliability of the determined pet category.
Further, after determining the category to which the pet in the target image belongs, the method comprises:
when it is determined that the pet in the target image is unaccompanied, searching for the pet's contact information according to the category of the pet and the target image;
and contacting the corresponding contact person through the pet's contact information.
The beneficial effect of this further scheme is: looking up the pet's contact information enables better pet management.
In order to solve the above technical problem, the invention also provides a pet identification device, comprising an image enhancement module, a training module, an image detection module and an image identification module;
the image enhancement module is configured to enhance the original labeled pet images with a generative adversarial network to obtain enhanced training images;
the training module is configured to train on the enhanced training images to obtain the neural network model YOLO and an image classification model based on the adder network AdderNet;
the image detection module is configured to detect the pet in a captured image with the YOLO model to obtain a target image containing only the pet;
and the image identification module is configured to input the target image into the AdderNet-based image classification model to obtain the category to which the pet in the target image belongs.
In order to solve the above technical problem, the present invention further provides a pet identification device, wherein the pet identification device comprises a memory and a processor;
the processor is configured to execute one or more computer programs stored in the memory to implement the steps of the pet identification method as described above.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, wherein the computer-readable storage medium stores one or more computer programs, and the one or more computer programs are executable by one or more processors to implement the steps of the pet identification method as described above.
Drawings
Fig. 1 is a schematic flow chart of a pet identification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a generative adversarial network according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a pet identification device according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, fig. 1 is a pet identification method provided in an embodiment of the present invention, the pet identification method comprising:
S101, enhancing original labeled pet images with a generative adversarial network to obtain enhanced training images;
S102, training on the enhanced training images to obtain the neural network model YOLO and an AdderNet-based image classification model;
S103, detecting the pet in a captured image with the YOLO model to obtain a target image containing only the pet;
S104, identifying the pet in the target image with AdderNet and determining the category to which the pet belongs.
In this embodiment, an image enhancement technique based on a generative adversarial network produces enhanced training images from only a small number of labeled pet images, and these images are used to train the neural network models for pet detection and identification; the lightweight target detection model YOLO, which requires less computing and memory resources and infers faster than other target detection models, detects the pet in the captured image to obtain an image containing only the pet; and the image classification model based on the lightweight convolutional neural network AdderNet, with fewer parameters, lower computational cost and high accuracy, identifies the detected image, so the method can run on devices such as community smart cameras.
A neural network needs a large amount of manually labeled data to achieve excellent performance. Classifying pets in a smart community requires collecting a large amount of labeled pet data with roughly the same number of images per category, which carries a high labor cost; moreover, some pet breeds are rare, making it very difficult to acquire enough images of those categories. A generative adversarial network can use random noise to generate realistic, well-differentiated images, so this embodiment uses a generative adversarial network as a data enhancement technique to expand the limited labeled pet images into a large set of enhanced training images. As shown in fig. 2, the generative adversarial network consists of a generator and a discriminator: during training, the generator produces images intended to deceive the discriminator, while the discriminator tries to judge the generator's images as fake and the real images as real, so the generator and the discriminator form a dynamic game. Specifically, a group of noise vectors is randomly generated and input into the generator to obtain initial generated images; the initial generated images and the labeled pet images are input into the discriminator to discriminate real from fake, thereby training the generative adversarial network; and after training is finished, the target generated images produced by the generator and the labeled pet images together serve as the enhanced training images. For example, after the generative adversarial network is trained, nine times as many target generated images as original labeled pet images can be generated, and the generated images and the labeled pet images are combined into the enhanced training images. A minimal training sketch follows.
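The following PyTorch code is a minimal sketch of this generator/discriminator game; the layer sizes, learning rates, 64 x 64 image resolution and nine-fold generation ratio are illustrative assumptions, not details fixed by this disclosure.

import torch
import torch.nn as nn

NOISE_DIM = 100  # length of the randomly generated noise vectors

# Generator: maps a noise vector to a fake pet image (flattened 64x64 RGB).
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)
# Discriminator: scores an image as real (close to 1) or generated (close to 0).
D = nn.Sequential(
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One round of the game on a batch of labeled pet images (N, 3, 64, 64)."""
    n = real_images.size(0)
    real = real_images.view(n, -1)
    fake = G(torch.randn(n, NOISE_DIM))  # initial generated images from noise vectors

    # Discriminator step: judge real images as true, generated images as false.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(n, 1)) + bce(D(fake.detach()), torch.zeros(n, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator judge generated images as true.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(n, 1))
    loss_g.backward()
    opt_g.step()

# After training, G(torch.randn(9 * n_labeled, NOISE_DIM)) would yield the
# nine-fold set of target generated images to mix with the labeled images.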
In this embodiment, to ensure the diversity of the enhanced training images and thus better train the neural network model YOLO and the AdderNet-based image classification model, conventional data enhancement may also be applied: the labeled pet images undergo data enhancement processing to obtain enhanced processed images, the processing including flipping, rotation, grayscale conversion and cropping. Flipping mirrors the labeled pet image horizontally or vertically; rotation turns the labeled pet image by a fixed angle such as 90 or 180 degrees; grayscale conversion randomly converts the RGB labeled pet image to grayscale values; and cropping samples a random region of the labeled pet image and resizes it back to the original image size. The enhanced processed images, the target generated images produced by the generator and the labeled pet images are then mixed to obtain the enhanced training images; this data enhancement processing, together with the generator's target generated images, increases data diversity and improves the accuracy and robustness of subsequent model training. A sketch of these transformations follows.
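One way to realize these four operations is with torchvision transforms; the probabilities and the 224-pixel output size below are assumed values for illustration.

from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # horizontal flip
    transforms.RandomVerticalFlip(p=0.5),     # vertical flip
    transforms.RandomChoice([                 # rotate by 90 or 180 degrees, or not at all
        transforms.RandomRotation((90, 90)),
        transforms.RandomRotation((180, 180)),
        transforms.RandomRotation((0, 0)),
    ]),
    transforms.RandomGrayscale(p=0.3),        # randomly convert RGB to grayscale
    transforms.RandomResizedCrop(224),        # crop a random region, resize back
])

# enhanced = augment(labeled_pet_image)  # labeled_pet_image: a PIL.Image

Note that RandomGrayscale keeps three (equal) channels, so the enhanced processed images stay shape-compatible with the RGB training pipeline.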
In this embodiment, after the enhanced training images are obtained, they are used to train YOLO and the AdderNet-based image classification model.
In this embodiment, before the pet in the captured image is detected with the YOLO model, the captured image must be acquired, including but not limited to: taking an image shot by the current camera as the captured image, or taking an image obtained from a server as the captured image. It should be understood that, as long as the captured image contains a pet to be identified, it may be a historically stored pet image obtained from a server, a local database or the like, or a pet image shot in real time; in either case, it may be a pet image identified in a video stream or taken from a still image or photograph.
It should be noted that detecting the captured image with the YOLO model to obtain the target image containing only the pet comprises: inputting the captured image into the YOLO model and outputting coordinate values and a confidence for the pet in the captured image; and, when the confidence is greater than a preset confidence, cropping the captured image according to the coordinate values of the pet corresponding to that confidence to obtain the target image. The YOLO model comprises 24 convolutional layers and 2 fully-connected layers and divides the input image into S x S grid cells, each cell being responsible for detecting targets whose center points fall within it; each cell predicts B bounding boxes and a confidence for each box. The YOLO model outputs the confidence and four coordinate values per bounding box: the confidence is the confidence score of the box's prediction, and the four coordinate values are the center (x, y), width w and height h of the box, where the center coordinates (x, y) are offsets of the box center relative to the current grid cell, and w and h are ratios of width and height relative to the entire image. In this embodiment, the pet's position is calibrated through the four coordinate values of the bounding box to obtain an image containing only the pet, and the confidence is checked to further ensure the accuracy of subsequent pet identification. The confidence actually covers two aspects: the probability that the bounding box contains a target, and the accuracy of the bounding box itself. When the confidence is greater than the preset confidence threshold, the probability and accuracy of obtaining an image containing only the pet are both high, so the image region given by the corresponding coordinate values is taken as the target image, as sketched below.
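The following sketch shows this confidence check and crop; the detection tuple format (x, y, w, h, confidence), with (x, y) already converted to ratios over the whole image, and the 0.5 threshold are assumptions for illustration, not the patent's specification.

from typing import List, Tuple
import numpy as np

CONF_THRESHOLD = 0.5  # the preset confidence (assumed value)

def extract_pet_images(image: np.ndarray,
                       detections: List[Tuple[float, float, float, float, float]]
                       ) -> List[np.ndarray]:
    """Crop target images containing only a pet from a captured HxWx3 image."""
    img_h, img_w = image.shape[:2]
    targets = []
    for x, y, w, h, conf in detections:
        if conf <= CONF_THRESHOLD:
            continue  # keep only boxes whose confidence exceeds the preset value
        # Convert center/width/height ratios into pixel corner coordinates.
        left = max(int((x - w / 2) * img_w), 0)
        top = max(int((y - h / 2) * img_h), 0)
        right = int((x + w / 2) * img_w)
        bottom = int((y + h / 2) * img_h)
        targets.append(image[top:bottom, left:right])
    return targets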
Note that the convolutions widely used in deep neural networks essentially measure the similarity between the input features and the convolution filters, which involves a large number of multiplications between floating-point values. These large-scale multiplications in deep neural networks, particularly convolutional neural networks (CNNs), can be replaced through adder networks (AdderNet) by much cheaper additions to reduce computational cost. In this embodiment, the AdderNet-based image classification model may be a model that migrates AdderNet into the ResNet50 network; the target images are then input into this model, which identifies the category to which the pet in each target image belongs. A naive sketch of the adder operation follows.
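As an illustration of the adder operation (a naive sketch, not AdderNet's optimized implementation, and independent of the ResNet50 migration mentioned above), the similarity between each input patch and each filter can be computed as a negative L1 distance, so only subtractions and additions are involved.

import torch
import torch.nn.functional as F

def adder2d(x: torch.Tensor, weight: torch.Tensor, stride: int = 1) -> torch.Tensor:
    """AdderNet-style layer: x is (N, C_in, H, W), weight is (C_out, C_in, K, K)."""
    n = x.size(0)
    c_out, _, k, _ = weight.shape
    # Unfold the input into sliding patches: (N, C_in*K*K, L) for L output positions.
    patches = F.unfold(x, kernel_size=k, stride=stride)
    w = weight.view(c_out, -1)  # (C_out, C_in*K*K)
    # Negative L1 distance between every patch and every filter: no multiplications
    # between features and weights, only subtraction, absolute value and summation.
    out = -(patches.unsqueeze(1) - w[None, :, :, None]).abs().sum(dim=2)
    h_out = (x.size(2) - k) // stride + 1
    w_out = (x.size(3) - k) // stride + 1
    return out.view(n, c_out, h_out, w_out)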
In some embodiments, to improve the accuracy of pet identification, inputting the target image into the AdderNet-based image classification model and obtaining the category to which the pet belongs comprises: training on the enhanced training images to obtain an image recognition model MobileNet; identifying the category of the pet in the target image with the MobileNet to obtain a first recognition result; inputting the target image into the AdderNet-based image classification model to obtain a second recognition result; and, when the two results are the same, determining the category to which the pet in the target image belongs. MobileNet's main contribution is replacing standard convolutions with depthwise separable convolutions to address the computational efficiency and parameter count of convolutional networks: a depthwise separable convolution decomposes a standard convolution into a depthwise convolution and a pointwise (1 x 1) convolution, where the depthwise convolution applies one kernel per channel and the 1 x 1 convolution combines the per-channel outputs. The image containing only the pet is classified by MobileNet to obtain the first recognition result, which is then compared with the second recognition result from the AdderNet-based image classification model; when the two are the same, the finally recognized category is accurate. When the first and second recognition results differ, the target image may be cached to facilitate subsequent manual recognition. A sketch of the depthwise separable block and the agreement check follows.
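The sketch below shows a depthwise separable block of the kind MobileNet stacks, and the agreement check between the two recognition results; the channel counts, kernel size and the return-None caching policy are illustrative assumptions.

import torch
import torch.nn as nn

def depthwise_separable(c_in: int, c_out: int) -> nn.Sequential:
    """Standard convolution factored into depthwise + pointwise (1x1) convolution."""
    return nn.Sequential(
        # Depthwise: groups=c_in applies one 3x3 kernel to each input channel.
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),
        nn.ReLU(),
        # Pointwise: a 1x1 convolution combines the per-channel outputs.
        nn.Conv2d(c_in, c_out, kernel_size=1),
        nn.ReLU(),
    )

def classify_pet(target_image: torch.Tensor, mobilenet: nn.Module,
                 addernet_classifier: nn.Module):
    """Return the pet category only when both recognition results agree."""
    first = mobilenet(target_image).argmax(dim=1)             # first recognition result
    second = addernet_classifier(target_image).argmax(dim=1)  # second recognition result
    if torch.equal(first, second):
        return first
    return None  # disagreement: cache the target image for manual recognition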
In this embodiment, after the category to which the pet in the target image belongs is determined, the relevant contact person may be contacted for better pet management. Specifically, when it is determined that the pet in the target image is unaccompanied, the pet's contact information is looked up according to the pet's category and the target image, and the corresponding contact person is contacted through that information. It can be understood that community management records the contact information of each pet into a contact information base; whether the pet in the target image is unaccompanied can be determined from the video or image shot by the camera, and if so, the pet may be lost, in which case the pet's contact information is retrieved from the contact information base by filtering on the pet's category and appearance, and the corresponding contact person is then contacted. The pet's contact information may of course also be found from lost-pet notices posted online, again according to the pet's category and the target image.
In the pet identification method provided by this embodiment, an image is acquired by a device such as a smart camera, and the position of the pet in the image is detected and the pet's category identified by a lightweight target detection model YOLO and an image classification model based on the adder network AdderNet deployed on the device. The image enhancement technique based on a generative adversarial network reduces the training cost of the neural networks in practical applications; the limited computing and memory resources of smart cameras are taken into account while high detection and identification accuracy is maintained; and the lightweight target detection model and the image classification model with optimized convolutional network structures retain accuracy while inferring quickly, demand little computing and memory, and can therefore be deployed on smart camera equipment.
The present embodiment also provides a pet identification device, as shown in fig. 3, comprising an image enhancement module 301, a training module 302, an image detection module 303 and an image recognition module 304;
the image enhancement module 301 is configured to enhance the original labeled pet images with a generative adversarial network to obtain enhanced training images;
the training module 302 is configured to train on the enhanced training images to obtain the neural network model YOLO and an image classification model based on the adder network AdderNet;
the image detection module 303 is configured to detect the pet in a captured image with the YOLO model to obtain a target image containing only the pet;
and the image recognition module 304 is configured to input the target image into the AdderNet-based image classification model to obtain the category to which the pet in the target image belongs.
The image enhancement module 301 is specifically configured to randomly generate a group of noise vectors and input them into the generator of the generative adversarial network to obtain initial generated images; to input the initial generated images and the labeled pet images into the discriminator of the generative adversarial network to discriminate real from fake, so as to train the network; and, after training is finished, to take the target generated images produced by the generator together with the labeled pet images as the enhanced training images.
The image enhancement module 301 is further configured to perform data enhancement processing on the labeled pet images to obtain enhanced processed images, the processing including flipping, rotation, grayscale conversion and cropping, and to mix the enhanced processed images, the generator's images and the labeled pet images into the enhanced training images.
In this embodiment, the pet identification device further comprises an acquisition module configured to take an image shot by the current camera, or an image acquired from a server, as the captured image.
The image detection module 303 is specifically configured to input the captured image into the YOLO model and output coordinate values and a confidence for the pet in the captured image, and, when the confidence is greater than a preset confidence, to crop the captured image according to the coordinate values of the pet corresponding to that confidence to obtain the target image.
The image recognition module 304 is specifically configured to train on the enhanced training images to obtain an image recognition model MobileNet; to identify the category of the pet in the target image with the MobileNet to obtain a first recognition result; to input the target image into the AdderNet-based image classification model to obtain a second recognition result; and, when the two recognition results are the same, to determine the category to which the pet in the target image belongs.
In this embodiment, the pet identification device further comprises a contact module configured to look up the pet's contact information according to the pet's category and the target image when the pet in the target image is determined to be unaccompanied, and to contact the corresponding contact person through that information.
The embodiment also provides a pet identification device, which comprises a memory and a processor;
the processor is configured to execute one or more computer programs stored in the memory to implement the steps of the pet identification method according to the above-described embodiments, which are not described in detail herein.
It will be appreciated that the pet identification device and equipment may be a camera or a computer terminal. To apply the neural network models on smart cameras and raise the pet recognition level of the smart community, the training images are enhanced with the GAN-based image enhancement technique so that the corresponding neural network models can be trained from only a small number of training images, reducing the cost of collecting and labeling them; the lightweight target detection model YOLO detects pets in the captured images, lowering the computing and memory demands on the smart camera while accelerating detection and yielding images containing only pets; and the high-performance AdderNet-based image classification model then detects the pet category in those images for accurate identification.
The present embodiment further provides a computer-readable storage medium, where one or more computer programs are stored, and the one or more computer programs may be executed by one or more processors to implement the steps of the pet identification method according to the above embodiments, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The technical solutions provided by the embodiments of the present invention are described in detail above; specific examples are used herein to explain the principles and implementations of the invention, and the above descriptions of the embodiments are only intended to aid understanding of those principles. The invention is not limited to the above preferred embodiments; any modifications, equivalent replacements or improvements made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. A pet identification method, characterized in that the pet identification method comprises:
enhancing labeled pet images with a generative adversarial network to obtain enhanced training images;
training on the enhanced training images to obtain a lightweight target detection model YOLO and an image classification model based on the adder network AdderNet;
detecting the pet in a captured image with the YOLO model to obtain a target image containing only the pet;
and inputting the target image into the AdderNet-based image classification model to obtain the category to which the pet in the target image belongs.
2. The pet identification method of claim 1, wherein enhancing the labeled pet images with the generative adversarial network to obtain the enhanced training images comprises:
randomly generating a group of noise vectors and inputting them into the generator of the generative adversarial network to obtain initial generated images;
inputting the initial generated images and the labeled pet images into the discriminator of the generative adversarial network to discriminate real from fake, so as to train the generative adversarial network;
and after training is finished, taking the target generated images produced by the generator together with the labeled pet images as the enhanced training images.
3. The pet identification method of claim 2, further comprising:
performing data enhancement processing on the labeled pet images to obtain enhanced processed images, the data enhancement processing including flipping, rotation, grayscale conversion and cropping;
and mixing the enhanced processed images, the target generated images and the labeled pet images to obtain the enhanced training images.
4. The pet identification method of claim 1, wherein the captured image detected by the YOLO model is obtained by:
taking an image shot by the current camera as the captured image; or taking an image acquired from a server as the captured image.
5. The pet identification method of claim 1, wherein detecting the captured image with the YOLO model to obtain the target image containing only the pet comprises:
inputting the captured image into the YOLO model and outputting coordinate values and a confidence for the pet in the captured image;
and when the confidence is greater than a preset confidence, cropping the captured image according to the coordinate values of the pet corresponding to that confidence to obtain the target image.
6. The pet identification method of claim 1, wherein obtaining the category to which the pet in the target image belongs comprises:
training on the enhanced training images to obtain an image recognition model MobileNet;
identifying the category of the pet in the target image with the MobileNet to obtain a first recognition result;
inputting the target image into the AdderNet-based image classification model to obtain a second recognition result;
and when the first recognition result and the second recognition result are the same, determining the category to which the pet in the target image belongs.
7. The pet identification method according to any one of claims 1 to 6, wherein after determining the category to which the pet in the target image belongs, the method comprises:
when it is determined that the pet in the target image is unaccompanied, searching for the pet's contact information according to the category of the pet and the target image;
and contacting the corresponding contact person through the pet's contact information.
8. A pet identification device, characterized in that the pet identification device comprises an image enhancement module, a training module, an image detection module and an image identification module;
the image enhancement module is configured to enhance original labeled pet images with a generative adversarial network to obtain enhanced training images;
the training module is configured to train on the enhanced training images to obtain the neural network model YOLO and an image classification model based on the adder network AdderNet;
the image detection module is configured to detect the pet in a captured image with the YOLO model to obtain a target image containing only the pet;
and the image identification module is configured to input the target image into the AdderNet-based image classification model to obtain the category to which the pet in the target image belongs.
9. A pet identification device, characterized in that the pet identification device comprises a memory and a processor;
the processor is configured to execute one or more computer programs stored in the memory to implement the steps of the pet identification method according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more computer programs executable by one or more processors to implement the steps of the pet identification method according to any one of claims 1 to 7.
CN202110043107.4A 2021-01-13 2021-01-13 Pet identification method, device, equipment and computer readable storage medium Pending CN112699842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110043107.4A CN112699842A (en) 2021-01-13 2021-01-13 Pet identification method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110043107.4A CN112699842A (en) 2021-01-13 2021-01-13 Pet identification method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112699842A true CN112699842A (en) 2021-04-23

Family

ID=75514459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110043107.4A Pending CN112699842A (en) 2021-01-13 2021-01-13 Pet identification method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112699842A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392768A (en) * 2021-06-16 2021-09-14 新疆爱华盈通信息技术有限公司 Pet identification method
CN113657318A (en) * 2021-08-23 2021-11-16 平安科技(深圳)有限公司 Pet classification method, device, equipment and storage medium based on artificial intelligence

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171274A (en) * 2018-01-17 2018-06-15 百度在线网络技术(北京)有限公司 For identifying the method and apparatus of animal
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Based on the facial expression recognizing method and device for generating confrontation network data enhancing
CN109522925A (en) * 2018-09-30 2019-03-26 咪咕文化科技有限公司 A kind of image-recognizing method, device and storage medium
CN110096964A (en) * 2019-04-08 2019-08-06 厦门美图之家科技有限公司 A method of generating image recognition model
CN110378420A (en) * 2019-07-19 2019-10-25 Oppo广东移动通信有限公司 A kind of image detecting method, device and computer readable storage medium
CN111259823A (en) * 2020-01-19 2020-06-09 人民中科(山东)智能技术有限公司 Pornographic image identification method based on convolutional neural network
CN111310815A (en) * 2020-02-07 2020-06-19 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and storage medium
CN111753697A (en) * 2020-06-17 2020-10-09 新疆爱华盈通信息技术有限公司 Intelligent pet management system and management method thereof
CN111914997A (en) * 2020-06-30 2020-11-10 华为技术有限公司 Method for training neural network, image processing method and device
CN112200187A (en) * 2020-10-16 2021-01-08 广州云从凯风科技有限公司 Target detection method, device, machine readable medium and equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171274A (en) * 2018-01-17 2018-06-15 百度在线网络技术(北京)有限公司 For identifying the method and apparatus of animal
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Based on the facial expression recognizing method and device for generating confrontation network data enhancing
CN109522925A (en) * 2018-09-30 2019-03-26 咪咕文化科技有限公司 A kind of image-recognizing method, device and storage medium
CN110096964A (en) * 2019-04-08 2019-08-06 厦门美图之家科技有限公司 A method of generating image recognition model
CN110378420A (en) * 2019-07-19 2019-10-25 Oppo广东移动通信有限公司 A kind of image detecting method, device and computer readable storage medium
CN111259823A (en) * 2020-01-19 2020-06-09 人民中科(山东)智能技术有限公司 Pornographic image identification method based on convolutional neural network
CN111310815A (en) * 2020-02-07 2020-06-19 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and storage medium
CN111753697A (en) * 2020-06-17 2020-10-09 新疆爱华盈通信息技术有限公司 Intelligent pet management system and management method thereof
CN111914997A (en) * 2020-06-30 2020-11-10 华为技术有限公司 Method for training neural network, image processing method and device
CN112200187A (en) * 2020-10-16 2021-01-08 广州云从凯风科技有限公司 Target detection method, device, machine readable medium and equipment

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
俞勇 et al.: "Introduction to Artificial Intelligence Technology", 30 September 2019 *
孙建国: "Digital Intelligent Radiotherapy", 31 December 2019 *
孙彦 et al.: "Cat and Dog Image Recognition Based on the SSD_MobileNet_v1 Network", Journal of Tianjin University of Technology and Education *
王德兴 et al.: "Aquatic Animal Classification Method Based on DCGAN Data Augmentation", Fishery Modernization *
石瑞生: "Big Data Security and Privacy Protection" *
贾振卿 et al.: "Marine Animal Target Detection Based on YOLO and Image Enhancement", Electronic Measurement Technology *
陈慧岩: "Theory and Application of Intelligent Vehicles", 31 July 2018 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392768A (en) * 2021-06-16 2021-09-14 新疆爱华盈通信息技术有限公司 Pet identification method
CN113657318A (en) * 2021-08-23 2021-11-16 平安科技(深圳)有限公司 Pet classification method, device, equipment and storage medium based on artificial intelligence
CN113657318B (en) * 2021-08-23 2024-05-07 平安科技(深圳)有限公司 Pet classification method, device, equipment and storage medium based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN107563372B (en) License plate positioning method based on deep learning SSD frame
CN110569905B (en) Fine-grained image classification method based on generation of confrontation network and attention network
CN111191067A (en) Picture book identification method, terminal device and computer readable storage medium
CN111814690B (en) Target re-identification method, device and computer readable storage medium
CN112348117A (en) Scene recognition method and device, computer equipment and storage medium
CN112633354B (en) Pavement crack detection method, device, computer equipment and storage medium
CN112699842A (en) Pet identification method, device, equipment and computer readable storage medium
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN111091057A (en) Information processing method and device and computer readable storage medium
CN109426793A (en) A kind of image behavior recognition methods, equipment and computer readable storage medium
CN111144425B (en) Method and device for detecting shot screen picture, electronic equipment and storage medium
Feng et al. A novel saliency detection method for wild animal monitoring images with WMSN
CN113065379B (en) Image detection method and device integrating image quality and electronic equipment
CN113870254A (en) Target object detection method and device, electronic equipment and storage medium
CN115115863A (en) Water surface multi-scale target detection method, device and system and storage medium
CN111881775B (en) Real-time face recognition method and device
CN112257628A (en) Method, device and equipment for identifying identities of outdoor competition athletes
CN116977895A (en) Stain detection method and device for universal camera lens and computer equipment
Gou et al. License plate recognition using MSER and HOG based on ELM
EP4332910A1 (en) Behavior detection method, electronic device, and computer readable storage medium
CN115018886A (en) Motion trajectory identification method, device, equipment and medium
CN110751034B (en) Pedestrian behavior recognition method and terminal equipment
CN115188031A (en) Fingerprint identification method, computer program product, storage medium and electronic device
CN111382741B (en) Method, system and equipment for detecting text in natural scene picture
CN114445691A (en) Model training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210423