CN114267009A - Pet excretion behavior processing method and device, electronic equipment and storage medium - Google Patents

Pet excretion behavior processing method and device, electronic equipment and storage medium

Info

Publication number
CN114267009A
CN114267009A (application number CN202111447736.XA)
Authority
CN
China
Prior art keywords
pet
information
image
target image
acquired
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111447736.XA
Other languages
Chinese (zh)
Inventor
王冬晨
蒋君楠
陈吉胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd filed Critical Unisound Intelligent Technology Co Ltd
Priority to CN202111447736.XA
Publication of CN114267009A
Legal status: Pending (current)

Abstract

The embodiments of the application disclose a pet excretion behavior processing method and apparatus, an electronic device, and a storage medium. One embodiment of the method comprises: acquiring images collected by a camera in a preset area; inputting the collected images into a pre-trained pet excretion behavior recognition model; determining, according to the output of the model, whether a target image showing pet excretion behavior exists among the collected images; and, in response to determining that such a target image exists, acquiring the position information of the camera that collected it. This embodiment provides an image-recognition-based mechanism for locating pet excretion, improving the efficiency of handling pet excretion behavior.

Description

Pet excretion behavior processing method and device, electronic equipment and storage medium
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a pet excretion behavior processing method and apparatus, an electronic device, and a storage medium.
Background
Pets increasingly appear in households as companions. As the number of pets grows, pet management is becoming more and more important, ranging from vaccination and pet certificates to, in some regions or countries, license plates or implanted monitoring chips. In community management scenarios, pets relieving themselves anywhere has gradually become a major problem for property management.
Disclosure of Invention
The embodiment of the application provides a pet excretion behavior processing method and device, electronic equipment and a storage medium.
In a first aspect, some embodiments of the present application provide a pet excretion behavior processing method, comprising: acquiring images collected by a camera in a preset area; inputting the collected images into a pre-trained pet excretion behavior recognition model; determining, according to the output of the model, whether a target image showing pet excretion behavior exists among the collected images; and, in response to determining that such a target image exists, acquiring the position information of the camera that collected it.
In some embodiments, the pet excretion behavior recognition model is trained as follows: acquiring a sample set, wherein the sample set comprises sample images and label information associated with each sample image, the label information indicating whether pet excretion behavior exists in the sample image; selecting sample images and label information from the sample set, and performing the following training steps: inputting the selected sample images into an initial model to obtain prediction information indicating whether pet excretion behavior exists in each sample image; comparing the prediction information with the label information; determining, according to the comparison result, whether the initial model satisfies a preset qualification condition; and, in response to determining that the initial model satisfies the qualification condition, using the initial model as the pet excretion behavior recognition model.
In some embodiments, after acquiring the position information of the camera that collected the target image, the method further includes: querying, within the preset area, for a speaker whose positioning information is close to the position information; and playing a voice prompt through the queried speaker, the voice prompt requesting that the pet waste be cleaned up.
In some embodiments, after acquiring the position information of the camera that collected the target image, the method further includes: determining associated cleaning staff information according to the position information; and pushing the target image and the position information to the cleaning staff's terminal according to the cleaning staff information.
In some embodiments, determining the associated cleaning staff information according to the position information includes: determining, based on images collected by the same camera within a preset time period after the target image was collected, whether the waste produced by the pet excretion behavior in the target image has been cleaned up; and, in response to determining that it has not, determining the associated cleaning staff information according to the position information.
In a second aspect, some embodiments of the present application provide a pet excretion behavior processing apparatus, comprising: a first acquisition unit configured to acquire images collected by a camera in a preset area; an input unit configured to input the collected images into a pre-trained pet excretion behavior recognition model; a first determination unit configured to determine, according to the output of the model, whether a target image showing pet excretion behavior exists among the collected images; and a second acquisition unit configured to, in response to determining that such a target image exists, acquire the position information of the camera that collected it.
In some embodiments, the apparatus further comprises a model training unit configured to: acquire a sample set, wherein the sample set comprises sample images and label information associated with each sample image, the label information indicating whether pet excretion behavior exists in the sample image; select sample images and label information from the sample set, and perform the following training steps: inputting the selected sample images into an initial model to obtain prediction information indicating whether pet excretion behavior exists in each sample image; comparing the prediction information with the label information; determining, according to the comparison result, whether the initial model satisfies a preset qualification condition; and, in response to determining that the initial model satisfies the qualification condition, using the initial model as the pet excretion behavior recognition model.
In some embodiments, the apparatus further comprises: a querying unit configured to query, within the preset area, for a speaker whose positioning information is close to the position information; and a playing unit configured to play a voice prompt through the queried speaker, the voice prompt requesting that the pet waste be cleaned up.
In some embodiments, the apparatus further comprises: a second determination unit configured to determine associated cleaning staff information according to the position information; and a pushing unit configured to push the target image and the position information to the cleaning staff's terminal according to the cleaning staff information.
In some embodiments, the second determination unit is further configured to: determine, based on images collected by the same camera within a preset time period after the target image was collected, whether the waste produced by the pet excretion behavior in the target image has been cleaned up; and, in response to determining that it has not, determine the associated cleaning staff information according to the position information.
In a third aspect, some embodiments of the present application provide an electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method described in the first aspect.
In a fourth aspect, some embodiments of the present application provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method as described above in the first aspect.
According to the pet excretion behavior processing method and apparatus, the electronic device, and the storage medium provided by the embodiments of the application, images collected by a camera in a preset area are acquired; the collected images are input into a pre-trained pet excretion behavior recognition model; whether a target image showing pet excretion behavior exists among the collected images is determined according to the output of the model; and, in response to determining that such a target image exists, the position information of the camera that collected it is acquired. This provides an image-recognition-based mechanism for locating pet excretion and improves the efficiency of handling pet excretion behavior.
Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:
FIG. 1 is a diagram of an exemplary system architecture to which some embodiments of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a pet excretion behavior processing method according to the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a pet excretion behavior processing apparatus according to the present application;
FIG. 4 is a block diagram of a computer system suitable for implementing a server or terminal of some embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the pet excretion behavior processing method or apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a camera 101, terminal devices 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the camera 101, the terminal devices 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The camera 101, the terminal devices 102, 103 may interact with the server 105 through the network 104 to receive or send messages or the like. Various applications, such as an internet of things application, an image acquisition application, an image processing application, an electronic commerce application, a search application, and the like, may be installed on the camera 101 and the terminal devices 102 and 103.
The terminal devices 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module; this is not specifically limited herein.
The camera 101 may transmit collected video to the terminal devices 102 and 103 or to the server 105, or may itself run a pet behavior recognition algorithm to detect pet excretion behavior. The server 105 may be a server providing various services, for example a background server supporting the applications installed on the camera 101 and the terminal devices 102 and 103. The server 105 may acquire images collected by the camera in a preset area; input the collected images into a pre-trained pet excretion behavior recognition model; determine, according to the output of the model, whether a target image showing pet excretion behavior exists among the collected images; and, in response to determining that such a target image exists, acquire the position information of the camera that collected it.
The pet excretion behavior processing method provided in the embodiments of the present application may be executed by the server 105, the camera 101, or the terminal devices 102 and 103; accordingly, the pet excretion behavior processing apparatus may be disposed in the server 105, the camera 101, or the terminal devices 102 and 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module; this is not specifically limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of a pet excretion behavior processing method according to the present application is illustrated. The pet excretion behavior processing method comprises the following steps:
Step 201: acquiring images collected by a camera in a preset area.
In this embodiment, the execution subject of the pet excretion behavior processing method (e.g., the camera, server, or terminal shown in Fig. 1) may first acquire images collected by cameras in a preset area. The preset area may be a public area inside a residential community, or an area such as a park or square where pet excretion behavior needs to be monitored. The images collected by a camera may include pictures or video.
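Step 201 amounts to sampling frames from one or more network cameras. The following is a minimal sketch of that sampling loop, assuming OpenCV and RTSP camera streams; the camera IDs, stream URLs, and sampling interval are illustrative assumptions rather than details from the patent.

```python
# Hedged sketch: pull frames from community cameras over RTSP with OpenCV.
import cv2

CAMERAS = {
    "cam_01": "rtsp://192.168.1.10/stream1",  # hypothetical stream addresses
    "cam_02": "rtsp://192.168.1.11/stream1",
}

def sample_frames(camera_id: str, every_n: int = 30):
    """Yield every n-th frame collected by one camera in the preset area."""
    cap = cv2.VideoCapture(CAMERAS[camera_id])
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if count % every_n == 0:
            yield camera_id, frame
        count += 1
    cap.release()
```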
Step 202: inputting the collected images into a pre-trained pet excretion behavior recognition model.
In this embodiment, the execution subject may input the images acquired in step 201 into a pre-trained pet excretion behavior recognition model. The model characterizes the correspondence between an image and whether that image includes pet excretion behavior, and may be deployed on a smart surveillance camera.
In some optional implementations of this embodiment, the pet excretion behavior recognition model may include a feature extraction part and a correspondence part. The feature extraction part extracts features from the input image to generate a feature vector; it may be, for example, a convolutional neural network or a deep neural network. The correspondence part may be a correspondence table, prepared in advance by technicians from statistics over a large number of feature vectors and indication information, that stores correspondences between feature vectors and indication information; or it may be a trained classification model. The pet excretion behavior recognition model may thus first use the feature extraction part to extract features of the image obtained in step 201 and generate a target feature vector, and then determine through the correspondence part whether the image includes excretion behavior. Alternatively, the pet excretion behavior recognition model may use a two-stream method or other deep learning models commonly used for behavior recognition.
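As one concrete reading of the two-part structure above, the sketch below pairs a convolutional feature extraction part with a classifier head serving as the correspondence part. The backbone choice (ResNet-18) and the two-class output convention are assumptions for illustration, not details fixed by the patent.

```python
# Hedged sketch of the two-part recognizer: CNN features + classification head.
import torch
import torch.nn as nn
from torchvision import models

class ExcretionBehaviorModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # feature extraction part -> 512-d feature vector
        self.features = backbone
        self.classifier = nn.Linear(512, 2)  # correspondence part: behavior vs. no behavior

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```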
In some optional implementations of this embodiment, the pet excretion behavior recognition model is trained as follows: acquiring a sample set, wherein the sample set comprises sample images and label information associated with each sample image, the label information indicating whether pet excretion behavior exists in the sample image; selecting sample images and label information from the sample set, and performing the following training steps: inputting the selected sample images into an initial model to obtain prediction information indicating whether pet excretion behavior exists in each sample image; comparing the prediction information with the label information; determining, according to the comparison result, whether the initial model satisfies a preset qualification condition; and, in response to determining that the initial model satisfies the qualification condition, using the initial model as the pet excretion behavior recognition model.
In this implementation, the sample images may include images of the defecation and urination postures of male and female dogs. The initial model may be a neural network model or another model commonly used for behavior recognition; its network parameters may be adjusted using a back-propagation algorithm combined with gradient descent (e.g., stochastic gradient descent).
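A compact training loop matching the steps above might look as follows: predictions are compared with labels, parameters are updated by back propagation and stochastic gradient descent, and training stops once a qualification condition is met. The data loader, accuracy threshold, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of the training procedure described above (PyTorch).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model: nn.Module, sample_set: DataLoader,
          qualify_acc: float = 0.95, max_epochs: int = 50) -> nn.Module:
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for _ in range(max_epochs):
        correct = total = 0
        for images, labels in sample_set:        # labels: 1 = excretion behavior present
            prediction = model(images)           # prediction information
            loss = criterion(prediction, labels) # compare with label information
            optimizer.zero_grad()
            loss.backward()                      # back propagation
            optimizer.step()                     # stochastic gradient descent update
            correct += (prediction.argmax(1) == labels).sum().item()
            total += labels.numel()
        if correct / total >= qualify_acc:       # preset qualification condition
            break                                # model becomes the recognizer
    return model
```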
Step 203: determining, according to the output of the pet excretion behavior recognition model, whether a target image showing pet excretion behavior exists among the collected images.
In this embodiment, the execution subject may determine, according to the output of the pet excretion behavior recognition model in step 202, whether a target image showing pet excretion behavior exists among the collected images. The model's output indicates, for each image, whether it shows pet excretion behavior.
Step 204: in response to determining that a target image showing pet excretion behavior exists among the collected images, acquiring the position information of the camera that collected the target image.
In this embodiment, in response to the determination in step 203 that a target image showing pet excretion behavior exists among the collected images, the execution subject may acquire the position information of the camera that collected the target image.
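Steps 203 and 204 can be read as thresholding the model output and then looking up the flagged camera's registered position. The sketch below assumes the model from the earlier sketches, a softmax score threshold, and a hypothetical camera position table; none of these specifics come from the patent text.

```python
# Hedged sketch of steps 203-204: flag target images, then locate the camera.
import torch

CAMERA_POSITIONS = {"cam_01": (12.5, 3.0), "cam_02": (40.2, 18.7)}  # hypothetical site coordinates

def find_target(model, camera_id: str, frame_tensor: torch.Tensor, threshold: float = 0.8):
    """Return the camera position if the frame is a target image, else None."""
    with torch.no_grad():
        probs = torch.softmax(model(frame_tensor.unsqueeze(0)), dim=1)
    if probs[0, 1].item() >= threshold:      # target image of pet excretion behavior
        return CAMERA_POSITIONS[camera_id]   # position info of the acquiring camera
    return None
```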
In some optional implementations of this embodiment, after acquiring the position information of the camera that collected the target image, the method further includes: querying, within the preset area, for a speaker whose positioning information is close to the position information; and playing a voice prompt through the queried speaker, the voice prompt requesting that the pet waste be cleaned up. As an example, an Internet-of-Things cloud management platform may push the position information and the monitoring content to the community's background-music system; the music system locates a nearby music-playing speaker according to the position information and, according to the configured content, broadcasts a voice prompt using text-to-speech technology, for example: "Dear owner, to protect the community environment, please clean up your pet's waste promptly; cleaning tools are placed beside the speaker." This implementation reminds the pet owner at the first moment through the voice prompt so that the waste is handled in time, improving the efficiency of handling pet excretion behavior.
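The speaker lookup and broadcast can be sketched as a nearest-neighbor search over the speakers' registered positions followed by a text-to-speech call. The speaker table, the prompt text, and the `tts_client` interface are hypothetical stand-ins for whatever the community music system actually provides.

```python
# Hedged sketch: find the nearest speaker and broadcast a voice prompt.
import math

SPEAKERS = {"spk_A": (10.0, 5.0), "spk_B": (42.0, 20.0)}  # assumed positioning info

PROMPT = ("Dear owner, to protect the community environment, please clean up "
          "your pet's waste promptly; cleaning tools are placed beside the speaker.")

def nearest_speaker(camera_pos: tuple[float, float]) -> str:
    return min(SPEAKERS, key=lambda s: math.dist(SPEAKERS[s], camera_pos))

def broadcast(camera_pos: tuple[float, float], tts_client) -> None:
    speaker_id = nearest_speaker(camera_pos)
    audio = tts_client.synthesize(PROMPT)  # hypothetical text-to-speech call
    tts_client.play(speaker_id, audio)     # hypothetical playback call
```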
In some optional implementations of this embodiment, after acquiring the position information of the camera that collected the target image, the method further includes: determining associated cleaning staff information according to the position information; and pushing the target image and the position information to the cleaning staff's terminal according to the cleaning staff information. In this implementation, the target image and the position information may be pushed to the cleaning staff's terminal in the form of a work order. The cleaning staff information may include the staff member's terminal number, account, and the like, and may be determined from preset information such as each staff member's working range and working hours. By pushing the target image and the position information to the cleaning staff's terminal, this implementation notifies the staff promptly so that the pet excretion behavior is handled in time, which improves processing efficiency.
In some optional implementations of this embodiment, determining the associated cleaning staff information according to the position information includes: determining, based on images collected by the same camera within a preset time period after the target image was collected, whether the waste produced by the pet excretion behavior in the target image has been cleaned up; and, in response to determining that it has not, determining the associated cleaning staff information according to the position information. The preset time period can be set according to actual needs, for example 1 to 10 minutes. When the pet owner does not clean up, the cleaning staff can quickly and specifically clean the location where the pet excretion was detected, safeguarding community hygiene.
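Putting the last two implementations together: wait out the preset period, re-check the same camera's view, and only if the waste is still present push a work order to the cleaner whose working range covers the position. The roster, the `frame_source` and `waste_detector` callables, and the `push_client` interface are illustrative assumptions.

```python
# Hedged sketch: re-check after a preset period, then dispatch a work order.
import time

CLEANERS = [  # hypothetical roster: terminal account and covered x-range of the site
    {"account": "cleaner_01", "range": (0.0, 25.0)},
    {"account": "cleaner_02", "range": (25.0, 60.0)},
]

def dispatch_if_uncleaned(camera_id, position, target_image, frame_source,
                          waste_detector, push_client, wait_seconds: int = 300):
    """frame_source(camera_id) -> latest frame; waste_detector(frame) -> bool."""
    time.sleep(wait_seconds)            # preset time period, e.g. 5 minutes
    frame = frame_source(camera_id)     # image collected by the same camera
    if not waste_detector(frame):       # waste already cleaned up by the owner
        return
    for cleaner in CLEANERS:
        lo, hi = cleaner["range"]
        if lo <= position[0] < hi:      # position falls within working range
            push_client.send_work_order(  # hypothetical push interface
                account=cleaner["account"],
                image=target_image,
                position=position,
            )
            return
```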
According to the method provided by the embodiments of the application, images collected by a camera in a preset area are acquired; the collected images are input into a pre-trained pet excretion behavior recognition model; whether a target image showing pet excretion behavior exists among the collected images is determined according to the output of the model; and, in response to determining that such a target image exists, the position information of the camera that collected it is acquired. This provides an image-recognition-based mechanism for locating pet excretion and improves the efficiency of handling pet excretion behavior.
With further reference to Fig. 3, as an implementation of the method shown in the above figures, the present application provides an embodiment of a pet excretion behavior processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus is applicable to various electronic devices.
As shown in Fig. 3, the pet excretion behavior processing apparatus 300 of this embodiment includes: a first acquisition unit 301, an input unit 302, a first determination unit 303, and a second acquisition unit 304. The first acquisition unit is configured to acquire images collected by a camera in a preset area; the input unit is configured to input the collected images into a pre-trained pet excretion behavior recognition model; the first determination unit is configured to determine, according to the output of the model, whether a target image showing pet excretion behavior exists among the collected images; and the second acquisition unit is configured to, in response to determining that such a target image exists, acquire the position information of the camera that collected it.
In this embodiment, for the specific processing of the first acquisition unit 301, the input unit 302, the first determination unit 303, and the second acquisition unit 304 of the pet excretion behavior processing apparatus 300, reference may be made to steps 201, 202, 203, and 204 in the embodiment corresponding to Fig. 2.
In some optional implementations of this embodiment, the apparatus further comprises a model training unit configured to: acquire a sample set, wherein the sample set comprises sample images and label information associated with each sample image, the label information indicating whether pet excretion behavior exists in the sample image; select sample images and label information from the sample set, and perform the following training steps: inputting the selected sample images into an initial model to obtain prediction information indicating whether pet excretion behavior exists in each sample image; comparing the prediction information with the label information; determining, according to the comparison result, whether the initial model satisfies a preset qualification condition; and, in response to determining that the initial model satisfies the qualification condition, using the initial model as the pet excretion behavior recognition model.
In some optional implementations of this embodiment, the apparatus further comprises: a querying unit configured to query, within the preset area, for a speaker whose positioning information is close to the position information; and a playing unit configured to play a voice prompt through the queried speaker, the voice prompt requesting that the pet waste be cleaned up.
In some optional implementations of this embodiment, the apparatus further comprises: a second determination unit configured to determine associated cleaning staff information according to the position information; and a pushing unit configured to push the target image and the position information to the cleaning staff's terminal according to the cleaning staff information.
In some optional implementations of this embodiment, the second determination unit is further configured to: determine, based on images collected by the same camera within a preset time period after the target image was collected, whether the waste produced by the pet excretion behavior in the target image has been cleaned up; and, in response to determining that it has not, determine the associated cleaning staff information according to the position information.
According to the apparatus provided by the embodiments of the application, images collected by a camera in a preset area are acquired; the collected images are input into a pre-trained pet excretion behavior recognition model; whether a target image showing pet excretion behavior exists among the collected images is determined according to the output of the model; and, in response to determining that such a target image exists, the position information of the camera that collected it is acquired. This provides an image-recognition-based mechanism for locating pet excretion and improves the efficiency of handling pet excretion behavior.
Referring now to Fig. 4, a block diagram of a computer system 400 suitable for implementing a server or terminal of an embodiment of the present application is shown. The server or terminal shown in Fig. 4 is only an example and should not limit the functions or scope of use of the embodiments of the present application.
As shown in Fig. 4, the computer system 400 includes a central processing unit (CPU) 401 that can perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the system 400. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components may be connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a display device such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as necessary, so that a computer program read therefrom is installed into the storage section 408 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411. When executed by the central processing unit (CPU) 401, the computer program performs the above-described functions defined in the method of the present application. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a propagated data signal with computer-readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated signal may take many forms, including but not limited to electromagnetic or optical signals, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, fiber optic cable, RF, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the C language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including a first acquisition unit, an input unit, a first determination unit, and a second acquisition unit. The names of these units do not in some cases limit the units themselves; for example, the first acquisition unit may also be described as "a unit configured to acquire images collected by a camera in a preset area".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire images collected by a camera in a preset area; input the collected images into a pre-trained pet excretion behavior recognition model; determine, according to the output of the model, whether a target image showing pet excretion behavior exists among the collected images; and, in response to determining that such a target image exists, acquire the position information of the camera that collected it.
The above description presents only preferred embodiments of the application and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention disclosed herein is not limited to the particular combination of features described above, and also covers other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example arrangements in which the above features are replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A pet excretion behavior processing method, comprising:
acquiring images collected by a camera in a preset area;
inputting the collected images into a pre-trained pet excretion behavior recognition model;
determining, according to the output of the pet excretion behavior recognition model, whether a target image showing pet excretion behavior exists among the collected images;
and, in response to determining that a target image showing pet excretion behavior exists among the collected images, acquiring position information of the camera that collected the target image.
2. The method of claim 1, wherein the pet excretion behavior recognition model is trained by:
acquiring a sample set, wherein the sample set comprises sample images and label information associated with each sample image, the label information indicating whether pet excretion behavior exists in the sample image;
selecting sample images and label information from the sample set, and performing the following training steps: inputting the selected sample images into an initial model to obtain prediction information indicating whether pet excretion behavior exists in each sample image; comparing the prediction information with the label information; determining, according to the comparison result, whether the initial model satisfies a preset qualification condition; and, in response to determining that the initial model satisfies the qualification condition, using the initial model as the pet excretion behavior recognition model.
3. The method of claim 1, wherein after acquiring the position information of the camera that collected the target image, the method further comprises:
querying, within the preset area, for a speaker whose positioning information is close to the position information;
and playing a voice prompt through the queried speaker, the voice prompt requesting that the pet waste be cleaned up.
4. The method according to any one of claims 1-3, wherein after acquiring the position information of the camera that collected the target image, the method further comprises:
determining associated cleaning staff information according to the position information;
and pushing the target image and the position information to a terminal of the cleaning staff according to the cleaning staff information.
5. The method of claim 4, wherein determining the associated cleaning staff information according to the position information comprises:
determining, based on images collected by the camera that collected the target image within a preset time period after the target image was collected, whether the waste produced by the pet excretion behavior in the target image has been cleaned up;
and, in response to determining that the waste produced by the pet excretion behavior in the target image has not been cleaned up, determining the associated cleaning staff information according to the position information.
6. A pet excretion behavior processing apparatus, comprising:
a first acquisition unit configured to acquire images collected by a camera in a preset area;
an input unit configured to input the collected images into a pre-trained pet excretion behavior recognition model;
a first determination unit configured to determine, according to the output of the pet excretion behavior recognition model, whether a target image showing pet excretion behavior exists among the collected images;
and a second acquisition unit configured to acquire, in response to determining that a target image showing pet excretion behavior exists among the collected images, position information of the camera that collected the target image.
7. The apparatus of claim 6, further comprising a model training unit configured to:
acquire a sample set, wherein the sample set comprises sample images and label information associated with each sample image, the label information indicating whether pet excretion behavior exists in the sample image;
select sample images and label information from the sample set, and perform the following training steps: inputting the selected sample images into an initial model to obtain prediction information indicating whether pet excretion behavior exists in each sample image; comparing the prediction information with the label information; determining, according to the comparison result, whether the initial model satisfies a preset qualification condition; and, in response to determining that the initial model satisfies the qualification condition, using the initial model as the pet excretion behavior recognition model.
8. The apparatus of claim 6, further comprising:
a querying unit configured to query, within the preset area, for a speaker whose positioning information is close to the position information;
and a playing unit configured to play a voice prompt through the queried speaker, the voice prompt requesting that the pet waste be cleaned up.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN202111447736.XA 2021-11-30 2021-11-30 Pet excretion behavior processing method and device, electronic equipment and storage medium Pending CN114267009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111447736.XA 2021-11-30 2021-11-30 Pet excretion behavior processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114267009A 2022-04-01

Family

ID=80826276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111447736.XA (pending) 2021-11-30 2021-11-30 Pet excretion behavior processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114267009A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115152642A (en) * 2022-07-27 2022-10-11 新疆华芯云图网络科技有限公司 AI smart pet induced lavatory

Similar Documents

Publication Publication Date Title
CN108922622B (en) Animal health monitoring method, device and computer readable storage medium
CN108416323B (en) Method and device for recognizing human face
CN108520220B (en) Model generation method and device
CN108960316B (en) Method and apparatus for generating a model
WO2022116322A1 (en) Method and apparatus for generating anomaly detection model, and anomaly event detection method and apparatus
CN111523640B (en) Training method and device for neural network model
CN111012261A (en) Sweeping method and system based on scene recognition, sweeping equipment and storage medium
CN109086780B (en) Method and device for detecting electrode plate burrs
CN111467074B (en) Method and device for detecting livestock status
CN108509921B (en) Method and apparatus for generating information
CN108229375B (en) Method and device for detecting face image
CN110209658B (en) Data cleaning method and device
CN111598006A (en) Method and device for labeling objects
CN114267009A (en) Pet excretion behavior processing method and device, electronic equipment and storage medium
CN111243711A (en) Feature identification in medical imaging
CN111860071A (en) Method and device for identifying an item
CN108038473B (en) Method and apparatus for outputting information
CN113052075A (en) Environment monitoring method, device, terminal and medium for pasture
CN109961060B (en) Method and apparatus for generating crowd density information
Tu et al. Segmentation of sows in farrowing pens
JP7070665B2 (en) Information processing equipment, control methods, and programs
CN111027376A (en) Method and device for determining event map, electronic equipment and storage medium
CN113376160A (en) Method and device for recognizing and processing animal excrement by sweeping robot
CN113537148B (en) Human body action recognition method and device, readable storage medium and electronic equipment
CN115393423A (en) Target detection method and device

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination