CN111597942A - Smart pet training and accompanying method, device, equipment and storage medium

Info

Publication number
CN111597942A (application number CN202010381854.4A)
Authority
CN
China
Prior art keywords
pet, information, posture, video, image
Legal status
Granted
Application number
CN202010381854.4A
Other languages
Chinese (zh)
Other versions
CN111597942B
Inventor
娄军 (Lou Jun)
鹿鹏 (Lu Peng)
Current Assignee
Global Ai & Display Co ltd
Original Assignee
Global Ai & Display Co ltd
Priority date
Filing date
Publication date
Application filed by Global Ai & Display Co ltd filed Critical Global Ai & Display Co ltd
Priority to CN202010381854.4A
Publication of CN111597942A
Application granted
Publication of CN111597942B
Current status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K15/00 Devices for taming animals, e.g. nose-rings or hobbles; Devices for overturning animals in general; Training or exercising equipment; Covering boxes
    • A01K15/02 Training or exercising equipment, e.g. mazes or labyrinths for animals; Electric shock devices; Toys specially adapted for animals
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L17/00 Speaker identification or verification
    • G10L17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques specially adapted for estimating an emotional state
    • G10L2015/223 Execution procedure of a spoken command

Abstract

The invention provides a smart pet training and accompanying method, device, equipment and storage medium, wherein the method comprises the following steps: S11: pre-storing video information and voice information; S12: receiving the user's selection and presetting or remotely setting the playing time and number of plays; S13: playing the voice information to call the pet and issue instruction information, while playing the video information to guide the pet to pay attention to and watch it; S14: collecting image information, inputting it into a pre-trained image recognition model, and recognizing the pet's posture and actions; S15: matching the recognition result against the target pet posture action, outputting positive feedback if they match and negative feedback otherwise; S16: repeating S13-S15. The invention recognizes the pet's actions and emotions by artificial-intelligence methods and establishes a communication channel between pet and owner, helping the owner understand the pet's needs and moods while helping the pet learn the owner's instructions and human habits and order, so that the pet blends better into the family and the human world.

Description

Smart pet training and accompanying method, device, equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence and pet feeding, in particular to a method, a device, equipment and a storage medium for training and accompanying a smart pet.
Background
Pet training takes a great deal of time and effort, but most pet owners lack the necessary time, patience and skill. Some owners send their pets to a training school or hire a trainer to come to their home, but pet training is a long-term process: many pets perform well right after training, yet revert to their old behaviour after a period of time.
Existing pet-accompanying equipment mainly provides services such as feeding or playing with the pet, offering only the most basic companionship. It cannot establish a communication channel between the pet and the owner, nor help the pet integrate into the human world.
Disclosure of Invention
The application provides a smart pet training and accompanying method, device, equipment and storage medium, to address the problem in the prior art that pet owners lack the time and energy to train and accompany their pets, so that the pets cannot communicate smoothly with their owners and cannot be helped to integrate into the family.
According to a first aspect, an embodiment provides a smart pet training method, the method comprising:
s11: video information and voice information corresponding to pets needing training are prestored, wherein the voice information comprises pet names, instruction information and training feedback, and the instruction information in the voice information is matched with pet posture actions in the video information;
s12: receiving the selection of a user on video information, voice information, a pet and the state of the pet, and presetting or remotely setting the playing time and the playing times of the video information and the voice information;
s13: playing corresponding voice information, calling the pet and issuing instruction information, playing video information of corresponding pet posture actions, guiding the pet to pay attention to and watch the video information, and learning the pet posture actions;
s14: acquiring image information synchronously acquired by image acquisition equipment, inputting the image information into a pre-trained image recognition model, and recognizing pet posture actions;
s15: matching the recognition result with the pet posture action corresponding to the instruction information, if the recognition result is matched with the pet posture action corresponding to the instruction information, outputting positive feedback, otherwise, outputting negative feedback;
s16: repeatedly executing steps S13-S15 to guide the training of the pet's daily habits until the pet has mastered the corresponding command actions.
Preferably, in step S14, the method for recognizing pet gesture motion by inputting image information into a pre-trained image recognition model includes:
inputting the acquired image information into an image recognition model,
detecting the position of the pet in the image by using a computer vision target detection algorithm,
identifying the key points of the pet by using a computer vision key point algorithm,
analyzing the posture of the pet by using the pet key points,
and analyzing the action of the pet according to the change of the pet key points.
Preferably, before step S11 the method further comprises acquiring the video information and voice information for training the pet, the method comprising:
collecting audio, video and image information including pets and pet owners;
inputting the audio information into a pre-trained voice recognition model, and recognizing the calling of the pet owner to the pet and the instruction information issued;
and inputting the video/image information into a pre-trained image recognition model, recognizing pet posture actions in the video/image information, analyzing and judging whether the pet posture actions accord with instruction information of a pet owner, and outputting feedback according to a judgment result.
Preferably, before training the pet, the method further comprises: establishing and training a relationship between pet posture actions and pet requirements, wherein the method comprises the following steps:
presetting pet posture actions to be associated with pet requirements;
before the pet requirement is finished, playing corresponding voice information and video information, guiding the pet to learn, and repeating for many times until the pet associates the pet posture action with the pet requirement;
after the association is established, the posture action of the pet is recognized in real time, the pet requirement is obtained, corresponding feedback measures are made according to the pet requirement, and the feedback is given back to the pet owner.
Preferably, before training the pet, the method further comprises presetting and guiding the training of the pet's daily habits, the method comprising:
presetting pet posture actions to be associated with pet habits;
step S14 is executed, image information is synchronously acquired through image acquisition equipment, and pet posture actions are recognized; judging whether the current pet posture action conforms to a preset pet habit or not;
if the current pet posture action is in accordance with the preset pet habit, positive feedback is output, otherwise negative feedback is output, and correct pet posture action is demonstrated by playing images and video information.
Preferably, guiding the training of the pet's daily habits further comprises: a method of guiding the pet to understand the pet owner's intention, the method comprising:
recognizing voice information issued by a pet owner, converting the voice information into instruction information, and then playing the instruction information in a video and image mode; acquiring and recognizing pet posture actions; and when the pet posture action is judged to be consistent with the action of playing the video and the image, positive feedback is output, and otherwise negative feedback is output.
Preferably, guiding the training of the pet's daily habits further comprises: a method of guiding the pet to relieve itself in the correct place, the method comprising:
when step S14 is executed, synchronously acquiring image information through the image acquisition equipment, and using computer vision either to recognize the pet's urination/defecation posture in the image information or to recognize whether pet excrement appears in it, so as to judge whether the pet has relieved itself;
meanwhile, recognizing the position of the pet's excrement and judging whether it is in the correct place; if so, outputting positive feedback, otherwise outputting negative feedback and prompting and guiding the pet to the correct place by playing the corresponding image and voice information.
According to a second aspect, an embodiment provides a smart pet accompanying method, the method comprising:
s21: collecting information including audio, video and images of pets;
s22: inputting the collected audio information into a preset sound recognition model, and extracting the pet call; inputting the video and image information into a preset image recognition model, and recognizing information including pet posture actions and positions;
s23: the pet calling is input into a preset emotion recognition model, and a plurality of pet emotion information of the pet is obtained through analysis;
s24: adjusting and confirming the pet emotion information obtained in step S23 according to the pet's posture, actions and position;
s25: the pet emotion is transmitted to the external equipment, so that the pet owner is reminded, and the pet owner can interact with the pet and perform corresponding remote operation control according to the pet emotion information.
Preferably, before step S24, a correspondence between pet emotions and time is established; in step S24, when a pet emotion is determined, the time of its occurrence and the video and audio information before and after that time are transmitted to the external equipment to prompt the pet owner.
According to a third aspect, an embodiment provides a smart pet training and accompanying device, comprising a device body on which a main control unit, an acquisition unit, a processing unit, an output unit, a feeding unit and a mobile unit are arranged;
the acquisition unit is connected with the processing unit, and the main control unit is respectively connected with the processing unit, the output unit and the feeding unit;
the acquisition unit is used for acquiring audio, video and image information including pets and/or pet owners and transmitting the information to the processing unit;
the processing unit is used for processing the video and image information according to a computer vision algorithm and identifying the video and image information including the posture, the action and the position of the pet; the system is also used for processing by utilizing audio recognition and a natural language algorithm, recognizing the audio information of the pet and/or the pet owner, and analyzing the information comprising the emotion of the pet and the instruction of the pet owner; then transmitting the identification result to the main control unit;
the main control unit is used for receiving the recognition result, controlling the output unit to output audio and/or video and/or image information according to the recognition result, and controlling the feeding unit to dispense food;
the mobile unit is used for receiving the mobile information sent by the main control unit and controlling the moving direction and distance of the equipment main body.
Preferably, the acquisition unit comprises an image acquisition unit and a sound acquisition unit, and the image acquisition unit is connected with the processing unit and is used for acquiring image information including pets and transmitting the image information to the processing unit; the sound collection unit is connected with the processing unit and used for collecting audio information including pets and/or pet owners and transmitting the audio information to the processing unit.
Preferably, the output unit comprises a display unit and a sound output unit, and the display unit is connected with the main control unit and used for playing image information or video information; the sound output unit is connected with the main control unit and used for playing audio information.
According to a fourth aspect, there is provided in an embodiment an electronic device comprising: one or more processors, storage for storing one or more programs, which when executed by the one or more processors, cause the one or more processors to implement a method as in any above.
According to a fifth aspect, an embodiment provides a computer-readable medium on which a computer program is stored which, when executed by a processor, implements any of the methods above.
Compared with the prior art, the invention has the beneficial effects that:
(1) the pet posture action learning method and the pet posture action learning device guide the pet to follow the instruction by playing the video and the audio, judge the learning effect of the pet by the image recognition technology, promote the pet to skillfully master the corresponding instruction action by the reward mechanism, and help the pet owner to train the favorite pet;
(2) according to the invention, various computer vision algorithms in the image recognition model are utilized to accurately recognize the position of the pet in the image and accurately recognize the key points of the pet, so that the posture of the pet is analyzed according to the key points, and the action of the pet is further analyzed according to the change of the key points;
(3) the invention uses voice recognition technology to recognize the calls and instructions the pet owner issues to the pet, and further uses image recognition technology to recognize whether the pet's posture action matches the instruction issued by the owner, giving a corresponding reward according to the matching result, prompting the pet to master the corresponding command actions and helping the owner educate his or her beloved pet;
(4) the invention relates the pet posture action and the pet requirement, so as to grasp the pet requirement according to the pet posture action and make corresponding feedback according to the pet requirement.
(5) the invention associates pet posture actions with pet habits and guides the training of the pet's daily habits, including relieving itself properly and not wrecking the home, thereby helping the pet learn the owner's instructions and human habits and order, so that the pet blends better into the family and the human world;
(6) the invention uses artificial-intelligence voice recognition and action recognition to recognize the pet's emotion and make corresponding feedback, providing a communication channel through which the pet understands the owner's intentions and the owner understands the pet's. The owner's intentions include the instructions given by the owner, with the device translating the owner's everyday words into simple instructions the pet can understand; the pet's intentions include its needs and moods. Meanwhile, the pet can take in the owner's requirements and the behavioural norms that are set;
(7) the smart pet training and accompanying device provided by the invention integrates the accompanying and training functions, effectively recognizes the pet's actions and emotions, and thereby establishes a communication channel between pet and owner. It helps the owner understand the pet's needs and moods, helps the pet learn the owner's instructions and human habits and order, and helps the pet blend better into the family and the human world.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for training a smart pet according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a method for training a smart pet according to an embodiment of the present invention;
FIG. 3 is a schematic view of a pet gesture recognition process according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating a pet companion method according to an embodiment of the invention;
fig. 5 is a schematic structural diagram of a smart pet training and accompanying device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings. The described embodiments are evidently only some of the embodiments of the present invention, not all of them; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
The terminology used in the embodiments of the present application is for describing particular embodiments only and is not intended to limit the application. As used in the examples of this application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that the term "and/or" used herein merely describes an association between objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" herein generally indicates an "or" relationship between the objects before and after it.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe … …, these … … should not be limited by those terms, which are used only to distinguish … … from one another. For example, a first … … may also be called a second … …, and similarly a second … … may be called a first … …, without departing from the scope of the embodiments herein.
The word "if" as used herein may be interpreted, depending on the context, as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrases "if determined" or "if (a stated condition or event) is detected" may be interpreted, depending on the context, as "when determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)".
It should also be noted that the terms "comprises", "comprising" and any variants thereof are intended to cover a non-exclusive inclusion, so that a product or system comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to the product or system. Without further limitation, an element preceded by "comprising a(n) … …" does not exclude the presence of additional identical elements in the product or system that includes it.
At present, more and more households keep pets, and untrained pets often cause their owners all kinds of trouble, such as relieving themselves everywhere or wrecking the home. In general, pet owners want their pets to obey instructions, adapt to human living habits and order, coexist harmoniously with other people and pets, and stay physically and mentally healthy. However, owners may not have enough time, patience or skill to educate and accompany their pets. In addition, the communication barrier between pets and their owners is innate and has long gone unsolved; for example, humans can describe their needs in rich language, whereas a pet can only understand simple instructions.
It is an object of the present invention to alleviate or solve these problems. The pet's actions and emotions are recognized by artificial-intelligence methods, and a communication channel between the pet and its owner is thereby built. This can help the owner understand the pet's needs and moods, and guide the pet to learn the relevant command actions and human living habits and order, so that the pet blends better into the family and the human world.
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example 1
As shown in FIGS. 1-2, according to a first aspect of the present invention, there is provided a smart pet training method, comprising the following steps.
Step S11: video information and voice information corresponding to pets needing training are prestored, the voice information comprises pet names, instruction information and training feedback, and the instruction information in the voice information is matched with the pet posture actions in the video information.
The voice information in this embodiment is not limited to voice recorded by the pet owner; other voices may also be selected. To help cultivate the communication channel between the pet and its owner, the owner's own voice may be prestored in this embodiment. The instruction information in this embodiment includes, but is not limited to, voice commands for shaking hands, circling, rolling over, sitting, standing, staying still, coming closer and moving away. Instruction information here means instructions the pet can learn to recognize; the voice uttered by the pet owner may or may not contain instruction information, so the instructions within the owner's speech need to be recognized.
Before step S11, the method includes collecting video information and voice information for training the pet, and includes: collecting audio, video and image information including pets and pet owners; inputting the audio information into a pre-trained voice recognition model, and recognizing the calling of the pet owner to the pet and the issued instruction information; and inputting the video/image information into a pre-trained image recognition model, recognizing pet posture actions in the video/image information, analyzing and judging whether the pet posture actions accord with instruction information of a pet owner, and outputting feedback according to a judgment result.
Further, before the smart training of the pet, the method further comprises the following steps: establishing and training a relation between pet actions and pet requirements; the guide trains the daily habits of the pet, further, guides the pet to understand the intention of the pet owner and guides the pet to correctly relieve the bowels, and the like.
In this embodiment, the video, audio and image information for training the pet is prestored, and information such as the training time and the number of repetitions is also set in advance. In one embodiment, the recordings can be made in advance by the pet owner, so that the pet becomes familiar with the owner's commands; audio and video can of course also be recorded through the mobile phone APP and then transmitted to the training equipment.
The pre-stored training video and voice information can be selected according to the pet owner's needs, and includes the pet's name and commands such as lying down, sitting down and rolling over, with the instruction information in the pre-stored voice matched to the pet actions recorded in the video. In this step, recorded videos of a pet executing the instructions are prepared in advance and matched to the breed of the pet. In addition, the instruction information and the recorded video information can be updated remotely.
Step S12: and receiving the selection of the user on the video information, the voice information, the pet and the state of the pet, and presetting or remotely setting the playing time and the playing frequency of the video information and the voice information.
After the corresponding pet, video information and voice information are selected, the video information is played on a display device. The display device can be a display screen of various kinds, such as a black-and-white screen or an LCD screen; this embodiment preferably uses a TFT LCD color display screen, in any of various sizes and specifications such as 7 inches or 10 inches. The playing time and frequency of the audio and video information can follow a preset program, or of course a remotely set training program.
Further, when presetting or remotely setting the playing time and frequency of the training video and voice, the pet's state must also be selected, and whether training is needed is judged from that state, so as to achieve intelligent training. The pet's state is obtained to judge whether the current moment is suitable for a training operation: for example, a sleeping pet is not suitable for training, and neither is a pet that is practising some actions by itself; in such cases the current state is judged unsuitable. To judge suitability, image information is acquired during the reserved training period and the pet's posture and actions are recognized; when the pet is judged to be in a state suitable for training, training is carried out.
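For illustration only, this state gate reduces to a small check; in the Python sketch below the state labels are hypothetical (the patent only gives "sleeping" and "practising by itself" as examples of unsuitable states):
```python
# Hypothetical state labels produced by the posture/action recognizer of
# step S14; any label not in this set is treated as suitable for training.
UNSUITABLE_STATES = {"sleeping", "self-practising"}


def ready_for_training(pet_state: str) -> bool:
    """Return True when the recognized pet state permits a training session."""
    return pet_state not in UNSUITABLE_STATES
```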
Step S13: playing corresponding voice information, calling the pet and issuing instruction information, playing video information of correct pet posture action, guiding the pet to pay attention to and watch the video information, and learning pet action.
In this step, during training, the corresponding video information is played according to a preset instruction through the display device, and the corresponding voice information is played to call and issue instruction information to the pet, so that the pet is attracted to notice and watch the video information, and the pet corresponding to the instruction information is guided to learn.
In step S13, when the pet owner trains the pet, the audio information of the pet owner is collected in real time, and the audio information is input to the pre-trained voice recognition model, so that the call and the instruction information issued by the pet owner to the pet are recognized.
Step S14: and acquiring image information synchronously acquired by image acquisition equipment, inputting the image information into a pre-trained image recognition model, and recognizing pet posture actions.
In step S14, the image information is input into a pre-trained image recognition model to recognize the pet's posture and actions; as shown in fig. 3, this comprises the following sub-steps (a code sketch follows the list):
s141: inputting the collected image information into an image recognition model;
s142, detecting the position of the pet in the image by using a computer vision target detection algorithm;
s143, identifying the key points of the pet by using a computer vision key point algorithm;
s144, analyzing the posture of the pet by using the pet key points;
and S145, analyzing the action of the pet according to the change of the pet key point.
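As a rough illustration of sub-steps S141-S143, the following Python sketch wires a pet detector and a keypoint model together. Both models are stand-ins to be supplied by the reader, and the data shapes are assumptions, not details from the patent:
```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np


@dataclass
class Detection:
    box: Tuple[int, int, int, int]   # x1, y1, x2, y2 in pixels
    score: float                     # detector confidence


def recognize_keypoints(
    frame: np.ndarray,
    detect_pet: Callable[[np.ndarray], List[Detection]],
    predict_keypoints: Callable[[np.ndarray], np.ndarray],
) -> np.ndarray:
    """S142: locate the pet in the image; S143: predict its key points.

    Returns an (N, 2) array of keypoint coordinates in the frame.
    """
    detections = detect_pet(frame)
    if not detections:
        raise ValueError("no pet detected in this frame")
    best = max(detections, key=lambda d: d.score)   # keep the surest detection
    x1, y1, x2, y2 = best.box
    crop = frame[y1:y2, x1:x2]                      # crop for the keypoint net
    keypoints = predict_keypoints(crop)             # coords local to the crop
    return keypoints + np.array([x1, y1])           # back to frame coordinates
```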
In this step, while the pet is guided to watch the video, the image acquisition equipment synchronously collects image information capturing the pet's posture and actions; the collected images are then input into the pre-trained image recognition model, which recognizes the pet's posture and actions. In this embodiment, the image acquisition equipment may be a camera assembly.
In this embodiment, pre-training the image recognition model to recognize pet posture actions further includes the following steps:
and acquiring a pre-trained image recognition model and training data, wherein the training data comprises pet posture and action data and standard description information corresponding to the pet posture and action data. The pet gesture motion data is input into the image recognition model, and the description information of the pet gesture motion is obtained according to the image recognition model, so that the gesture motion of the pet is recognized.
Furthermore, the image recognition model adopts a computer vision target detection algorithm to detect the position of the pet in the image, adopts a computer vision key point algorithm to recognize the key points of the pet, adopts the pet key points to analyze the posture of the pet, and analyzes the action of the pet along with the change of the pet key points.
Taking a pet dog as an example, some of the dog's key points are marked in the drawing. Using a computer-vision keypoint detection algorithm, the dog's key points can be detected in the image, and the pet's posture can be analyzed from the positional relationships of the key points; for example, the key points of the dog's legs reveal that the dog is standing. The pet's actions can be analyzed from the positions of the key points and their changes across consecutive video frames; for example, when a dog wags its tail, the key point on the tail swings back and forth with a large amplitude.
In this embodiment, a large number of images of pets need to be collected in advance, and the keypoints of the pets need to be labeled, and then a model for detecting the keypoints is trained by using the images. After an image with a pet is input into the model, the position corresponding to the pet key point in the image can be obtained. The application range of the model can be adjusted according to requirements, for example, one model is trained by all dogs, or the pet dogs are subdivided into different breeds, and one model is trained by each breed. After the key point positions of the pet are obtained, the posture and the action of the pet can be analyzed and obtained. While different actions by the pet may indicate different moods, such as cat wagging tail indicating anger and dog wagging tail indicating happiness.
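Continuing the sketch, posture (S144) and action (S145) can then be read off the key points. The skeleton indices and thresholds below are invented for illustration; a real trained model defines its own skeleton layout:
```python
from typing import List

import numpy as np

# Invented keypoint indices; a trained model fixes its own skeleton layout.
TAIL_TIP, HIP, LEFT_PAW, RIGHT_PAW = 0, 1, 2, 3


def is_standing(kpts: np.ndarray, margin: float = 0.25) -> bool:
    """S144: a crude posture rule - the hip sits well above the paws.

    Image y grows downward, so a smaller y value means higher in the image.
    """
    paw_y = min(kpts[LEFT_PAW, 1], kpts[RIGHT_PAW, 1])
    return paw_y - kpts[HIP, 1] > margin * paw_y


def tail_wag_amplitude(kpts_per_frame: List[np.ndarray]) -> float:
    """S145: action from keypoint change - lateral swing of the tail tip
    across consecutive video frames."""
    xs = np.array([k[TAIL_TIP, 0] for k in kpts_per_frame])
    return float(xs.max() - xs.min())
```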
Step S15: and matching the recognition result with the pet posture action corresponding to the instruction information, if the pet posture is matched with the correct pet posture action, outputting positive feedback, and otherwise, outputting negative feedback.
In this step, the pet gesture action corresponding to the instruction information may be instruction information in the played voice information and description information corresponding to the played video information, or instruction information in the recognized voice information issued by the pet owner. In this step, the recognition result is the description information of the pet gesture motion, the description information of the pet gesture is matched with the description information corresponding to the played audio/video information, and the matching result is output.
In this embodiment, the positive feedback output includes, but is not limited to, dispensing a snack and playing specific sounds or videos praising the pet; the negative feedback output includes, but is not limited to, ignoring the pet for a period of time and/or playing specific sounds and/or videos criticizing the pet.
Step S16: repeatedly executing steps S13-S15 to guide the training of the pet's daily habits until the pet has mastered the corresponding command actions.
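A condensed sketch of the S13-S16 loop, with the playback, recognition and feedback actions injected as callables; the mastery criterion of three consecutive successes is an assumption for illustration, not taken from the patent:
```python
import time
from typing import Callable


def training_session(
    play_instruction: Callable[[], None],   # S13: call pet, play audio/video
    observe_action: Callable[[], str],      # S14: recognized posture action
    target_action: str,                     # action matching the instruction
    positive_feedback: Callable[[], None],  # S15: e.g. dispense a snack
    negative_feedback: Callable[[], None],  # S15: e.g. corrective sound
    max_rounds: int = 10,
    mastery_streak: int = 3,                # assumed mastery criterion
) -> bool:
    """Repeat S13-S15 until the action is mastered or rounds run out (S16)."""
    streak = 0
    for _ in range(max_rounds):
        play_instruction()
        if observe_action() == target_action:
            positive_feedback()
            streak += 1
            if streak >= mastery_streak:
                return True
        else:
            negative_feedback()
            streak = 0
        time.sleep(2.0)   # give the pet a pause between repetitions
    return False
```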
In step S11-step S16, before step S11, the method for collecting video information and voice information of training pet comprises: collecting audio, video and image information including pets and/or pet owners; inputting the audio information into a pre-trained voice recognition model, and recognizing the calling of the pet owner to the pet and the instruction information issued; executing step S14, recognizing the pet gesture and action in the video/image information; and step S15, judging whether the pet executes the instruction of the pet owner or not, and outputting feedback according to the judgment result.
Further, before training the pet, the method further comprises the following steps: establishing a relationship between pet posture actions and pet requirements, wherein the method comprises the following steps: presetting pet posture actions to be associated with pet requirements; before the pet requirement is finished, playing corresponding voice information and video information, guiding the pet to learn, and repeating for many times until the pet associates the pet posture action with the pet requirement; after the association is established, the posture action of the pet is recognized in real time, the pet requirement is obtained, corresponding feedback measures are made according to the pet requirement, and the feedback is given back to the pet owner.
In this embodiment, the method for establishing and training the relationship between pet actions and pet needs includes: presetting pet actions to be associated with pet requirements; before the pet requirement is completed, step S13 is executed, the corresponding voice information and the video information of the pet action are played, the pet is guided to learn, and the process is repeated for many times until the pet associates the pet action with the pet requirement; after the pet requirement is completed, step S14 is executed, the pet action is recognized, the pet requirement is obtained, and the pet requirement is fed back to the pet owner.
The method for training the pet comprises the following steps: the method for guiding the daily habits of training the pet comprises the following steps: presetting pet posture actions to be associated with pet habits; step S14 is executed, image information is synchronously acquired through image acquisition equipment, and pet posture actions are recognized; judging whether the current pet posture action conforms to a preset pet habit or not; if the current pet posture action is in accordance with the preset pet habit, positive feedback is output, otherwise negative feedback is output, and correct pet posture action is demonstrated by playing images and video information.
In some embodiments, the method further comprises recognizing the pet's environment using a SLAM (simultaneous localization and mapping) algorithm and judging, in combination with the recognized environment information, whether the current pet posture action conforms to the preset pet habit.
The method for presetting and guiding the routine habits of the training pet in the embodiment comprises the following steps: presetting pet actions to be associated with pet habits; step S14 is executed, image information is synchronously acquired through image acquisition equipment, pet posture actions and environment information are recognized, and whether pet behaviors are consistent with preset pet habits or not is judged; if the pet action is consistent with the preset pet habit, positive feedback is output, otherwise negative feedback is output, and correct pet action is demonstrated in an image and video mode.
Guiding the daily habits of training the pet, further comprising: a method of guiding a pet to understand a pet owner's intention, the method comprising: recognizing voice information issued by a pet owner, converting the voice information into instruction information, and then playing the instruction information in a video and image mode; acquiring and recognizing pet posture actions; and when the pet posture action is judged to be consistent with the action of playing the video and the image, positive feedback is output, and otherwise negative feedback is output.
A method of guiding the training of the pet's daily habits, further comprising: a method of guiding the pet to relieve itself in the correct place, the method comprising: when step S14 is executed, synchronously acquiring image information through the image acquisition equipment, and using computer vision either to recognize the pet's urination/defecation posture in the image information or to recognize whether pet excrement appears in it, so as to judge whether the pet has relieved itself; meanwhile, recognizing the position of the excrement and judging whether it is in the correct place; if so, outputting positive feedback, otherwise outputting negative feedback and prompting and guiding the pet to the correct place by playing the corresponding image and voice information.
The method for guiding the pet to relieve itself correctly in this embodiment comprises: executing step S14, synchronously acquiring image information through the image acquisition equipment, recognizing the pet's posture and actions, and judging whether the pet is urinating or defecating; step S132: identifying, with a SLAM algorithm, whether the pet is relieving itself and the position where it does so; step S133: executing step S15, judging whether the pet's excrement is in the correct place; if so, outputting positive feedback, otherwise outputting negative feedback and indicating the correct place by playing image and voice information.
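The position check alone is simple once the SLAM stage yields map coordinates. A sketch under the assumption that the permitted spot is approximated by a circle (the patent also mentions recognizing markers such as a toilet; that part is omitted here):
```python
from typing import Tuple


def relieved_in_correct_spot(
    event_pos: Tuple[float, float],       # where the event was detected (map frame)
    allowed_center: Tuple[float, float],  # center of the permitted area
    allowed_radius: float,                # radius of the permitted area
) -> bool:
    """True when the detected event lies inside the permitted circular zone."""
    dx = event_pos[0] - allowed_center[0]
    dy = event_pos[1] - allowed_center[1]
    return dx * dx + dy * dy <= allowed_radius * allowed_radius
```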
The training method applied in example 1 can include the following aspects:
(1) method for training pet obeying instruction information
In the method, the pet obeying instruction information can be trained by recognizing the pet gesture action and combining the video information and the voice information.
Presetting instruction information of a pet to be trained, and receiving selection of a user on the pet and the instruction information; playing video information and voice information corresponding to the instruction information, wherein the video information and the voice information comprise calling the pet through the voice information and issuing the instruction information, and the video information of the correct pet posture action is played through the video information to guide the pet to learn the pet posture action; acquiring image information through image acquisition equipment, inputting the acquired image information into a pre-trained image recognition model, and recognizing pet posture actions; and matching the recognition result with the pet posture action corresponding to the issued instruction information, if the recognition result is matched with the correct pet posture action, outputting positive feedback, and otherwise, outputting negative feedback.
(2) Method for teaching pet to obey instruction information of pet owner
On the basis of training the pet obeying instruction information, the voice recognition technology can be combined to train the pet obey the instruction information of the pet owner.
Identifying instruction information issued by a pet owner to the pet; judging whether the pet executes the instruction information of the pet owner; if the pet correctly executes the instruction of the pet owner, positive feedback is output, otherwise negative feedback is output.
(3) Method for helping pet understand intention of pet owner
After training the pet to comply with the instructional information, the pet may be taught an intent to understand the pet's owner in conjunction with voice recognition and natural language processing.
Identifying an instruction issued by the pet owner to the pet: the pet's name is recognized by voice, confirming that the user's next utterance is instruction information issued to the pet; the instruction issued by the user is then converted into an instruction signal the pet finds easy to understand. In this embodiment, that signal may be a simple voice or sound signal, an image or video signal, or a combination of the two. Whether the pet executes the instruction is then judged: the pet owner may judge for himself whether the pet executed the instruction correctly and feed the result back to the device, or the device may infer it from the pet's posture and actions and the current environment. If the pet executes the instruction correctly, a reward is given; otherwise negative feedback is given.
for example, if the owner wants to add or subtract within the society 10, a few pet calls will indicate that the result is a few. Therefore, the answer of the questions given by the owner can be judged through voice recognition and natural language processing, then the instruction is transmitted to the pet by using the expression which is easy to understand by the pet, for example, each digital result is replaced by a special voice, and finally, how many voices the pet calls are detected to judge whether the pet correctly executes the instruction.
For another example, a pet owner may want the pet to fetch his slippers; when the device recognizes this intention, it can translate it into a brief voice prompt and display a picture of the slippers on the display unit, helping the pet understand the owner's intention.
(4) Method for helping pet owner to know pet demand
In this application, the pet owner sets a certain action or action combination to be associated with a specific need of the pet, where the specific need may be, for example, relieving itself, going out for a walk, or communicating remotely with the owner. The pet is trained until it has mastered the action or action combination. The method further comprises: before fulfilling the pet's need, issuing the instruction for the action or action combination to the pet; issuing the instruction repeatedly until the pet associates the action or action combination with the need; and, when the pet is detected performing the action or action combination, informing the pet owner. The instruction may be issued by the pet owner in person or by the device on the owner's behalf.
For example, a pet owner associates going out for a walk with the pet jumping three consecutive times in place; the device first teaches the pet this action. Then, each time the owner takes the pet out for a walk, the owner orders the pet to perform the action. After a number of repetitions, the pet associates the action with going for a walk. Later, when the pet wants to go out, it performs the action; when the device detects this, it sends the information to the owner, who then knows that the pet wants to go out for a walk.
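The need-reporting side can be as simple as a lookup from a recognized action combination to the associated need. The action labels and needs below are invented for illustration:
```python
from typing import Callable, Tuple

# Hypothetical action combinations (as produced by step S14) mapped to the
# needs the owner associated with them during training.
NEED_BY_ACTIONS = {
    ("jump", "jump", "jump"): "wants to go out for a walk",
    ("sit", "give_paw"): "wants a snack",
}


def report_need(
    recent_actions: Tuple[str, ...],
    notify_owner: Callable[[str], None],
) -> None:
    """Notify the owner when a trained action combination is detected."""
    need = NEED_BY_ACTIONS.get(recent_actions)
    if need is not None:
        notify_owner(f"Your pet {need}.")
```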
(5) Method for teaching pet to correctly relieve bowels
In this application, image information is acquired with the image acquisition equipment, and whether the pet is urinating or defecating, together with the position where it does so, is identified using image recognition and a SLAM method; whether that position is the correct place for the pet to relieve itself is then judged, for example by detecting specific articles such as a toilet. When the pet relieves itself in the correct place, positive feedback is output; otherwise negative feedback is output and the correct place is indicated with images and voice.
(6) Method for teaching pets to adapt to human habits and order
On the basis of detecting the pet's postures and actions, the pet can be taught to adapt to human habits and order.
The pet owner sets the human etiquette the pet needs to comply with; such etiquette may include relieving itself in the right place, flushing the toilet afterwards, not biting or wrecking things, refusing food not given by the owner, and not pouncing on people;
monitoring the pet's state in real time and judging whether the pet violates the human etiquette; either the pet owner judges whether the pet observed the etiquette and feeds the result back to the equipment, or the equipment infers the pet's behaviour from its posture, its actions and the current environment;
if the pet correctly executes the etiquette, positive feedback is given, otherwise negative feedback is given, and when the pet violates the human etiquette, correct actions can be demonstrated to the pet in an image or video mode.
For example, suppose the pet owner sets a rule that the pet must not pounce on people. The device monitors the pet in real time; if the pet is seen pouncing while a person is in front of it, it can be determined to have violated the etiquette, and negative feedback is given. When a guest comes in and the pet greets the guest correctly without pouncing, it can be determined to have obeyed the human etiquette correctly, and positive feedback is given.
Example 2
According to a second aspect, as shown in fig. 4, an embodiment of the present invention provides a smart pet accompanying method comprising the following steps.
Step S21: information including audio, video and images of the pet is collected.
Step S22: inputting the collected audio information into a preset sound recognition model, and extracting the pet call; and inputting the video and image information into a preset image recognition model, and recognizing information including pet posture actions and positions.
Step S23: and inputting the pet call into a preset emotion recognition model, and analyzing to obtain a plurality of pet emotion information of the pet. In this step, the emotional information of the pet may include, but is not limited to, happiness, anger, fear, and sadness.
In this embodiment, a large amount of pet cry data of many kinds is collected in advance, the emotion corresponding to each cry is labeled, and an emotion recognition model is trained on the data. When pet cry data is input into the emotion recognition model, the emotion information expressed by the cry is obtained.
Step S24: adjusting and confirming the pet emotion information obtained in step S23 according to the pet's posture, actions and position. Before step S24 is executed, the correspondence between pet emotions and time is established; in step S24, when a pet emotion is determined, the time of its occurrence and the video and audio information before and after that time are transmitted to the external equipment to prompt the pet owner.
In this embodiment, the image recognition model recognizes the pet's posture and actions: once the pet's keypoint positions are obtained, its posture and actions can be derived by analysis. Different actions may indicate different moods; for example, a cat wagging its tail indicates anger while a dog wagging its tail indicates happiness. Emotion information can therefore also be obtained through the image recognition model. Since neither method of recognizing emotion can reach 100% accuracy, the invention combines the two to judge the pet's real emotion.
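One common way to combine two imperfect estimates is weighted late fusion of their class probabilities. A sketch under that assumption; the weight and the emotion set are illustrative, the patent does not specify the fusion rule:
```python
import numpy as np

EMOTIONS = ["happy", "angry", "afraid", "sad"]   # example emotion set


def fuse_emotion(
    cry_probs: np.ndarray,    # probabilities from the cry-based model (S23)
    pose_probs: np.ndarray,   # probabilities from posture/action analysis
    cry_weight: float = 0.6,  # assumed relative trust in the audio channel
) -> str:
    """Adjust the cry-based estimate with the posture-based one (S24)."""
    fused = cry_weight * cry_probs + (1.0 - cry_weight) * pose_probs
    return EMOTIONS[int(np.argmax(fused))]


# e.g. fuse_emotion(np.array([.5, .3, .1, .1]), np.array([.7, .1, .1, .1]))
```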
Step S25: and transmitting the pet emotion to external equipment in a remote interaction mode to remind a pet owner so that the pet owner can make corresponding remote control operation according to the pet emotion information.
The chaperoning method in the present embodiment may preferably include the following aspects.
(1) Method for recognizing pet posture action of pet
The image acquisition unit is used for acquiring image and video information, the computer vision target detection algorithm is used for detecting the position of the pet in the image, the computer vision key point algorithm is used for identifying the key points of the pet, the pet key points are used for analyzing the posture of the pet, and the action of the pet is analyzed according to the change of the pet key points.
(2) Method for recognizing pet emotion of pet
Voice information is collected by the sound collection unit and input into the sound recognition model, which recognizes and extracts the pet's cries from it; the cries are input into the emotion recognition model, which recognizes the emotion information matching them; the pet's posture and actions are then acquired and analyzed to adjust and confirm the pet's emotion information.
(3) Method for helping pet owner to know pet condition
The emotions identified above can be transmitted to the pet owner so that the owner understands the pet better. The method may comprise: recording the pet's emotions and the corresponding time information; recording the video and sound before and after a specific emotion or action of the pet; and, when a specific emotion occurs, sending the time of the emotion and the video and sound around it to the pet owner.
(4) Method for helping pet owner to know pet health information
The pet's health information can be analyzed from its amount of exercise, so that the owner learns about the pet's health. The method may comprise: recording the pet's position and time information, using the position information to compute the pet's activity over a period of time, and sending a prompt to the pet owner when the activity is abnormal.
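A sketch of the activity statistic and its anomaly test, assuming positions come as an (N, 2) array per day and "abnormal" means far from the pet's own recent baseline (the threshold is an assumption, not from the patent):
```python
from typing import Sequence

import numpy as np


def daily_activity(positions: np.ndarray) -> float:
    """Total distance travelled in a day, from an (N, 2) position track."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())


def activity_is_abnormal(
    today: float,
    recent_days: Sequence[float],
    k: float = 2.0,   # assumed threshold: k standard deviations from baseline
) -> bool:
    """Flag a day whose activity departs strongly from the pet's baseline."""
    mu = float(np.mean(recent_days))
    sigma = float(np.std(recent_days))
    return abs(today - mu) > k * max(sigma, 1e-6)
```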
example 3
As shown in FIG. 5, according to a third aspect of the present invention, there is provided a smart pet training and accompanying device. The device comprises a device body on which a main control unit 100, an acquisition unit 300, a processing unit 200, an output unit 400, a feeding unit 600 and a mobile unit 700 are arranged.
The collecting unit 300 is connected with the processing unit 200, and the main control unit 100 is respectively connected with the processing unit 200, the output unit 400 and the feeding unit 600.
The collecting unit 300 is used for collecting audio, video and image information including pets and/or pet owners, and transmitting the information to the processing unit 200.
The processing unit 200 is configured to process the video and image information according to a computer vision algorithm, and identify information including the posture, the motion and the position of the pet in the video and image information; the system is also used for processing by utilizing audio recognition and a natural language algorithm, recognizing the audio information of the pet and/or the pet owner, and analyzing the information comprising the emotion of the pet and the instruction of the pet owner; and then transmitting the identification result to the main control unit.
The main control unit 100 is configured to receive the recognition result, then control the output unit 400 to output audio and/or video and/or image information according to the result, and control the feeding unit 600 to dispense food.
The moving unit 700 is connected to the main control unit 100 and is configured to receive movement information sent by the main control unit 100 and to control the direction and distance in which the device body moves.
The feeding unit arranged on the device body is a feeding apparatus consisting of a food storage box, a dispensing mechanism and a food tray (box). The food may be the pet's staple food or snacks. The feeding unit can dispense food under the pet owner's remote-control instructions, or according to a program preset on the device body. In one embodiment, a weight sensor is arranged at the bottom of the food storage box, and the device automatically reminds the pet owner when the food is insufficient.
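A minimal sketch of the low-food reminder, assuming a pollable weight sensor and a simple threshold; the sensor interface, the threshold and the reminder channel are illustrative:

```python
# Illustrative sketch of the low-food reminder: poll the weight sensor under
# the storage box before dispensing. Sensor call, threshold and reminder
# channel are assumptions.
LOW_FOOD_GRAMS = 100.0

def read_storage_weight() -> float:
    """Stand-in for the weight sensor at the bottom of the food storage box."""
    return 80.0  # placeholder reading, in grams

def dispense(portion_grams: float) -> None:
    if read_storage_weight() < LOW_FOOD_GRAMS:
        print("remind owner: food storage is running low")  # assumed push
    print(f"dispensing {portion_grams} g")  # stand-in for the mechanism
```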
The acquisition unit 300 comprises an image acquisition unit 310 and a sound acquisition unit 320, wherein the image acquisition unit 310 is connected with the processing unit 200 and is used for acquiring image information including pets and transmitting the image information to the processing unit 200 for identification and analysis; the sound collection unit 320 is connected to the processing unit 200, and is configured to collect audio information including pets and/or pet owners, and transmit the audio information to the processing unit 200 for recognition, analysis and processing.
In this embodiment, the image acquisition unit may adopt a camera assembly containing one or more camera components; in this embodiment one camera component is preferred. The camera's viewing angle may be chosen between 70 and 170 degrees, and its resolution may be selected as 1 megapixel (1,000,000 pixels) or higher.
The camera can be arranged on the device body. In one embodiment, the device body is shaped like a human figure and includes a head. The head can be designed to rotate or not, depending on the product positioning; when the head can rotate, the device can automatically detect and track the pet.
The output unit 400 comprises a display unit 410 and a sound output unit 420, wherein the display unit 410 is connected with the main control unit 100 and is used for playing image information or video information; the sound output unit 420 is connected to the main control unit 100 and is used for playing audio information.
The display unit 410 in this embodiment may be a black-and-white screen, an LCD screen, or another display device, preferably a TFT LCD color display, and may be 7 inches, 10 inches, or various other sizes and specifications.
The sound output unit 420 in this embodiment may adopt a speaker, and the corresponding sound collection unit 320 may adopt a microphone, so as to implement a voice interaction function. One or more sound output units 420 and sound collection units 320 may be used, set according to different product configurations.
The moving unit 700 in this embodiment may be configured as a wheeled device. When the main control unit 100 on the device body issues movement instruction information, the moving unit 700 moves according to the instruction after receiving it, driving the device body. In this embodiment, the device body may also be provided with a pet toy projection device, increasing the pet's exercise and entertainment.
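A sketch of how the moving unit might act on a movement instruction carrying direction and distance; the message format and the motor interface are assumptions:

```python
# Illustrative sketch of the moving unit acting on a movement instruction
# (direction and distance) from the main control unit. The message format and
# motor interface are assumptions.
from dataclasses import dataclass

@dataclass
class MoveCommand:
    heading_deg: float   # direction in which to move
    distance_cm: float   # how far to travel

def drive_wheels(heading_deg: float, distance_cm: float) -> None:
    print(f"turn to {heading_deg} deg, advance {distance_cm} cm")  # stand-in

def on_move_command(cmd: MoveCommand) -> None:
    drive_wheels(cmd.heading_deg, cmd.distance_cm)

on_move_command(MoveCommand(heading_deg=90.0, distance_cm=50.0))
```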
In addition, the device body is further provided with a communication unit 500 connected with the main control unit 100; the communication unit 500 is used for wireless connection with an external terminal, so that the pet owner can remotely control the device body. In some alternative embodiments, the communication unit 500 may connect remotely to the pet owner's mobile phone or another dedicated terminal using WIFI, Bluetooth, 4G/5G or other wireless communication technologies.
The pet owner can set the playing time and the number of plays, record in advance the required instruction information and the name of the pet to be trained through the mobile phone APP, remotely observe the pet's state at home or elsewhere through a mobile phone or similar terminal, and interact with the pet in images and voice. The pet can likewise interact with the owner through the device's display unit 410, sound collection unit 320 and sound output unit 420.
In addition, the communication unit 500 on the device body connects to the Internet through wireless modes such as WIFI/Bluetooth/2G/3G/4G/5G/6G. On one hand, various pet data can be uploaded to the cloud server in real time; on the other hand, updated algorithm models, instructions, videos, pictures and other related content can be downloaded from the cloud server, and the pet owner can also control the device remotely.
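A sketch of this cloud round-trip, assuming a hypothetical HTTP endpoint (the URL and payload shape are illustrative, not part of the disclosure):

```python
# Illustrative sketch of the cloud round-trip: upload pet data, pull updated
# models and content. The endpoint URL and payload shape are hypothetical.
import requests

CLOUD = "https://example.com/api"  # hypothetical cloud server

def upload_pet_data(record: dict) -> None:
    requests.post(f"{CLOUD}/pet-data", json=record, timeout=5)

def fetch_updates(model_version: str) -> dict:
    r = requests.get(f"{CLOUD}/updates",
                     params={"model": model_version}, timeout=5)
    return r.json()  # e.g. new model weights URL, instruction packs, videos
```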
Example 4
According to a fourth aspect of the present invention, there is provided an electronic device comprising one or more processors and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the following methods:
(1) Video information and voice information corresponding to the pet to be trained are prestored, wherein the voice information comprises the pet's name, instruction information and training feedback, and the instruction information in the voice information is matched with the pet posture actions in the video information; the user's selection of the pet, the video information and the voice information is received, and the playing time and number of plays of the video information and the voice information are preset or set remotely; the corresponding voice information is played, the pet is called and the instruction information is issued, the video information of the correct pet posture action is played, and the pet is guided to pay attention to and watch the video information and learn the pet posture action; image information is synchronously acquired through the image acquisition device and input into a pre-trained image recognition model to recognize the pet posture action; the recognition result is matched with the pet posture action corresponding to the issued instruction information, positive feedback is output if it matches the correct pet posture action, otherwise negative feedback is output; and the above is repeated until the pet has skillfully mastered the corresponding instruction action. A sketch of this training loop is given after item (2) below.
(2) Information including the pet's audio, video and images is collected; the collected audio information is input into a preset sound recognition model and the pet's calls are extracted; the video and image information is input into a preset image recognition model and information including the pet's posture actions and position is recognized; the pet's calls are input into a preset emotion recognition model and several candidate pet emotions are obtained by analysis; the candidate emotion information is adjusted and confirmed according to the pet's posture actions and position; and the pet's emotion is transmitted to the external device by remote interaction to remind the pet owner, so that the owner can perform corresponding remote-control operations according to the pet's emotion information.
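The training loop of method (1) can be sketched as follows; every device call is a stand-in for the units described in this application, and the pet name, feedback phrases and mastery criterion are illustrative assumptions.

```python
# Illustrative sketch of the training loop of method (1). Every device call
# is a stand-in; the pet name, feedback phrases and mastery criterion are
# assumptions, not the claimed method.
PET_NAME = "Bobby"  # assumed, recorded in advance via the phone APP

def play_audio(text: str) -> None: print("audio:", text)
def play_video(action: str) -> None: print("video demo:", action)
def capture_image(): return None
def recognize_action(frame) -> str: return "sit"  # stand-in image model
def give_snack() -> None: print("dispense snack")

def run_training(command: str, target_action: str, mastery: int = 3) -> None:
    streak = 0
    while streak < mastery:
        play_audio(f"{PET_NAME}, {command}")       # call the pet, issue command
        play_video(target_action)                  # demonstrate correct action
        if recognize_action(capture_image()) == target_action:
            play_audio("good!"); give_snack()      # positive feedback
            streak += 1
        else:
            play_audio("no, try again")            # negative feedback
            streak = 0                             # repeat until mastered

run_training("sit", "sit")
```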
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices, which are characterized by mobile communication capability and are primarily aimed at providing voice and data communication. Such terminals include smart phones (e.g., the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices, which belong to the category of personal computers, have computing and processing functions, and generally support mobile Internet access. Such terminals include PDA, MID and UMPC devices, such as the iPad.
(3) Portable entertainment devices, which can display and play multimedia content. Such devices include audio and video players (e.g., the iPod), handheld game consoles, electronic books, smart toys and portable vehicle navigation devices.
(4) Servers, which are similar in architecture to general-purpose computers but have higher requirements for processing capability, stability, reliability, security, scalability and manageability, since they must provide highly reliable services.
(5) Other electronic devices with data interaction functions, such as televisions and large vehicle-mounted screens.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various embodiments or some parts of the embodiments.
Example 5
According to a fifth aspect, an embodiment of the present application provides a computer storage medium on which a computer program is stored; when the computer program is executed by a processor, the methods described in the above embodiments are implemented.
Further, the electronic device includes one or more processors and a memory; one processor is taken as an example. The device may also include an input device and an output device.
The processor, the memory, the input device and the output device may be connected by a bus or in other ways; connection by a bus is taken as an example here.
The memory, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the pet recognition method in the embodiments of the present application. By running the non-volatile software programs, instructions and modules stored in the memory, the processor executes the various functional applications and data processing of the server, thereby implementing the smart pet training and companion method of the above method embodiments.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the pet identification device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input device may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. The output device may include a display device such as a display screen.
One or more modules are stored in the memory and, when executed by the one or more processors, perform the smart pet training and companion method in any of the above method embodiments.
The above product can execute the methods provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects. For technical details not described in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (14)

1. A smart pet training method, comprising:
S11: prestoring video information and voice information corresponding to the pet to be trained, wherein the voice information comprises the pet's name, instruction information and training feedback, and the instruction information in the voice information is matched with the pet posture actions in the video information;
S12: receiving the user's selection of the video information, the voice information, the pet and the pet's state, and presetting or remotely setting the playing time and the number of plays of the video information and the voice information;
S13: playing the corresponding voice information, calling the pet and issuing the instruction information, playing the video information of the corresponding pet posture action, and guiding the pet to pay attention to and watch the video information and learn the pet posture action;
S14: synchronously acquiring image information through the image acquisition device, inputting the image information into a pre-trained image recognition model, and recognizing the pet posture action;
S15: matching the recognition result with the pet posture action corresponding to the instruction information; if they match, outputting positive feedback, otherwise outputting negative feedback;
S16: repeatedly executing steps S13-S15 to guide and train the pet's daily habits until the pet has skillfully mastered the corresponding instruction action.
2. The smart pet training method as claimed in claim 1, wherein in step S14, the image information is input into a pre-trained image recognition model, and the method for recognizing the pet posture action comprises:
inputting the acquired image information into an image recognition model,
detecting the position of the pet in the image by using a computer vision target detection algorithm,
identifying the key points of the pet by using a computer vision key point algorithm,
analyzing the posture of the pet by using the pet key points,
and analyzing the action of the pet according to the change of the pet key points.
3. The smart pet training method of claim 1, wherein, before step S11, the method further comprises acquiring the video information and the voice information for training the pet, comprising the following steps:
collecting audio, video and image information including pets and pet owners;
inputting the audio information into a pre-trained voice recognition model, and recognizing the calling of the pet owner to the pet and the issued instruction information;
and inputting the video/image information into a pre-trained image recognition model, recognizing pet posture actions in the video/image information, analyzing and judging whether the pet posture actions accord with instruction information of a pet owner, and outputting feedback according to a judgment result.
4. The smart pet training method of claim 3, further comprising, before training the pet, establishing an association between pet posture actions and pet requirements, comprising the following steps:
presetting pet posture actions to be associated with pet requirements;
before the pet requirement is finished, playing corresponding voice information and video information, guiding the pet to learn, and repeating for many times until the pet associates the pet posture action with the pet requirement;
after the association is established, the posture action of the pet is recognized in real time, the pet requirement is obtained, corresponding feedback measures are made according to the pet requirement, and the feedback is given back to the pet owner.
5. The smart pet training method of claim 3, wherein training the pet comprises guiding the pet's daily habits, comprising the following steps:
presetting pet posture actions to be associated with pet habits;
executing step S14: synchronously acquiring image information through the image acquisition device and recognizing the pet posture action; judging whether the current pet posture action conforms to a preset pet habit;
if the current pet posture action is in accordance with the preset pet habit, positive feedback is output, otherwise negative feedback is output, and correct pet posture action is demonstrated by playing images and video information.
6. The smart pet training method of claim 5, wherein guiding the pet's daily habits further comprises a method of guiding the pet to understand the pet owner's intention, comprising:
recognizing voice information issued by the pet owner, converting the voice information into instruction information, and then playing the instruction information in video and image form; acquiring and recognizing the pet posture action; and if the pet posture action is judged to be consistent with the action played in the video and images, outputting positive feedback, otherwise outputting negative feedback.
7. The smart pet training method of claim 5, wherein guiding the pet's daily habits further comprises a method for guiding the pet to relieve itself in the correct place, comprising the following steps:
when step S14 is executed, image information is synchronously acquired through the image acquisition device, and a computer vision method is used to recognize the pet's defecation or urination posture in the image information, or to detect whether pet excrement or urine is present in the image information, so as to judge whether the pet has relieved itself;
and meanwhile, the position of the pet's excrement or urine is recognized and it is judged whether it is in the correct position; if so, positive feedback is output, otherwise negative feedback is output, and the correct position is prompted and guided by playing corresponding image information and voice information.
8. A smart pet companion method, characterized in that the method comprises:
S21: collecting information including audio, video and images of the pet;
S22: inputting the collected audio information into a preset sound recognition model, and extracting the pet calls; inputting the video and image information into a preset image recognition model, and recognizing information including pet posture actions and positions;
S23: inputting the pet calls into a preset emotion recognition model, and obtaining a plurality of pet emotion information of the pet through analysis;
S24: adjusting and confirming the emotion information of the pet in step S23 according to the posture action and the position of the pet;
S25: transmitting the pet emotion to the external equipment, so that the pet owner is reminded, and the pet owner can interact with the pet and perform corresponding remote operation control according to the pet emotion information.
9. The smart pet companion method as claimed in claim 8, wherein the correspondence between the pet's emotion and time is established before step S24 is performed; and in step S24, when the pet's emotion is determined, the occurrence time of the emotion and the video and audio information before and after it are transmitted to the external equipment, prompting the pet owner.
10. A smart pet training and companion device, characterized in that it comprises: a device body, wherein a main control unit, an acquisition unit, a processing unit, an output unit, a feeding unit and a moving unit are arranged on the device body;
the acquisition unit is connected with the processing unit, and the main control unit is respectively connected with the processing unit, the output unit and the feeding unit;
the acquisition unit is used for acquiring audio, video and image information including pets and/or pet owners and transmitting the information to the processing unit;
the processing unit is used for processing the video and image information according to a computer vision algorithm and recognizing information including the pet's posture, action and position in the video and image information; the processing unit is also used for processing with audio recognition and a natural language algorithm, recognizing the audio information of the pet and/or the pet owner, and analyzing information including the pet's emotion and the pet owner's instructions; and then transmitting the recognition results to the main control unit;
the main control unit is used for receiving the recognition results, controlling the output unit to output audio and/or video and/or image information according to the recognition results, and controlling the feeding unit to dispense food;
the moving unit is connected with the main control unit and is used for receiving the movement information sent by the main control unit and controlling the moving direction and distance of the device body.
11. The device as claimed in claim 10, wherein the collecting unit comprises an image collecting unit and a sound collecting unit, the image collecting unit is connected to the processing unit for collecting image information including the pet and transmitting the image information to the processing unit; the sound collection unit is connected with the processing unit and used for collecting audio information including pets and/or pet owners and transmitting the audio information to the processing unit.
12. The device as claimed in claim 10, wherein the output unit comprises a display unit and a sound output unit, the display unit is connected to the main control unit for playing image information or video information; the sound output unit is connected with the main control unit and used for playing audio information.
13. An electronic device, comprising: one or more processors and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7 or 8-9.
14. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-7 or 8-9.
CN202010381854.4A 2020-05-08 2020-05-08 Smart pet training and accompanying method, device, equipment and storage medium Active CN111597942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010381854.4A CN111597942B (en) 2020-05-08 2020-05-08 Smart pet training and accompanying method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111597942A true CN111597942A (en) 2020-08-28
CN111597942B CN111597942B (en) 2023-04-18

Family

ID=72186835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010381854.4A Active CN111597942B (en) 2020-05-08 2020-05-08 Smart pet training and accompanying method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111597942B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018113405A1 (en) * 2016-12-19 2018-06-28 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream, and corresponding apparatus thereof
CN109169365A (en) * 2018-08-13 2019-01-11 中国联合网络通信集团有限公司 The guidance system of pet behavior
CN109287511A (en) * 2018-09-30 2019-02-01 中山乐心电子有限公司 The method, apparatus of training pet control equipment and the wearable device of pet

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DENG, Qisheng; XING, Yuhu; LIU, Xu; ZHAI, Jiaxing: "Design of a smart pet collar based on the STM32 microcontroller" *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866192A (en) * 2020-09-24 2020-10-30 汉桑(南京)科技有限公司 Pet interaction method, system and device based on pet ball and storage medium
CN112188296A (en) * 2020-09-28 2021-01-05 深圳创维-Rgb电子有限公司 Interaction method, device, terminal and television
WO2022138839A1 (en) * 2020-12-24 2022-06-30 Assest株式会社 Animal intention determination program
WO2022213974A1 (en) * 2021-04-06 2022-10-13 蚂蚁胜信(上海)信息技术有限公司 Auxiliary image capture methods and apparatuses for pets
CN113728941A (en) * 2021-08-31 2021-12-03 惠州市惠城区健生生态农业基地有限公司 Method and system for intelligently domesticating pet dog
CN113728941B (en) * 2021-08-31 2023-10-17 惠州市惠城区健生生态农业基地有限公司 Intelligent pet dog domestication method and system
CN114145242A (en) * 2021-10-12 2022-03-08 西安七微秒信息技术有限公司 Cloud-based police dog training management method, system, equipment and storage medium
CN114128671A (en) * 2021-11-26 2022-03-04 广州鼎飞航空科技有限公司 Animal training method and control device
WO2023120675A1 (en) * 2021-12-23 2023-06-29 アニコム ホールディングス株式会社 Emotion determination system and method for determining emotion
JP7330258B2 (en) 2021-12-23 2023-08-21 アニコム ホールディングス株式会社 Emotion determination system and emotion determination method

Also Published As

Publication number Publication date
CN111597942B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111597942B (en) Smart pet training and accompanying method, device, equipment and storage medium
KR102334942B1 (en) Data processing method and device for caring robot
CN101833877B (en) Enlightening education method for preschool child
CN110091335B (en) Method, system, device and storage medium for controlling learning partner robot
US10334823B2 (en) Functional communication lexigram device and training method for animal and human
CN107146177A (en) A kind of tutoring system and method based on artificial intelligence technology
CN108919950A (en) Autism children based on Kinect interact device for image and method
CN112469269A (en) Method for autonomously training animals to respond to oral commands
CN106409030A (en) Customized foreign spoken language learning system
CN105126355A (en) Child companion robot and child companioning system
KR102174198B1 (en) Apparatus and Method for Communicating with a Pet based on Internet of Things(IoT), User terminal therefor
JP7106159B2 (en) Image education system and learning support method using artificial intelligence technology
US11832589B2 (en) Animal communication assistance system
US11033002B1 (en) System and method for selecting and executing training protocols for autonomously training an animal
US10178854B1 (en) Method of sound desensitization dog training
US20160066547A1 (en) Assisted Animal Activities
CN107993319B (en) Kindergarten intelligent teaching and management system
Heath et al. Improving communication skills of children with autism through support of applied behavioral analysis treatments using multimedia computing: a survey
US9485963B2 (en) Assisted animal activities
JP2017223812A (en) Apparatus control system, apparatus control method and control program
CN113728941B (en) Intelligent pet dog domestication method and system
CN110197103A (en) A kind of method and device that people interacts with animal
CN114432683A (en) Intelligent voice playing method and equipment
US11445704B1 (en) Sound button device
KR102366054B1 (en) Healing system using equine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant