CN116246662A - Pacifying method, pacifying device, pacifying system and storage medium - Google Patents

Pacifying method, pacifying device, pacifying system and storage medium

Info

Publication number
CN116246662A
CN116246662A (application CN202111493870.3A)
Authority
CN
China
Prior art keywords
pacifying
target object
target
audio data
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111493870.3A
Other languages
Chinese (zh)
Inventor
常小俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Smart Home Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202111493870.3A
Publication of CN116246662A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00, specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00, specially adapted for particular use for comparison or discrimination
    • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00, specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a pacifying method, a pacifying device, a pacifying system and a storage medium. The pacifying method comprises the following steps: acquiring first information of a target object; performing emotion recognition on the first information to obtain emotion information of the target object; and executing a target pacifying strategy when the emotion information of the target object is first emotion information, the first emotion information being emotion information that requires pacifying. The invention enables the target object to be pacified in a timely and effective manner.

Description

Pacifying method, pacifying device, pacifying system and storage medium
Technical Field
The invention relates to the technical field of intelligent household appliances, in particular to a pacifying method, a pacifying device, a pacifying system and a storage medium.
Background
When a child is at home and the parents are away, the child can be monitored and checked through a camera. When the child shows a negative emotion (for example, crying) and wants the mother's comfort, common practice is for the guardian beside the child to call or video-call the mother, who then pacifies the child over the call or video, or rushes home to pacify the child in person. Such remote monitoring through a camera is time-consuming and labor-intensive, the pacifying effect is poor, and the parents may be busy, so the infant cannot be pacified in time.
Disclosure of Invention
The invention provides a pacifying method, a pacifying device, a pacifying system and a storage medium, which are used for overcoming the defects in the prior art that remote monitoring is time-consuming and labor-intensive and that an infant cannot be pacified in time.
The invention provides a pacifying method which is applied to first electronic equipment and comprises the following steps:
acquiring first information of a target object;
carrying out emotion recognition on the first information to obtain emotion information of the target object;
and executing a target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
According to the pacifying method provided by the invention, executing the target pacifying strategy comprises the following steps:
playing the first audio data to pacify the target object;
wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of the user or voiceprint data of a screen character favored by the target object.
According to the pacifying method provided by the invention, executing the target pacifying strategy comprises the following steps:
playing the first video data to pacify the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
According to the pacifying method provided by the invention, executing the target pacifying strategy comprises the following steps:
playing the first audio data to pacify the target object;
after the first audio data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, the first video data is played;
or, alternatively,
playing the first video data to pacify the target object;
after the first video data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, first audio data is played;
wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of a user or voiceprint data of a screen character favored by the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
According to the pacifying method provided by the invention, the first information comprises a face image and/or voiceprint data.
According to the pacifying method provided by the invention, executing the target pacifying strategy comprises the following steps:
determining a target pacifying strategy;
generating a pacifying instruction according to the target pacifying strategy, and sending the pacifying instruction to at least one second electronic device so that each second electronic device executes a corresponding pacifying strategy according to the pacifying instruction;
the pacifying instructions are used for instructing each second electronic device to execute corresponding pacifying strategies.
According to the pacifying method provided by the invention, the second electronic device comprises a speaker or a television.
The invention also provides a pacifying device, comprising:
an acquisition unit configured to acquire first information of a target object;
the emotion recognition unit is used for carrying out emotion recognition on the first information to obtain emotion information of the target object;
and the execution unit is used for executing the target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
The present invention also provides a pacifying system comprising: a pacifying device as described above.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the pacifying method as described in any one of the above when the program is executed.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the pacifying method as described in any of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of a pacifying method as described in any one of the above.
According to the pacifying method, device and system and the storage medium, the first information of the target object is acquired, emotion recognition is carried out on the first information, so that the emotion information of the target object is obtained, and the target pacifying strategy is executed under the condition that the emotion information of the target object is the emotion information needing pacifying, so that timely and effective pacifying of the target object can be achieved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a pacifying method provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of a pacifying device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a pacifying system according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present invention may be implemented in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. generally denote one class of objects and do not limit the number of objects; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The pacifying method, the pacifying device, the pacifying system and the readable storage medium provided by the embodiment of the invention are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
FIG. 1 is a schematic flow chart of a pacifying method according to an embodiment of the present invention, as shown in FIG. 1, the method includes the following steps:
step 100, obtaining first information of a target object;
it should be noted that, the pacifying method provided in this embodiment may be applied to the first electronic device, and may be specifically executed by hardware or software in the first electronic device.
Optionally, the first electronic device is a smart device in a home environment, where the smart device refers to a device having a processor and a communication interface.
Optionally, the first electronic device obtains the first information of the target object through a shooting device outside the first electronic device.
The shooting device can be arranged in the living range of the target object and used for collecting first information of the target object.
Optionally, the first electronic device includes a photographing device, and the first electronic device acquires the first information of the target object through the photographing device disposed thereon.
Optionally, the first information includes a face image and/or voiceprint data.
Step 101, carrying out emotion recognition on the first information to obtain emotion information of the target object;
and the first electronic equipment carries out emotion recognition based on the acquired face image and/or voiceprint data of the target object to obtain emotion information of the target object.
The emotion information of the target object includes emotions such as anger, crying, happiness, and the like.
The target object can be subjected to emotion recognition by adopting a related image recognition or voiceprint recognition technology, and the invention is not limited to a specific emotion recognition method.
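Purely by way of illustration, and not as part of the claimed method, steps 100 and 101 could be sketched in Python as follows. The face_model and voice_model interfaces (anything exposing a predict() method returning emotion scores), the capture fields and the emotion label set are assumptions introduced only for this sketch:

    # Illustrative sketch only; the recognition models are hypothetical stand-ins
    # for whatever image or voiceprint recognition technique is actually used.
    from dataclasses import dataclass
    from typing import Optional

    EMOTIONS_NEEDING_PACIFYING = {"crying", "anger"}  # the "first emotion information"

    @dataclass
    class FirstInformation:
        face_image: Optional[bytes] = None   # frame captured by the shooting device
        voiceprint: Optional[bytes] = None   # audio clip captured by a microphone

    def recognize_emotion(info: FirstInformation, face_model, voice_model) -> str:
        """Step 101: fuse face and voiceprint predictions into one emotion label."""
        scores: dict = {}
        if info.face_image is not None and face_model is not None:
            for emotion, p in face_model.predict(info.face_image).items():
                scores[emotion] = scores.get(emotion, 0.0) + p
        if info.voiceprint is not None and voice_model is not None:
            for emotion, p in voice_model.predict(info.voiceprint).items():
                scores[emotion] = scores.get(emotion, 0.0) + p
        return max(scores, key=scores.get) if scores else "unknown"

    def needs_pacifying(emotion: str) -> bool:
        """Step 102 precondition: is this emotion information of the first kind?"""
        return emotion in EMOTIONS_NEEDING_PACIFYING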
Step 102, executing a target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
In other words, after the first electronic device recognizes the emotion information of the target object, if the emotion information of the target object is the first emotion information, that is, belongs to a class of emotion information that needs to be pacified, the first electronic device executes the target pacifying policy.
In some embodiments, the first mood information is a type of mood information that requires pacifying, such as crying, anger, and the like. It can be appreciated that when the first electronic device recognizes that the emotion information of the target object is emotion information that does not need to be pacified, a pacifying policy is not executed.
It should be noted that the target object is set by the user.
In some embodiments, the method further comprises:
receiving face information and/or voiceprint data of the target object input by a user;
and storing the face information and/or voiceprint data of the target object.
It can be understood that, in order to realize timely pacifying the target object, the first electronic device needs to obtain the face information and/or voiceprint data of the target object, so as to realize real-time tracking and emotion recognition of the target object according to the face information and/or voiceprint data of the target object.
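As a further non-limiting sketch, receiving and storing the face information and/or voiceprint data of the target object could be implemented along the following lines; the storage location and file layout are assumptions made only for illustration:

    import json
    import pathlib
    from typing import Optional

    ENROLLMENT_DIR = pathlib.Path("/data/pacifying/targets")  # assumed location

    def enroll_target(target_id: str,
                      face_info: Optional[bytes] = None,
                      voiceprint: Optional[bytes] = None) -> None:
        """Persist the user-supplied face information and/or voiceprint data so that
        the device can later track the target object and recognize its emotion."""
        target_dir = ENROLLMENT_DIR / target_id
        target_dir.mkdir(parents=True, exist_ok=True)
        if face_info is not None:
            (target_dir / "face.bin").write_bytes(face_info)
        if voiceprint is not None:
            (target_dir / "voiceprint.bin").write_bytes(voiceprint)
        (target_dir / "meta.json").write_text(json.dumps({"target_id": target_id}))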
Optionally, the first electronic device executing the target pacifying policy includes: the first electronic device plays the first audio data and/or the first video data.
In the embodiment of the invention, the first information of the target object is obtained, emotion recognition is carried out on the first information, so that the emotion information of the target object is obtained, and the target pacifying strategy is executed under the condition that the emotion information of the target object is the emotion information needing pacifying, so that timely and effective pacifying of the target object can be realized.
The first electronic device executes a target pacifying strategy, including the following ways:
mode one:
In some alternative embodiments, executing the target pacifying policy includes:
playing the first audio data to pacify the target object;
the first audio data includes data in the form of sentences, songs, stories, etc.
Wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of the user or voiceprint data of a screen character favored by the target object.
It is understood that the first electronic device may play the first audio data for the purpose of pacifying the target object.
The first audio data may be audio data pre-recorded by a user for pacifying the target object.
Here, the user may be the mother or father of the target object, or another person with whom the target object is familiar.
For example, the first audio data is a pacifying sentence or song pre-recorded by the mother.
Alternatively, the first audio data may be audio data determined according to preference of the target object.
For example, if the target object prefers white noise or prenatal-education music, that audio is played as the first audio data, thereby pacifying the target object.
Alternatively, the first audio data may be audio data synthesized based on the second audio data of the user. Here, the user may be the mother or father of the target object, or another person with whom the target object is familiar.
For example, the first audio data is synthesized from the second audio data of the mother. For another example, the first audio data is a song synthesized with the audio data of the mother.
Alternatively, the first audio data may be audio data synthesized based on voiceprint data of a screen character favored by the target object.
For example, if the target object's favorite screen character is a particular cartoon pig character, audio data synthesized from that character's voiceprint data, such as an audio story told in that character's voice, may be played.
In the embodiment of the invention, under the condition that the emotion information of the target object is the emotion information needing to be pacified, the first audio data is played, so that the target object can be pacified effectively in time.
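Purely as an illustrative sketch, and not as a definitive implementation, choosing the first audio data from the three sources listed above could be written as follows; the priority order and the parameter names are assumptions:

    from typing import Optional

    def choose_first_audio(prerecorded: Optional[str],
                           preference_track: Optional[str],
                           synthesized: Optional[str]) -> Optional[str]:
        """Return a path or URI for the first audio data.

        The priority used here is an assumption: audio the user pre-recorded for
        pacifying is tried first, then audio matching the target object's preference
        (e.g. white noise or prenatal-education music), then audio synthesized from
        the user's voice or a favorite screen character's voiceprint."""
        for candidate in (prerecorded, preference_track, synthesized):
            if candidate:
                return candidate
        return None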
Mode two:
In some alternative embodiments, executing the target pacifying policy includes:
playing the first video data to pacify the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
Optionally, the video data determined according to the preference of the target object may be determined from recently played or favorite video records stored in the first electronic device.
The first video data may also be video data preset by the user for pacifying the target object.
For example, a cartoon that the mother knows the child loves may be preset on the first electronic device as the video data for pacifying the target object.
In the embodiment of the invention, under the condition that the emotion information of the target object is the emotion information needing to be pacified, the first video data is played, so that the target object can be pacified effectively in time.
Mode three:
In some alternative embodiments, executing the target pacifying policy includes:
playing the first audio data to pacify the target object;
after the first audio data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, the first video data is played;
or, alternatively,
playing the first video data to pacify the target object;
after the first video data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, first audio data is played;
wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of a user or voiceprint data of a screen character favored by the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
It will be appreciated that the target object may be pacified by playing the first audio data and the first video data cooperatively.
For example, if the first emotion information is crying, the target pacifying policy is to play pacifying audio in the mother's voice; if, after the audio finishes, the target object is detected to still be crying, a cartoon or other video the child loves is played, thereby realizing cooperative pacifying with sound and pictures.
Or, if the first emotion information is crying, the target pacifying strategy is to play the child's favorite cartoon or other favorite video; if, after the playing is finished, the target object is detected to still be crying, pacifying audio in the mother's voice is played, thereby realizing cooperative pacifying with sound and pictures.
In the embodiment of the invention, under the condition that the emotion information of the target object is the emotion information needing to be pacified, the first video data and the first audio data are cooperatively played, so that the target object can be pacified timely and effectively.
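A minimal, non-authoritative sketch of this audio-then-video fallback is given below; the play_audio, play_video and current_emotion callables and the preset duration are assumptions standing in for the actual playback and recognition components:

    import time

    def pacify_audio_then_video(first_audio: str, first_video: str,
                                play_audio, play_video, current_emotion,
                                preset_seconds: float = 60.0) -> None:
        """Mode three: play the first audio data; if, after the preset time period,
        the target object's emotion information is unchanged, fall back to the video."""
        emotion_before = current_emotion()
        play_audio(first_audio)
        time.sleep(preset_seconds)               # wait the preset time period
        if current_emotion() == emotion_before:  # emotion information unchanged
            play_video(first_video)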
Mode four:
In some alternative embodiments, executing the target pacifying policy includes:
determining a target pacifying strategy;
generating a pacifying instruction according to the target pacifying strategy, and sending the pacifying instruction to at least one second electronic device so that each second electronic device executes a corresponding pacifying strategy according to the pacifying instruction;
the pacifying instructions are used for instructing each second electronic device to execute corresponding pacifying strategies.
In embodiments of the present application, the first electronic device includes, but is not limited to, a mobile phone, a tablet computer, a wearable device, or another portable communication device having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad). It should also be appreciated that in some embodiments, the first electronic device may not be a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad).
It should be understood that the first electronic device may include one or more other physical user interface devices such as a physical keyboard, mouse, and joystick.
The first electronic equipment acquires first information of a target object through a shooting device arranged in the living environment of the target object, and carries out emotion recognition on the first information of the target object. And in the case that the emotion information of the target object is the first emotion information, the first electronic device determines a target pacifying strategy. The first electronic equipment generates a pacifying instruction according to the target pacifying strategy, and sends the pacifying instruction to at least one second electronic equipment, so that each second electronic equipment executes a corresponding pacifying strategy according to the pacifying instruction; the pacifying instructions are used for instructing each second electronic device to execute corresponding pacifying strategies.
Optionally, the second electronic device includes a speaker or a television.
It is understood that the second electronic device refers to a device in the smart home environment that is capable of responding to a control instruction of the first electronic device, for example, a speaker, a television, and the like.
It is understood that the target pacifying policy corresponds to the first mood information.
According to the target pacifying policy, at least one second electronic device and a target pacifying operation which the at least one second electronic device needs to execute can be determined, so that pacifying instructions are generated and sent.
For example, if the first emotion information is anger, the target pacifying policy is to play pacifying audio in the mother's voice through a speaker. According to the target pacifying strategy, the second electronic device is determined to be the speaker and the pacifying strategy it needs to execute is to play the first audio data, such as the pacifying audio in the mother's voice. The first electronic device then generates a pacifying instruction instructing the speaker to play that audio, thereby realizing pacifying by sound.
For another example, if the first emotion information is crying, the target pacifying policy is to play the child's favorite cartoon or other favorite video through a television. According to the target pacifying strategy, the second electronic device is determined to be the television and the pacifying strategy it needs to execute is to play the first video data, such as the favorite cartoon. The first electronic device generates a pacifying instruction instructing the television to play the cartoon, thereby realizing pacifying by pictures.
For another example, if the first emotion information is crying, the target pacifying policy is to ask, through the speaker, the voice query "Do you want to watch the cartoon?", and, when the target object is detected to reply "I want to watch", to play the cartoon through the television. According to the target pacifying strategy, the at least one second electronic device is determined to be the speaker and the television; the pacifying strategy to be executed by the speaker is to issue the voice query and that of the television is to play the cartoon. The first electronic device accordingly generates pacifying instructions instructing the speaker to issue the voice query and the television to play the cartoon, thereby realizing cooperative pacifying with sound and pictures.
For another example, if the first emotion information is crying, the target pacifying policy is to play pacifying audio in the mother's voice through the speaker and, if the target object is detected to still be crying after the audio finishes, to play the child's favorite cartoon or other favorite video through the television. According to the target pacifying strategy, the second electronic devices are determined to be the speaker and the television; the pacifying strategy to be executed by the speaker is to play the pacifying audio in the mother's voice, and that of the television is to play the cartoon or other video. The first electronic device accordingly generates pacifying instructions instructing the speaker to play the pacifying audio and the television to play the cartoon or other video, thereby realizing cooperative pacifying with sound and pictures.
In this embodiment, when the emotion information of the target object is the first emotion information, the first electronic device determines a target pacifying policy, generates a pacifying instruction according to the target pacifying policy, and sends the pacifying instruction to at least one second electronic device, so that each second electronic device executes the corresponding pacifying policy, and timely and effective pacifying of the target object can be achieved.
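To make the above concrete, the following is only a sketch of how pacifying instructions might be generated and sent to the second electronic devices; the JSON payload format, the device identifiers and the send_command transport are assumptions, not a protocol defined by the invention:

    import json
    from typing import Callable, Dict, List

    def build_pacifying_instructions(first_emotion: str) -> List[Dict]:
        """Map the first emotion information to per-device pacifying instructions
        (the mapping below is only an example)."""
        if first_emotion == "anger":
            return [{"device": "speaker", "action": "play_audio",
                     "content": "mother_voice_soothing.mp3"}]
        if first_emotion == "crying":
            return [{"device": "speaker", "action": "play_audio",
                     "content": "mother_voice_soothing.mp3"},
                    {"device": "tv", "action": "play_video",
                     "content": "favorite_cartoon.mp4"}]
        return []

    def dispatch(instructions: List[Dict],
                 send_command: Callable[[str, str], None]) -> None:
        """Send each pacifying instruction to its second electronic device;
        send_command(device_id, payload) stands in for the home-network call."""
        for instruction in instructions:
            send_command(instruction["device"], json.dumps(instruction))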
Fig. 2 is a schematic structural diagram of a pacifying device according to an embodiment of the present invention. As shown in Fig. 2, the pacifying device includes: an acquisition unit 210, an emotion recognition unit 220, and an execution unit 230, wherein,
an acquiring unit 210, configured to acquire first information of a target object;
in some embodiments, the first information includes a face image and/or voiceprint data.
An emotion recognition unit 220, configured to perform emotion recognition on the first information to obtain emotion information of the target object;
the emotion recognition unit 220 performs emotion recognition based on the acquired face image and/or voiceprint data of the target object, and obtains emotion information of the target object.
The emotion information of the target object includes emotions such as anger, crying, happiness, and the like.
The emotion recognition unit 220 may perform emotion recognition on the target object using a related image recognition or voiceprint recognition technique, and the present invention is not limited to a specific emotion recognition method.
And an execution unit 230, configured to execute a target pacifying policy if the emotion information of the target object is first emotion information, where the first emotion information is emotion information that needs pacifying.
In some embodiments, the first emotion information is a type of emotion information that requires pacifying, such as crying, anger, and the like. It can be appreciated that when the first electronic device recognizes that the emotion information of the target object is emotion information that does not need pacifying, no pacifying policy is executed.
It should be noted that the target object is set by the user.
In some embodiments, the apparatus further comprises:
the receiving module is used for receiving face information and/or voiceprint data of the target object input by a user;
and the storage module is used for storing the face information and/or voiceprint data of the target object.
It can be understood that, in order to realize timely pacifying the target object, the pacifying device needs to obtain the face information and/or voiceprint data of the target object, so as to realize real-time tracking and emotion recognition of the target object according to the face information and/or voiceprint data of the target object.
In the embodiment of the invention, the first information of the target object is obtained, emotion recognition is carried out on the first information, so that the emotion information of the target object is obtained, and the target pacifying strategy is executed under the condition that the emotion information of the target object is the emotion information needing pacifying, so that timely and effective pacifying of the target object can be realized.
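As one possible, purely hypothetical arrangement of the three units shown in Fig. 2, the acquisition unit, the emotion recognition unit and the execution unit could be composed as follows; injecting each unit as a callable is an assumption of this sketch rather than a requirement of the invention:

    from typing import Callable, Set

    class PacifyingDevice:
        """Composes the three units of Fig. 2: acquisition (210), emotion
        recognition (220) and execution (230). The wiring is illustrative only."""

        def __init__(self,
                     acquire: Callable[[], object],            # acquisition unit
                     recognize: Callable[[object], str],       # emotion recognition unit
                     execute_strategy: Callable[[str], None],  # execution unit
                     emotions_needing_pacifying: Set[str]):
            self.acquire = acquire
            self.recognize = recognize
            self.execute_strategy = execute_strategy
            self.emotions_needing_pacifying = emotions_needing_pacifying

        def run_once(self) -> None:
            first_information = self.acquire()
            emotion = self.recognize(first_information)
            if emotion in self.emotions_needing_pacifying:     # first emotion information
                self.execute_strategy(emotion)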
In some embodiments, executing the target pacifying policy includes:
playing the first audio data to pacify the target object;
wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of the user or voiceprint data of a screen character favored by the target object.
In the embodiment of the invention, under the condition that the emotion information of the target object is the emotion information needing to be pacified, the first audio data is played, so that the target object can be pacified effectively in time.
In some embodiments, executing the target pacifying policy includes:
playing the first video data to pacify the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
In the embodiment of the invention, under the condition that the emotion information of the target object is the emotion information needing to be pacified, the first video data is played, so that the target object can be pacified effectively in time.
In some embodiments, executing the target pacifying policy includes:
playing the first audio data to pacify the target object;
after the first audio data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, the first video data is played;
or, alternatively,
playing the first video data to pacify the target object;
after the first video data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, first audio data is played;
wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of a user or voiceprint data of a screen character favored by the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
In the embodiment of the invention, under the condition that the emotion information of the target object is the emotion information needing to be pacified, the first video data and the first audio data are cooperatively played, so that the target object can be pacified timely and effectively.
In some embodiments, executing the target pacifying policy includes:
determining a target pacifying strategy;
generating a pacifying instruction according to the target pacifying strategy, and sending the pacifying instruction to at least one second electronic device so that each second electronic device executes a corresponding pacifying strategy according to the pacifying instruction;
the pacifying instructions are used for instructing each second electronic device to execute corresponding pacifying strategies.
The second electronic device comprises a speaker or a television.
In embodiments of the present application, the pacifying device includes, but is not limited to, a mobile phone, a tablet computer, a wearable device, or another portable communication device having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad). It should also be appreciated that in some embodiments, the pacifying device may not be a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad).
It should be appreciated that the pacifying device may include one or more other physical user interface devices such as a physical keyboard, a mouse, and a joystick.
The pacifying device obtains first information of a target object through a shooting device arranged in the living environment of the target object, and carries out emotion recognition on the first information of the target object. In the case that the emotion information of the target object is the first emotion information, the pacifying device determines a target pacifying strategy, generates a pacifying instruction according to the target pacifying strategy, and sends the pacifying instruction to at least one second electronic device, so that each second electronic device executes a corresponding pacifying strategy according to the pacifying instruction; the pacifying instructions are used for instructing each second electronic device to execute the corresponding pacifying strategy.
Optionally, the second electronic device includes a speaker or a television.
It is understood that the second electronic device refers to a device in the smart home environment that is capable of responding to a control instruction of the first electronic device, for example, a speaker, a television, and the like.
It is understood that the target pacifying policy corresponds to the first mood information.
According to the target pacifying policy, at least one second electronic device and a target pacifying operation which the at least one second electronic device needs to execute can be determined, so that pacifying instructions are generated and sent.
For example, if the first emotion information is anger, the target pacifying policy is to play pacifying audio in the mother's voice through a speaker. According to the target pacifying strategy, the second electronic device is determined to be the speaker and the pacifying strategy it needs to execute is to play the first audio data, such as the pacifying audio in the mother's voice. The pacifying device then generates a pacifying instruction instructing the speaker to play that audio, thereby realizing pacifying by sound.
For another example, if the first emotion information is crying, the target pacifying policy is to play the child's favorite cartoon or other favorite video through a television. According to the target pacifying strategy, the second electronic device is determined to be the television and the pacifying strategy it needs to execute is to play the first video data, such as the favorite cartoon. The pacifying device generates a pacifying instruction instructing the television to play the cartoon, thereby realizing pacifying by pictures.
For another example, if the first emotion information is crying, the target pacifying policy is to ask, through the speaker, the voice query "Do you want to watch the cartoon?", and, when the target object is detected to reply "I want to watch", to play the cartoon through the television. According to the target pacifying strategy, the at least one second electronic device is determined to be the speaker and the television; the pacifying strategy to be executed by the speaker is to issue the voice query and that of the television is to play the cartoon. The pacifying device accordingly generates pacifying instructions instructing the speaker to issue the voice query and the television to play the cartoon, thereby realizing cooperative pacifying with sound and pictures.
For another example, if the first emotion information is crying, the target pacifying policy is to play pacifying audio in the mother's voice through the speaker and, if the target object is detected to still be crying after the audio finishes, to play the child's favorite cartoon or other favorite video through the television. According to the target pacifying strategy, the second electronic devices are determined to be the speaker and the television; the pacifying strategy to be executed by the speaker is to play the pacifying audio in the mother's voice, and that of the television is to play the cartoon or other video. The pacifying device accordingly generates pacifying instructions instructing the speaker to play the pacifying audio and the television to play the cartoon or other video, thereby realizing cooperative pacifying with sound and pictures.
In this embodiment, the pacifying device determines the target pacifying policy under the condition that the emotion information of the target object is the first emotion information, generates the pacifying instruction according to the target pacifying policy, and sends the pacifying instruction to at least one second electronic device, so that each second electronic device executes the corresponding pacifying policy, and timely and effective pacifying of the target object can be achieved.
The embodiment of the invention also provides a pacifying system, which comprises the pacifying device in the embodiment.
The embodiment of the invention also provides a pacifying system, which comprises the first electronic equipment in the embodiment, the shooting device in the embodiment and at least one second electronic equipment, wherein the at least one second electronic equipment is used for executing a pacifying strategy according to the indication of the first electronic equipment.
As shown in fig. 3, the pacifying system includes a first electronic device 310, a camera 320, a speaker 330, and a television 340.
The pacifying system embodiment may refer to the respective implementation procedures and implementation manner of the first electronic device side method embodiment and the photographing device side method embodiment described above, and the same technical effects can be achieved.
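Purely as an illustrative sketch of the system of Fig. 3 (the identifiers and roles below are assumptions, not part of the disclosure), the first electronic device 310 could hold a simple description of the shooting device 320 and the second electronic devices 330 and 340 that it coordinates:

    # Hypothetical system description mirroring Fig. 3; identifiers are placeholders.
    PACIFYING_SYSTEM = {
        "first_electronic_device": {"id": "hub-310"},
        "shooting_device": {"id": "camera-320", "role": "acquire first information"},
        "second_electronic_devices": [
            {"id": "speaker-330", "role": "play first audio data"},
            {"id": "tv-340", "role": "play first video data"},
        ],
    }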
Fig. 4 illustrates a physical schematic diagram of an electronic device, as shown in fig. 4, which may include: processor 410, communication interface (Communications Interface) 420, memory 430 and communication bus 440, wherein processor 410, communication interface 420 and memory 430 communicate with each other via communication bus 440. Processor 410 may call logic instructions in memory 430 to perform a pacifying method comprising: acquiring first information of a target object; carrying out emotion recognition on the first information to obtain emotion information of the target object; and executing a target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program storable on a non-transitory computer-readable storage medium, the computer program, when executed by a processor, being capable of performing the pacifying method provided above, the method comprising:
acquiring first information of a target object; carrying out emotion recognition on the first information to obtain emotion information of the target object; and executing a target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the pacifying method provided above, the method comprising:
acquiring first information of a target object; carrying out emotion recognition on the first information to obtain emotion information of the target object; and executing a target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A pacifying method, applied to a first electronic device, comprising:
acquiring first information of a target object;
carrying out emotion recognition on the first information to obtain emotion information of the target object;
and executing a target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
2. The pacifying method of claim 1 wherein said executing a target pacifying policy comprises:
playing the first audio data to pacify the target object;
wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of the user or voiceprint data of a screen character favored by the target object.
3. The pacifying method of claim 1 wherein said executing a target pacifying policy comprises:
playing the first video data to pacify the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
4. The pacifying method of claim 1 wherein said executing a target pacifying policy comprises:
playing the first audio data to pacify the target object;
after the first audio data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, the first video data is played;
or, alternatively,
playing the first video data to pacify the target object;
after the first video data is played for a preset time period, if the emotion information of the target object is detected to be unchanged, first audio data is played;
wherein the first audio data includes one of:
audio data pre-recorded by the user for pacifying the target object;
audio data determined according to the preference of the target object;
audio data synthesized based on second audio data of a user or voiceprint data of a screen character favored by the target object;
wherein the first video data includes one of:
video data determined according to the preference of the target object;
video data preset by a user for pacifying the target object.
5. The pacifying method of claim 1 wherein said first information includes facial images and/or voiceprint data.
6. The pacifying method of claim 1 wherein said executing a target pacifying policy comprises:
determining a target pacifying strategy;
generating a pacifying instruction according to the target pacifying strategy, and sending the pacifying instruction to at least one second electronic device so that each second electronic device executes a corresponding pacifying strategy according to the pacifying instruction;
the pacifying instructions are used for instructing each second electronic device to execute corresponding pacifying strategies.
7. The pacifying method of claim 6 wherein said second electronic device comprises a speaker or television.
8. A pacifying device, comprising:
an acquisition unit configured to acquire first information of a target object;
the emotion recognition unit is used for carrying out emotion recognition on the first information to obtain emotion information of the target object;
and the execution unit is used for executing the target pacifying strategy under the condition that the emotion information of the target object is first emotion information, wherein the first emotion information is emotion information needing pacifying.
9. A pacifying system comprising a pacifying device according to claim 8.
10. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the pacifying method according to any one of claims 1 to 7.
CN202111493870.3A (priority date 2021-12-08, filing date 2021-12-08): Pacifying method, pacifying device, pacifying system and storage medium; Pending; published as CN116246662A

Priority Applications (1)

Application Number: CN202111493870.3A; Title: Pacifying method, pacifying device, pacifying system and storage medium (published as CN116246662A)

Applications Claiming Priority (1)

Application Number: CN202111493870.3A; Title: Pacifying method, pacifying device, pacifying system and storage medium (published as CN116246662A)

Publications (1)

Publication Number: CN116246662A; Publication Date: 2023-06-09

Family

ID=86624753

Family Applications (1)

Application Number: CN202111493870.3A; Title: Pacifying method, pacifying device, pacifying system and storage medium; Status: Pending (CN116246662A)

Country Status (1)

Country: CN; Publication: CN116246662A


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination