CN114024789A - Voice playing method based on working mode and intelligent household equipment - Google Patents


Info

Publication number
CN114024789A
CN114024789A
Authority
CN
China
Prior art keywords
voice
mode
output
text information
working mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111205827.2A
Other languages
Chinese (zh)
Inventor
高扬
高滔
李芸
郑彩杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinmao Green Building Technology Co Ltd
Original Assignee
Jinmao Green Building Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinmao Green Building Technology Co Ltd filed Critical Jinmao Green Building Technology Co Ltd
Priority to CN202111205827.2A
Publication of CN114024789A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2816Controlling appliance services of a home automation network by calling their functionalities
    • H04L12/282Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/22Interactive procedures; Man-machine interfaces
    • G10L17/24Interactive procedures; Man-machine interfaces the user being prompted to utter a password or a predefined phrase

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the invention provides a voice playing method based on a working mode, and a smart home device. The method can be applied to a smart home device that has both a voice collecting function and a voice playing function, and comprises the following steps: the smart home device first determines its current working mode and then detects whether a voice output event for that mode is triggered; when such an event is detected, the device determines the text information to be output and the target voiceprint feature corresponding to the current working mode and the voice output event, and plays the text information using the target voiceprint feature. Because both the output text and the voiceprint feature used to generate the voice output data are determined from the current working mode and the voice output event, the timbre of voice playback is enriched and the user's personalized requirements for the playback voice of a smart home device are met.

Description

Voice playing method based on working mode and intelligent household equipment
Technical Field
The invention relates to the technical field of intelligent home, in particular to a voice playing method based on a working mode and intelligent home equipment.
Background
With the continuous development of artificial intelligence and smart home technologies, more and more smart home devices can interact with users by voice: a user may control a smart home device with voice instructions, hold a conversation with it by voice, and so on.
When an existing smart home device receives a voice instruction from a user, it first recognizes the instruction, generates the corresponding output text information, and then converts that text into voice data for output. However, the voice of the converted data is one-size-fits-all and cannot meet users' personalized requirements.
Disclosure of Invention
In view of the above problems, a voice playing method based on a working mode and a smart home device are proposed that overcome, or at least partially solve, the above problems, including:
a voice playing method based on a working mode, applied to a smart home device that has a voice collecting function and a voice playing function, the method comprising the following steps:
determining a current working mode of the intelligent household equipment;
detecting whether a voice output event for a current operating mode is triggered;
when a voice output event aiming at the current working mode is detected to be triggered, determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event;
and playing the text information to be output by adopting the target voiceprint characteristics.
Optionally, playing the text information to be output by using the target voiceprint feature includes:
carrying out voice conversion on the text information to be output to generate voice data to be converted;
replacing the voiceprint characteristics in the voice data to be converted with target voiceprint characteristics to obtain target voice data;
and playing the target voice data.
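The three steps above (convert the text, replace the voiceprint, play) can be sketched as follows; the data type and helper functions are hypothetical stand-ins for a real TTS engine and voice-conversion module, not an API defined by the patent:

```python
from dataclasses import dataclass

# Hypothetical in-memory representation of synthesized speech: the audio
# samples plus the voiceprint (timbre) they were rendered with.
@dataclass
class VoiceData:
    samples: list      # placeholder for raw audio samples
    voiceprint: str    # identifier of the timbre in use

def text_to_speech(text: str) -> VoiceData:
    """Stand-in for a real TTS engine: renders text with a default timbre."""
    return VoiceData(samples=[len(text)], voiceprint="default")

def replace_voiceprint(data: VoiceData, target: str) -> VoiceData:
    """Swap the default timbre for the target voiceprint (step 2 above)."""
    return VoiceData(samples=data.samples, voiceprint=target)

def play_with_voiceprint(text: str, target_voiceprint: str) -> VoiceData:
    raw = text_to_speech(text)                          # 1. text -> voice data to be converted
    final = replace_voiceprint(raw, target_voiceprint)  # 2. obtain target voice data
    return final                                        # 3. hand off to the player
```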
Optionally, determining the current working mode of the smart home device includes:
receiving a mode determination instruction input by a user, and determining a current working mode according to the mode determination instruction;
or acquiring the current time of the intelligent household equipment, and determining the current working mode according to the current time.
Optionally, the current working mode includes a home mode, and detecting whether to trigger a voice output event for the current working mode includes:
when a home interaction instruction is received, judging to trigger a voice output event aiming at the home mode;
determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event, wherein the determining comprises the following steps:
identifying the home interaction instruction and generating text information to be output, which is matched with the identification result;
and acquiring a target voiceprint characteristic preset for the home mode.
Optionally, the current operating mode includes a sleep mode, and detecting whether to trigger a voice output event for the current operating mode includes:
when a sleep-aiding instruction is received, judging to trigger a voice output event aiming at a sleep mode;
determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event, wherein the determining comprises the following steps:
identifying the sleep-aiding instruction and generating text information to be output, which is matched with the identification result;
and acquiring a target voiceprint characteristic preset for the sleep mode.
Optionally, the current operating mode includes a reminder mode, and detecting whether to trigger a voice output event for the current operating mode includes:
when the current time of the intelligent household equipment reaches the preset time, judging to trigger a voice output event aiming at the reminding mode;
determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event, wherein the determining comprises the following steps:
acquiring text information to be output, which is pre-input according to preset time, and acquiring target voiceprint characteristics preset according to a reminding mode.
Optionally, the current working mode includes a security mode, and detecting whether to trigger a voice output event for the current working mode includes:
when an intrusion event exists in a preset area, judging to trigger a voice output event aiming at a security mode;
determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event, wherein the determining comprises the following steps:
acquiring text information to be output, which is pre-recorded aiming at an intrusion event, and acquiring target voiceprint characteristics preset aiming at a security mode.
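Each optional mode above pairs a trigger condition with a voiceprint preset for that mode. One way to hold this pairing is a lookup table; the mode names, trigger labels, and voiceprint identifiers below are illustrative, not taken from the patent:

```python
# Illustrative mapping from working mode to its voice-output trigger
# and the voiceprint preset for that mode.
MODE_CONFIG = {
    "home":     {"trigger": "home_interaction_instruction", "voiceprint": "user_preset_home"},
    "sleep":    {"trigger": "sleep_aid_instruction",        "voiceprint": "user_preset_sleep"},
    "reminder": {"trigger": "preset_time_reached",          "voiceprint": "user_preset_reminder"},
    "security": {"trigger": "intrusion_detected",           "voiceprint": "user_preset_security"},
}

def voiceprint_for(mode: str) -> str:
    """Look up the target voiceprint preset for the current working mode."""
    return MODE_CONFIG[mode]["voiceprint"]
```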
An embodiment of the invention further provides a smart home device, comprising:
a collecting unit: used for collecting instructions input by a user and sending them to the controller unit;
a storage unit: for storing target voiceprint features;
a controller unit: used for uploading the instruction input by the user to the voice generation cloud platform through the communication unit, and for receiving, via the communication unit, the voice data to be converted that is downloaded from the voice generation cloud platform; the voice generation cloud platform generates the text information to be output according to the instruction input by the user and generates the voice data to be converted based on that text information;
a communication unit: used for uploading the instruction input by the user to the voice generation cloud platform and downloading the voice data to be converted from it;
a speech synthesis unit: used for acquiring the voice data to be converted from the controller unit, acquiring the target voiceprint feature from the storage unit, and converting the voice data using the target voiceprint feature to generate the target voice data;
a voice playing unit: used for receiving and playing the target voice data sent by the speech synthesis unit.
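Taken together, the units above form a pipeline: the collecting unit captures the instruction, the controller and communication units exchange it with the voice generation cloud platform, the speech synthesis unit applies the stored voiceprint, and the voice playing unit outputs the result. A minimal sketch, with stub functions standing in for the cloud platform and the audio hardware (all names are illustrative):

```python
def cloud_generate(instruction: str) -> str:
    """Stub for the voice generation cloud platform: turns a user instruction
    into voice data to be converted (modeled here as a plain string)."""
    return f"tts({instruction})"

class SmartHomeDevice:
    def __init__(self, stored_voiceprint: str):
        self.stored_voiceprint = stored_voiceprint   # storage unit
        self.played = []                             # stands in for the speaker

    def handle(self, instruction: str) -> None:
        to_convert = cloud_generate(instruction)     # controller + communication units
        # speech synthesis unit: apply the stored voiceprint to the voice data
        target = f"{to_convert}@{self.stored_voiceprint}"
        self.played.append(target)                   # voice playing unit
```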
An embodiment of the invention further provides a voice playing apparatus based on a working mode. The apparatus is applied to a smart home device that has a voice collecting function and a voice playing function, and comprises:
the mode determining module is used for determining the current working mode of the intelligent household equipment;
the detection module is used for detecting whether a voice output event aiming at the current working mode is triggered or not;
the text voiceprint determining module is used for determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event when the voice output event aiming at the current working mode is detected to be triggered;
and the playing module is used for playing the text information to be output by adopting the target voiceprint characteristics.
Optionally, the playing module includes:
the conversion submodule is used for carrying out voice conversion on the text information to be output and generating voice data to be converted;
the replacing submodule is used for replacing the voiceprint characteristics in the voice data to be converted with the target voiceprint characteristics to obtain target voice data;
and the target voice data playing submodule is used for playing the target voice data.
Optionally, the mode determination module includes:
the first determining submodule is used for receiving a mode determining instruction input by a user and determining a current working mode according to the mode determining instruction;
and the second determining submodule is used for acquiring the current time of the intelligent household equipment and determining the current working mode according to the current time.
Optionally, the current working mode includes a home mode, and the detecting module includes:
the first detection submodule is used for judging to trigger a voice output event aiming at a home mode when receiving a home interaction instruction;
a text voiceprint determination module comprising:
the first text voiceprint determining submodule is used for identifying the home interaction instruction and generating text information to be output, which is matched with the identification result; and acquiring a target voiceprint characteristic preset for the home mode.
Optionally, the current operating mode includes a sleep mode, and the detecting module includes:
the second detection submodule is used for judging and triggering a voice output event aiming at the sleep mode when a sleep-assisting instruction is received;
a text voiceprint determination module comprising:
the second text voiceprint determining submodule is used for identifying the sleep-assisting instruction and generating text information to be output, which is matched with the identification result; and acquiring a target voiceprint characteristic preset for the sleep mode.
Optionally, the current working mode includes a reminding mode, and the detecting module includes:
the third detection submodule is used for judging that a voice output event aiming at the reminding mode is triggered when the current time of the intelligent household equipment reaches the preset time;
a text voiceprint determination module comprising:
and the third text voiceprint determining submodule is used for acquiring the text information to be output, which is pre-input according to the preset time, and acquiring the target voiceprint characteristics preset according to the reminding mode.
Optionally, the current working mode includes a security mode, and the detecting module includes:
the fourth detection submodule is used for judging to trigger a voice output event aiming at the security mode when the intrusion event exists in the preset area;
a text voiceprint determination module comprising:
and the fourth text voiceprint determining submodule is used for acquiring the text information to be output, which is pre-recorded aiming at the intrusion event, and acquiring the target voiceprint characteristics preset aiming at the security mode.
An embodiment of the invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the voice playing method based on a working mode described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the smart home device first determines its current working mode and then detects whether a voice output event for that mode is triggered; when such an event is detected, it determines the text information to be output and the target voiceprint feature corresponding to the current working mode and the voice output event, and plays the text using the target voiceprint feature. Because both the output text and the voiceprint feature used to generate the voice output data are determined from the current working mode and the voice output event, the timbre of voice playback is enriched and the user's personalized requirements for the playback voice of the smart home device are met.
In addition, multiple working modes are deployed in the smart home device in advance, enriching the mode types and thus the functions available to the user.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings needed in the description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating steps of a voice playing method based on a working mode according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating steps of a voice playing method based on an operation mode according to another embodiment of the present invention;
FIG. 3 is a flow chart of one embodiment of the present invention for generating target speech data;
FIG. 4 is a flow chart of selecting a current operating mode according to an embodiment of the present invention;
FIG. 5 is a flow diagram of another embodiment of the present invention for generating target speech data;
fig. 6 is a block diagram of a smart home device according to an embodiment of the present invention;
fig. 7 is a block diagram of a voice playing apparatus based on an operating mode according to an embodiment of the present invention.
Detailed Description
To make the aforementioned objects, features and advantages of the present invention more comprehensible, embodiments are described in further detail below with reference to the accompanying figures. The described embodiments are only some, not all, embodiments of the present invention; all other embodiments derived from them by a person skilled in the art without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a voice playing method based on a working mode according to an embodiment of the present invention is shown, where the method may be applied to an intelligent home device, and the intelligent home device may have a voice collecting function and a voice playing function at the same time;
specifically, the method may include the steps of:
step 101, determining a current working mode of the intelligent household equipment;
A smart home device may be any household device connected through the Internet of Things, for example: a smart speaker, a smart television, smart curtains, and the like; the embodiment of the present invention is not limited in this respect.
In practical applications, multiple working modes can be deployed in the smart home device in advance, for example: a daily home mode, a sleep mode for assisting sleep, a security mode for protection while the user is out, and a reminder mode for issuing reminders when needed, thereby enriching the user's options.
Therefore, the current working mode of the intelligent household equipment can be determined firstly, so that voice output data can be output based on the current working mode subsequently.
Step 102, detecting whether a voice output event aiming at the current working mode is triggered;
the voice output event may refer to a trigger event set in advance for different working modes, and when the smart home device is in different working modes, the voice output data may be output under different conditions, for example: the home equipment can output voice output data when receiving a home interaction instruction input by a user; the sleep mode (or the reminding mode) may output the voice output data at a preset time, and the security mode may output the voice output data when it is detected that someone enters a room, and the like.
Thus, it may be determined whether a voice output event for the current operating mode is triggered after the current operating mode is determined, in order to determine whether voice output data needs to be output in the current operating mode.
103, when a voice output event aiming at the current working mode is detected to be triggered, determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event;
when a voice output event for the current working mode is detected to be triggered, it can be shown that the smart home device needs to output voice output data in the current working mode.
At this time, the target voiceprint feature set for the current working mode in advance and the text information to be output set for the voice trigger event in advance can be acquired.
The target voiceprint feature may be set by the user in advance for different working modes, or may be obtained by the smart home device by collecting the voiceprint feature during the daily conversation of the user, which is not limited in this embodiment of the present invention.
The text information to be output may also be input by the user in advance, or may be obtained by selection of the user after the smart home device provides a plurality of pieces of text information to be output, or may be generated after the smart home device identifies an instruction input by the user, or may be generated and returned after the smart home device sends the instruction input by the user to the cloud platform and the cloud platform identifies the instruction.
And step 104, playing the text information to be output by adopting the target voiceprint characteristics.
After the text information to be output and the target voiceprint characteristics are obtained, the intelligent home equipment can play the text information to be output by adopting the target voiceprint characteristics; therefore, the intelligent household equipment can output the voice output data corresponding to the text information to be output by adopting the specific voiceprint characteristics according to the personalized setting of the user.
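Steps 101-104 can be summarized as one decision cycle. The sketch below is a hedged illustration: the five callables are placeholders for the device behaviors the text describes, not an API defined by the patent:

```python
def voice_playback_cycle(determine_mode, event_triggered, text_for, voiceprint_for, play):
    """One pass of the method shown in Fig. 1 (all callables are illustrative)."""
    mode = determine_mode()               # step 101: current working mode
    if not event_triggered(mode):         # step 102: voice output event triggered?
        return None                       # nothing to output in this mode right now
    text = text_for(mode)                 # step 103: text information to be output
    vp = voiceprint_for(mode)             # step 103: target voiceprint feature
    return play(text, vp)                 # step 104: play text with that voiceprint
```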
In the embodiment of the invention, the smart home device first determines its current working mode and then detects whether a voice output event for that mode is triggered; when such an event is detected, it determines the text information to be output and the target voiceprint feature corresponding to the current working mode and the voice output event, and plays the text using the target voiceprint feature. Because both the output text and the voiceprint feature used to generate the voice output data are determined from the current working mode and the voice output event, the timbre of voice playback is enriched and the user's personalized requirements for the playback voice of the smart home device are met.
In addition, multiple working modes are deployed in the smart home device in advance, enriching the mode types and thus the functions available to the user.
Referring to fig. 2, a flowchart illustrating steps of another voice playing method based on an operating mode according to an embodiment of the present invention is shown, including the following steps:
step 201, determining a current working mode of the intelligent household equipment;
In practical applications, multiple working modes can be deployed in the smart home device in advance, for example: a daily home mode, a sleep mode for assisting sleep, a security mode for protection while the user is out, and a reminder mode for issuing reminders when needed, thereby enriching the user's options.
Therefore, the current working mode of the intelligent household equipment can be determined firstly, so that voice output data can be output based on the current working mode subsequently.
In an embodiment of the present invention, the current operating mode may be determined by:
and receiving a mode determination instruction input by a user, and determining the current working mode according to the mode determination instruction.
In daily life, a user can actively input a mode determination instruction on the smart home device to switch it to the corresponding working mode, for example: via the display screen or a control of the device, or by voice; the embodiment of the present invention is not limited in this respect.
After receiving the mode determination instruction input by the user, the intelligent home equipment can identify the mode determination instruction so as to determine the current working mode which the user needs to enter.
In another embodiment of the present invention, the current operation mode may also be determined by:
the current time of the intelligent household equipment is obtained, and the current working mode is determined according to the current time.
The user can also preset the mode entering time and the mode ending time, so that when the system time of the intelligent household equipment reaches the mode entering time, the corresponding working mode can be entered, and when the system time of the intelligent household equipment reaches the mode ending time, the current working mode can be exited.
Therefore, the smart home device can determine the current working mode according to the system's current time. For example: if the user presets 22:00-7:00 as sleep mode, 7:00-8:00 as home mode, and 8:00-18:00 as security mode, and the current time is 15:00, the current working mode is determined to be the security mode.
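The schedule example above (22:00-7:00 sleep, 7:00-8:00 home, 8:00-18:00 security) can be implemented with a small lookup that also handles intervals wrapping past midnight; the schedule format and function name are illustrative:

```python
from datetime import time
from typing import Optional

# Example schedule from the text; each entry is (start, end, mode).
# The first interval wraps past midnight.
SCHEDULE = [
    (time(22, 0), time(7, 0),  "sleep"),
    (time(7, 0),  time(8, 0),  "home"),
    (time(8, 0),  time(18, 0), "security"),
]

def mode_at(now: time) -> Optional[str]:
    """Return the working mode scheduled at `now`, or None if no mode applies."""
    for start, end, mode in SCHEDULE:
        if start <= end:                      # ordinary interval within one day
            if start <= now < end:
                return mode
        elif now >= start or now < end:       # interval wrapping past midnight
            return mode
    return None
```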
In addition, the current working mode corresponding to the current time may also be determined according to the user work and rest rules acquired by the intelligent terminal device, which is not limited in the embodiment of the present invention.
As shown in fig. 3, the current working mode may be determined by a mode determination instruction input by a user, and when the mode determination instruction input by the user is not received, the smart home device may determine based on the collected work and rest rules of the user.
Step 202, detecting whether a voice output event aiming at the current working mode is triggered;
When the smart home device is in different working modes, voice data is output under different conditions. For example: in home mode, the device may output voice data when it receives a home interaction instruction from the user; in sleep mode (or reminder mode), at a preset time; and in security mode, when someone is detected entering the room.
Thus, it may be determined whether a voice output event for the current operating mode is triggered after the current operating mode is determined, in order to determine whether voice output data needs to be output in the current operating mode.
Step 203, when a voice output event aiming at the current working mode is detected to be triggered, determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event;
when a voice output event for the current working mode is detected to be triggered, it can be shown that the smart home device needs to output voice output data in the current working mode.
At this time, the target voiceprint feature set for the current working mode in advance and the text information to be output set for the voice trigger event in advance can be acquired.
In an embodiment of the present invention, the current working mode may include a home mode; accordingly, whether a voice output event for the home mode is triggered can be detected by the following steps:
and when receiving a home furnishing interaction instruction, judging to trigger a voice output event aiming at the home furnishing mode.
A home interaction instruction refers to an instruction input by a user to control the smart home device, for example: an instruction for making a smart speaker play music, query the weather, set an alarm clock, and the like; the embodiment of the present invention is not limited in this respect.
When the current working mode is determined to be the home mode, whether a home interaction instruction aiming at the intelligent home equipment is received or not can be detected.
When a home interaction instruction for the smart home device is received, it may be determined that a voice output event for the home mode is triggered.
In addition, when the current working mode is the home mode, the text information to be output and the target voiceprint characteristics can be determined through the following steps:
identifying the home interaction instruction and generating text information to be output, which is matched with the identification result; and acquiring a target voiceprint characteristic preset for the home mode.
After the home interaction instruction is received, voice recognition can be performed on it to obtain a recognition result, i.e. the text obtained by recognizing the instruction. For example: if the home interaction instruction is the voice instruction "how is the weather today", recognizing it yields the text "how is the weather today".
After the recognition result is obtained, text information to be output that matches it can be generated, for example: "Today is sunny, 26 to 28 degrees Celsius."
Meanwhile, a target voiceprint feature set in advance for the home mode can be obtained, the target voiceprint feature can be input by a user in advance, and the target voiceprint feature can also be generated by the smart home device through collecting daily conversations of the user.
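The matching of a recognition result to text to be output, as in the weather example above, can be sketched as a simple lookup table; a real device would instead query an NLU service or a weather API, and the phrases and responses below are illustrative:

```python
# Illustrative lookup from a recognized instruction to the text to output.
RESPONSES = {
    "how is the weather today": "Today is sunny, 26 to 28 degrees Celsius.",
    "set an alarm": "OK, the alarm is set.",
}

def output_text_for(recognized: str) -> str:
    """Normalize the recognized text and return the matching output text."""
    key = recognized.lower().strip("?").strip()
    return RESPONSES.get(key, "Sorry, I did not understand that.")
```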
In another embodiment of the present invention, the current operating mode may also include a sleep mode; accordingly, whether a voice output event for the sleep mode is triggered can be detected by:
when a sleep-aid instruction is received, a voice output event for the sleep mode is determined to be triggered.
The sleep-aid instruction may be an instruction input by the user through voice or through the display screen; alternatively, the user may preset a sleep time, and the smart home device automatically generates a sleep-aid instruction when that time arrives.
When the current working mode is determined to be the sleep mode, whether a sleep-aid instruction for the smart home device is received can be detected.
When a sleep-aid instruction for the smart home device is received, it may be determined that a voice output event for the sleep mode is triggered.
In addition, when the current working mode is the sleep mode, the text information to be output and the target voiceprint feature can be determined through the following steps:
recognizing the sleep-aid instruction and generating text information to be output that matches the recognition result; and acquiring a target voiceprint feature preset for the sleep mode.
After the sleep-aid instruction is received, it can be recognized to obtain a recognition result; for example: the user can input the sleep-aid instruction "please tell a bedtime story", and after receiving it, the smart home device can recognize it to obtain the text information "please tell a bedtime story" as the recognition result.
After the recognition result is obtained, the smart home device can acquire the text information to be output that matches it, for example: the text of the story "Little Red Riding Hood".
Meanwhile, a target voiceprint characteristic preset for the sleep mode can be obtained, and the target voiceprint characteristic can be input by a user in advance or generated by the smart home device through collecting daily conversations of the user. For example: the parent's voiceprint feature can be set for the pre-sleep story.
In the sleep mode, pure music may also be played instead, for example: natural sounds can be synthesized to help the user fall asleep, which is not limited by the embodiment of the present invention.
In another embodiment of the present invention, the current operation mode may also include a reminder mode; accordingly, whether to trigger a voice output event for the alert mode may be detected by:
and when the current time of the intelligent household equipment reaches the preset time, judging to trigger a voice output event aiming at the reminding mode.
In daily life, the user can set a preset time in the smart home device so that the smart home device gives a reminder when the preset time arrives, for example: at 3 o'clock, remind the user to purchase train tickets.
Therefore, whether the current time of the intelligent household equipment system reaches the preset time preset by the user can be detected; when the current time reaches the preset time, it may indicate that a voice output event for the alert mode is triggered.
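The time-based trigger described above can be sketched as a simple lookup against user-preset reminder times. The reminder time and message below are illustrative assumptions taken from the embodiment's example.

```python
# Sketch of the reminding-mode trigger: compare the device's current time
# (truncated to the minute) against user-preset reminder times and return
# the reminder text when one is reached.
from datetime import time

reminders = {time(2, 59): "Please purchase train tickets at 3 o'clock"}

def check_reminder(now: time):
    """Return the reminder text to output if a preset time is reached, else None."""
    return reminders.get(now.replace(second=0, microsecond=0))
```

In a real device this check would run periodically; returning a non-None value corresponds to determining that a voice output event for the reminding mode is triggered.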
In addition, when the current working mode is a reminding mode, the text information to be output and the target voiceprint characteristics can be determined through the following steps:
acquiring text information to be output, which is pre-input according to preset time, and acquiring target voiceprint characteristics preset according to a reminding mode.
After the voice output event for the reminding mode is triggered, the text information to be output that was entered for the preset time can be acquired, for example: the user sets the text information to be output "please purchase train tickets at 3 o'clock" for 2:59 in advance, and this text information is acquired when the current time reaches 2:59.
Meanwhile, a target voiceprint characteristic preset for the reminding mode can be obtained, and the target voiceprint characteristic can be input by a user when inputting text information to be output or generated by the intelligent household equipment through acquiring daily conversations of the user.
In still another embodiment of the present invention, the current working mode may further include a security mode; correspondingly, whether a voice output event for the security mode is triggered can be detected through the following steps:
and when the intrusion event exists in the preset area, judging to trigger a voice output event aiming at the security mode.
The preset area may refer to an area that external personnel are prohibited from entering, for example: in a home indoor environment, the entire indoor area can be the preset area.
To prevent external personnel from breaking into the room while the user is out, the smart home device can detect whether an intrusion event exists in the preset area after entering the security mode, for example: the smart home device can detect whether an intrusion event exists in the preset area through images collected by a camera.
When an intrusion event is detected to occur in the preset area, it can be indicated that a voice output event for the security mode is triggered.
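As a hedged illustration of the camera-based check mentioned above, the sketch below flags an intrusion event when consecutive camera frames differ beyond a threshold; a real security mode would use proper person detection, so this only demonstrates the trigger logic, and the threshold value is an arbitrary assumption.

```python
# Much-simplified stand-in for camera-based intrusion detection: flag an
# intrusion event when the mean absolute pixel change between consecutive
# frames exceeds a threshold.
import numpy as np

def intrusion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                       threshold: float = 25.0) -> bool:
    """Return True when the mean absolute pixel change exceeds the threshold."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return float(diff.mean()) > threshold
```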
In addition, when the current working mode is the security mode, the text information to be output and the target voiceprint characteristics can be determined through the following steps:
acquiring text information to be output, which is pre-recorded aiming at an intrusion event, and acquiring target voiceprint characteristics preset aiming at a security mode.
When a voice output event for the security mode is triggered, it can indicate that the indoor environment may be, or already has been, intruded upon by external personnel; at this time, in order to avoid loss to the user and to scare away the external personnel, the text information to be output that was pre-entered for the intrusion event may be acquired, for example: text simulating a dialogue between two or more users; alternatively, text such as "who is outside".
Meanwhile, a target voiceprint feature preset for the security mode can be obtained; the target voiceprint feature can be input by the user in advance, or generated by the smart home device by collecting the user's daily dialogue.
When the text information to be output is a dialogue of two or more users, two or more target voiceprint features can be obtained; of course, in order to further improve the deterrence effect, the voiceprint feature of an adult male may be set as the target voiceprint feature preset for the security mode, which is not limited in the embodiment of the present invention.
In an example, as shown in fig. 4, the user may select the working mode of the smart home device, for example: when the reminding mode is selected, a family-member message reminder can be synthesized and output at the preset time; when the sleep mode is selected, natural sounds can be synthesized to aid sleep; when the security mode is selected, a family-member conversation can be synthesized to scare away external personnel who intrude into the room while the user is out; and when the home mode is selected, home interaction sounds can be synthesized, so that the user can conveniently interact with the smart home device by voice.
It should be noted that the working modes may be deployed in the smart home devices at the same time, or only one or more of the working modes may be deployed in the smart home devices, which is not limited in this embodiment of the present invention.
Step 204, carrying out voice conversion on the text information to be output to generate voice data to be converted;
after the text information to be output is obtained, voice conversion can be performed on it using a general voiceprint feature to obtain the voice data to be converted.
Step 205, replacing the voiceprint features in the voice data to be converted with target voiceprint features to obtain target voice data;
the voice data to be converted, obtained using only the general voiceprint feature, cannot meet the user's personalized requirements; therefore, after the voice data to be converted is obtained, the target voiceprint feature is used to replace the voiceprint feature in the voice data to be converted, thereby obtaining personalized target voice data.
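Conceptually, the replacement step treats the voice data to be converted as content characterization plus a voiceprint component, and swaps only the latter. The minimal sketch below uses a hypothetical data layout to illustrate that separation; it is not the patent's actual signal representation.

```python
# Conceptual sketch of step 205: keep the content characterization, swap
# only the voiceprint component. The dataclass fields are illustrative.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SpeechData:
    content_features: tuple   # what is being said (from the output text)
    voiceprint: str           # who it sounds like

def replace_voiceprint(to_convert: SpeechData, target_voiceprint: str) -> SpeechData:
    """Keep the content but substitute the target voiceprint feature."""
    return replace(to_convert, voiceprint=target_voiceprint)
```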
Specifically, a Fourier transform may be performed on the voice data to be converted first to obtain spectrum information:

Xw(mT, w) = Σ_n x(n) · w(mT, n) · e^(-jwn)

wherein x(n) refers to the speech signal of the voice data to be converted; w(mT, n) refers to the window function; and Xw(mT, w) refers to the frequency-domain signal of the voice data to be converted.
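The windowed transform just described can be computed directly in NumPy as a sketch: each hop of the signal is multiplied by the window positioned at mT and discrete-Fourier-transformed, yielding the frequency-domain frames Xw(mT, w).

```python
# Short-time Fourier transform: window each hop of x(n) and take its DFT,
# producing one frequency-domain frame per window position mT.
import numpy as np

def stft(x: np.ndarray, win: np.ndarray, hop: int) -> np.ndarray:
    """Return a (num_frames, len(win)) array of windowed DFT frames."""
    frames = []
    for start in range(0, len(x) - len(win) + 1, hop):
        frames.append(np.fft.fft(x[start:start + len(win)] * win))
    return np.array(frames)
```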
Then, the fundamental frequency value and the frequency floating range are searched for in the spectrum information, and the content characterization information and the voiceprint features are analyzed and recorded; the content characterization information may be the feature information corresponding to the text information to be output.
Then, the target voice data may be obtained based on the Griffin & Lim algorithm:

y(n) = [Σ_m xw(mT, n) · w(mT, n)] / [Σ_m w(mT, n)²]

wherein xw(mT, n) refers to the inverse transform of the frequency-domain signal Xw(mT, w) for frame m; w(mT, n) refers to the window function; and y(n) refers to the speech signal reconstructed from each frame of voice data.
That is, after the inverse transformation, each frame of the voice signal is multiplied by the window function, the frames are then overlapped and added, and the result is finally divided by the sum of squares of the window functions, so that the voice signal whose voiceprint representation has been replaced is reconstructed and the target voice data is generated.
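The synthesis procedure just described — inverse-transform each frame, multiply by the window, overlap-add, and divide by the sum of squared windows — can be sketched in NumPy as below. The full Griffin & Lim algorithm iterates such a synthesis step with the modified magnitudes; this sketch shows only a single reconstruction pass.

```python
# Overlap-add synthesis: ifft each frame, weight by the window, overlap-add,
# and normalize by the accumulated squared windows. With unmodified STFT
# frames this exactly reconstructs the signal wherever the window sum is
# nonzero.
import numpy as np

def overlap_add_synthesis(frames: np.ndarray, win: np.ndarray, hop: int) -> np.ndarray:
    """Rebuild a time signal y(n) from windowed DFT frames."""
    n_out = (len(frames) - 1) * hop + len(win)
    y = np.zeros(n_out)
    norm = np.zeros(n_out)
    for m, frame in enumerate(frames):
        start = m * hop
        y[start:start + len(win)] += np.real(np.fft.ifft(frame)) * win
        norm[start:start + len(win)] += win ** 2
    return y / np.maximum(norm, 1e-12)
```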
In an example, as shown in fig. 5, after obtaining the text information to be output and the target voiceprint feature, the smart home device may first obtain a general voiceprint feature from a general voice library, generate the voice data to be converted based on the general voiceprint feature and the text information to be output, and then perform a Fourier transform on the voice data to obtain the content characterization information and the voiceprint feature; the target voiceprint feature is used to replace the voiceprint feature in the voice data to be converted, and the target voice data is then obtained based on the Griffin & Lim algorithm.
Step 206, playing the target voice data.
After the target voice data with the voiceprint characteristics replaced is obtained, the intelligent home equipment can play the target voice data, and therefore interaction with a user is achieved.
As shown in fig. 3, after the current working mode is determined, the smart home device may also obtain target voice data from the voice generation cloud platform having a communication relationship with the smart home device, and a specific process will be described in detail in subsequent embodiments and will not be described herein again.
In the embodiment of the invention, the intelligent household equipment firstly determines the current working mode and then detects whether to trigger a voice output event aiming at the current working mode; when a voice output event aiming at the current working mode is detected to be triggered, determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event; then, carrying out voice conversion on the text information to be output to generate voice data to be converted; replacing the voiceprint characteristics in the voice data to be converted with target voiceprint characteristics to obtain target voice data; and then playing the target voice data. According to the embodiment of the invention, the voiceprint characteristics of the generated voice data are replaced, so that the tone of voice playing is enriched, and the personalized requirements of users on the voice playing of the intelligent household equipment are met.
Moreover, a plurality of modes are pre-deployed for the smart home device, for example: a home mode, a sleep mode, a reminding mode and a security mode; enriching the types of modes deployed in the smart home device enriches its functions and improves the user experience.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 6, a schematic structural diagram of an intelligent home device according to an embodiment of the present invention is shown, including the following units:
the acquisition unit 601: for collecting and sending user-entered instructions to the controller unit 603;
the acquisition unit 601 may be used to collect instructions input by the user, such as: a mode determination instruction, a home interaction instruction, a sleep-aid instruction, and the like; the instruction may be input by the user through voice, or through a display screen or a control, which is not limited in this embodiment of the present invention. The acquisition unit 601 may comprise a microphone array and a filter.
After collecting various instructions input by the user, the acquisition unit 601 may perform preprocessing such as noise reduction on the instructions, and then send the preprocessed instructions to the controller unit 603 for processing.
The storage unit 602: for storing target voiceprint features;
the storage unit 602 may be configured to store the target voiceprint feature; the target voiceprint features can be pre-input by the user or collected by the intelligent terminal device in the daily dialog of the user. The memory unit 602 may include, but is not limited to, an electric memory device, a magnetic memory device, a semiconductor memory device, and the like, which is not limited by the embodiment of the present invention.
The controller unit 603: used for uploading the instruction input by the user to the voice generation cloud platform through the communication unit 604, and receiving, via the communication unit 604, the voice data to be converted downloaded from the voice generation cloud platform; the voice generation cloud platform generates the text information to be output according to the instruction input by the user and generates the voice data to be converted based on the text information to be output;
the controller unit 603 may be configured to receive an instruction input by the user and sent by the acquisition unit 601, and upload the instruction to the speech generation cloud platform through the communication unit 604; the voice generation cloud platform may be connected to the smart home device through the communication unit 604, and is used for generating voice data to be converted; the efficiency and accuracy of target voice data generation can be improved through the voice generation cloud platform.
After receiving an instruction input by a user, the voice generation cloud platform can identify the instruction and generate corresponding text information to be output; then converting the text information to be output into voice data to be converted by adopting a universal voiceprint characteristic; after generating the voice data to be converted, the voice generation cloud platform may send the voice data to be converted back to the smart home device.
In addition, the controller unit 603 can also be used to control the operation of other units in the smart home device.
The communication unit 604: the voice conversion system is used for uploading a command input by a user to the voice generation cloud platform and downloading voice data to be converted from the voice generation cloud platform;
the communication unit 604 may be configured to upload an instruction sent by the controller unit 603 to the speech generation cloud platform, or download speech data to be converted from the speech generation cloud platform, and send the speech data to be converted to the controller unit 603.
The speech synthesis unit 605: for obtaining the voice data to be converted from the controller unit 603; acquiring the target voiceprint characteristics from the storage unit 602, and converting the voice data to be converted by adopting the target voiceprint characteristics to generate target voice data;
the voice synthesis unit 605 can acquire voice data to be converted from the controller unit 603 and acquire a target voiceprint feature from the storage unit 602; then, the target voiceprint feature can be used for replacing the voiceprint feature in the voice data to be converted, so that the voice data to be converted are converted, and the target voice data are generated.
Because the generated target voice data is obtained by conversion using a voiceprint feature related to the user (for example, pre-stored by the user or collected and generated by the smart home device), it can meet the user's personalized requirements for the voice played by the smart home device.
The voice playing unit 606: for receiving the target voice data sent by the voice synthesis unit 605 and playing the target voice data.
The voice playing unit 606 may receive the target voice data sent from the voice synthesizing unit 605 and call a speaker to play the target voice data.
It should be noted that the method embodiments can be applied to the smart home devices in this embodiment.
Referring to fig. 7, a schematic structural diagram of a voice playing apparatus based on a working mode according to an embodiment of the present invention is shown, where the apparatus is applied to an intelligent home device, and the intelligent home device has a voice collecting function and a voice playing function;
specifically, the following modules may be included:
a mode determining module 701, configured to determine a current working mode of the smart home device;
a detection module 702, configured to detect whether a voice output event for a current operating mode is triggered;
a text voiceprint determining module 703, configured to determine, when a voice output event for a current working mode is detected to be triggered, to-be-output text information and a target voiceprint feature corresponding to the current working mode and the voice output event;
and the playing module 704 is configured to play the text information to be output by using the target voiceprint feature.
Optionally, the playing module 704 includes:
the conversion submodule is used for carrying out voice conversion on the text information to be output and generating voice data to be converted;
the replacing submodule is used for replacing the voiceprint characteristics in the voice data to be converted with the target voiceprint characteristics to obtain target voice data;
and the target voice data playing submodule is used for playing the target voice data.
Optionally, the mode determining module 701 includes:
the first determining submodule is used for receiving a mode determining instruction input by a user and determining a current working mode according to the mode determining instruction;
and the second determining submodule is used for acquiring the current time of the intelligent household equipment and determining the current working mode according to the current time.
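The two determination paths above — an explicit mode instruction from the user, or a fallback based on the current time — might be combined as in the sketch below; the mode names and time boundaries are illustrative assumptions, not values from the patent.

```python
# Sketch of the mode-determining module: an explicit user instruction takes
# priority; otherwise the mode is inferred from the current time.
from datetime import time
from typing import Optional

def determine_mode(instruction: Optional[str], now: time) -> str:
    if instruction in ("home", "sleep", "reminder", "security"):
        return instruction
    # Fall back to a time-based default, e.g. sleep mode at night.
    return "sleep" if now >= time(22, 0) or now < time(6, 0) else "home"
```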
Optionally, the current working mode includes a home mode, and the detecting module 702 includes:
the first detection submodule is used for judging to trigger a voice output event aiming at a home mode when receiving a home interaction instruction;
the text voiceprint determination module 703 includes:
the first text voiceprint determining submodule is used for recognizing the home interaction instruction and generating text information to be output that matches the recognition result; and acquiring a target voiceprint feature preset for the home mode.
Optionally, the current operating mode includes a sleep mode, and the detecting module 702 includes:
the second detection submodule is used for determining to trigger a voice output event for the sleep mode when a sleep-aid instruction is received;
the text voiceprint determination module 703 includes:
the second text voiceprint determining submodule is used for recognizing the sleep-aid instruction and generating text information to be output that matches the recognition result; and acquiring a target voiceprint feature preset for the sleep mode.
Optionally, the current operation mode includes a reminding mode, and the detecting module 702 includes:
the third detection submodule is used for judging that a voice output event aiming at the reminding mode is triggered when the current time of the intelligent household equipment reaches the preset time;
the text voiceprint determination module 703 includes:
and the third text voiceprint determining submodule is used for acquiring the text information to be output, which is pre-input according to the preset time, and acquiring the target voiceprint characteristics preset according to the reminding mode.
Optionally, the current working mode includes a security mode, and the detecting module 702 includes:
the fourth detection submodule is used for judging to trigger a voice output event aiming at the security mode when the intrusion event exists in the preset area;
the text voiceprint determination module 703 includes:
and the fourth text voiceprint determining submodule is used for acquiring the text information to be output, which is pre-recorded aiming at the intrusion event, and acquiring the target voiceprint characteristics preset aiming at the security mode.
In the embodiment of the invention, the intelligent household equipment firstly determines the current working mode and then detects whether to trigger a voice output event aiming at the current working mode; and when a voice output event aiming at the current working mode is detected to be triggered, determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event, and playing the text information to be output by adopting the target voiceprint characteristics. By the embodiment of the invention, the text information needing to be output and the voiceprint characteristics used for generating the voice output data are determined based on the current working mode and the voice output event, the tone of voice playing is enriched, and the personalized requirements of users on the voice playing sound of the intelligent household equipment are met.
Moreover, various modes are pre-deployed for the smart home device, enriching the types of modes in the device; this enriches the functions of the smart home device and improves the user experience.
The embodiment of the invention also provides a computer readable storage medium, a computer program is stored on the computer readable storage medium, and the computer program is executed by a processor to realize the voice playing method based on the working mode.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The voice playing method based on the working mode and the smart home device are described in detail, specific examples are applied in the text to explain the principle and the implementation mode of the invention, and the description of the above embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A voice playing method based on a working mode, wherein the method is applied to a smart home device, the smart home device has a voice collecting function and a voice playing function, and the method comprises the following steps:
determining a current working mode of the intelligent household equipment;
detecting whether a voice output event for the current operating mode is triggered;
when a voice output event aiming at the current working mode is detected to be triggered, determining text information to be output and target voiceprint characteristics corresponding to the current working mode and the voice output event;
and playing the text information to be output by adopting the target voiceprint characteristics.
2. The method according to claim 1, wherein the playing the text information to be output by using the target voiceprint feature comprises:
carrying out voice conversion on the text information to be output to generate voice data to be converted;
replacing the voiceprint features in the voice data to be converted with the target voiceprint features to obtain the target voice data;
and playing the target voice data.
3. The method according to claim 1, wherein the determining the current operating mode of the smart home device comprises:
receiving a mode determination instruction input by a user, and determining the current working mode according to the mode determination instruction;
or acquiring the current time of the intelligent household equipment, and determining the current working mode according to the current time.
4. The method according to any one of claims 1-3, wherein the current operating mode comprises a home mode, and the detecting whether to trigger a voice output event for the current operating mode comprises:
when a home interaction instruction is received, determining to trigger a voice output event for the home mode;
the determining the text information to be output and the target voiceprint characteristics corresponding to the current working mode and the voice output event comprises the following steps:
recognizing the home interaction instruction, and generating text information to be output that matches a recognition result;
and acquiring a target voiceprint characteristic preset for the home mode.
5. The method according to any one of claims 1-3, wherein the current operating mode comprises a sleep mode, and the detecting whether to trigger a voice output event for the current operating mode comprises:
when a sleep-aid instruction is received, determining to trigger a voice output event for the sleep mode;
the determining the text information to be output and the target voiceprint characteristics corresponding to the current working mode and the voice output event comprises the following steps:
recognizing the sleep-aid instruction and generating text information to be output that matches the recognition result;
and acquiring a target voiceprint characteristic preset aiming at the sleep mode.
6. The method according to any one of claims 1-3, wherein the current working mode comprises a reminder mode, and the detecting whether to trigger a voice output event for the current working mode comprises:
when the current time of the smart home device reaches a preset time, determining to trigger a voice output event for the reminder mode;
the determining the text information to be output and the target voiceprint features corresponding to the current working mode and the voice output event comprises:
acquiring the text information to be output pre-entered for the preset time, and acquiring a target voiceprint feature preset for the reminder mode.
7. The method according to any one of claims 1-3, wherein the current working mode comprises a security mode, and the detecting whether to trigger a voice output event for the current working mode comprises:
when an intrusion event is detected in a preset area, determining to trigger a voice output event for the security mode;
the determining the text information to be output and the target voiceprint features corresponding to the current working mode and the voice output event comprises:
acquiring text information to be output pre-recorded for the intrusion event, and acquiring a target voiceprint feature preset for the security mode.
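Claims 4-7 share one pattern: each working mode pairs a trigger condition with a source of output text and a voiceprint preset for that mode. A table-driven dispatcher makes the pattern explicit; the event names, texts, and voiceprint labels here are illustrative assumptions, not data from the patent.

```python
# One table entry per working mode of claims 4-7.
MODE_TABLE = {
    "home":     {"trigger": "home_interaction", "voiceprint": "family_member"},
    "sleep":    {"trigger": "sleep_aid",        "voiceprint": "soothing"},
    "reminder": {"trigger": "preset_time",      "voiceprint": "assistant"},
    "security": {"trigger": "intrusion",        "voiceprint": "stern_warning"},
}

def handle_event(mode, event, text):
    entry = MODE_TABLE[mode]
    if event != entry["trigger"]:
        return None  # no voice output event for this mode
    # Text to be output paired with the voiceprint preset for the mode.
    return {"text": text, "voiceprint": entry["voiceprint"]}

out = handle_event("security", "intrusion", "Warning: you are being recorded")
print(out)
```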
8. A smart home device, characterized in that the smart home device comprises:
a collecting unit, configured to collect instructions input by a user and send the instructions input by the user to the controller unit;
a storage unit, configured to store target voiceprint features;
a controller unit, configured to upload the instruction input by the user to a voice generation cloud platform through the communication unit, and to receive, through the communication unit, the voice data to be converted downloaded from the voice generation cloud platform, wherein the voice generation cloud platform generates text information to be output according to the instruction input by the user and generates the voice data to be converted based on the text information to be output;
a communication unit, configured to upload the instruction input by the user to the voice generation cloud platform and to download the voice data to be converted from the voice generation cloud platform;
a speech synthesis unit, configured to acquire the voice data to be converted from the controller unit, acquire a target voiceprint feature from the storage unit, and convert the voice data to be converted using the target voiceprint feature to generate target voice data;
and a voice playing unit, configured to receive the target voice data sent by the speech synthesis unit and play the target voice data.
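The claim-8 division of labor (controller mediating between a cloud platform and the on-device synthesis and playback units) can be sketched with a stubbed cloud. Class and method names are illustrative assumptions, not the patent's interfaces.

```python
class CloudPlatform:
    # Stand-in for the voice generation cloud platform: it generates the
    # text information to be output from the user's instruction, then the
    # voice data to be converted from that text.
    def generate(self, instruction):
        text = f"response to '{instruction}'"
        return {"audio": f"<waveform of: {text}>", "voiceprint": "default"}

class SmartHomeDevice:
    def __init__(self, cloud):
        self.cloud = cloud                      # reached via the communication unit
        self.storage = {"voiceprint": "owner"}  # storage unit

    def handle(self, instruction):
        # Collecting unit -> controller unit: upload the instruction,
        # download the voice data to be converted.
        to_convert = self.cloud.generate(instruction)
        return self.synthesize(to_convert)

    def synthesize(self, voice_data):
        # Speech synthesis unit: swap in the stored target voiceprint.
        converted = dict(voice_data)
        converted["voiceprint"] = self.storage["voiceprint"]
        return converted

    def play(self, voice_data):
        # Voice playing unit.
        print(f"playing {voice_data['audio']} as '{voice_data['voiceprint']}'")

device = SmartHomeDevice(CloudPlatform())
device.play(device.handle("turn on the lights"))
```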
9. A voice playing apparatus based on a working mode, characterized in that the apparatus is applied to a smart home device having a voice collection function and a voice playing function, and the apparatus comprises:
a mode determining module, configured to determine the current working mode of the smart home device;
a detection module, configured to detect whether a voice output event for the current working mode is triggered;
a text and voiceprint determining module, configured to determine, when it is detected that a voice output event for the current working mode is triggered, text information to be output and target voiceprint features corresponding to the current working mode and the voice output event;
and a playing module, configured to play the text information to be output using the target voiceprint features.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the voice playing method based on a working mode according to any one of claims 1 to 7.
CN202111205827.2A 2021-10-15 2021-10-15 Voice playing method based on working mode and intelligent household equipment Pending CN114024789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111205827.2A CN114024789A (en) 2021-10-15 2021-10-15 Voice playing method based on working mode and intelligent household equipment

Publications (1)

Publication Number Publication Date
CN114024789A true CN114024789A (en) 2022-02-08

Family

ID=80056344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111205827.2A Pending CN114024789A (en) 2021-10-15 2021-10-15 Voice playing method based on working mode and intelligent household equipment

Country Status (1)

Country Link
CN (1) CN114024789A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704530A (en) * 2017-09-19 2018-02-16 百度在线网络技术(北京)有限公司 Speech ciphering equipment exchange method, device and equipment
CN109036374A (en) * 2018-07-03 2018-12-18 百度在线网络技术(北京)有限公司 Data processing method and device
US20200394992A1 (en) * 2019-06-13 2020-12-17 Baidu.Com Times Technology (Beijing) Co., Ltd. Client, system and method for customizing voice broadcast



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220208