CN111524514A - Voice control method and central control equipment


Info

Publication number
CN111524514A
Authority
CN
China
Prior art keywords
user
voice data
controlled device
controlled
equipment
Prior art date
Legal status
Pending
Application number
CN202010320904.8A
Other languages
Chinese (zh)
Inventor
唐至威
高雪松
孟卫明
王月岭
刘波
蒋鹏民
王彦芳
刘帅帅
田羽慧
陈维强
Current Assignee
Hisense Group Co Ltd
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Co Ltd
Priority to CN202010320904.8A
Publication of CN111524514A


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L 2015/223 Execution procedure of a spoken command

Abstract

The application discloses a voice control method and a central control device. The central control device recognizes voice data triggered by a user and, after determining the control instruction that corresponds to the first voice data and is used for controlling the operating state of a controlled device, obtains the target operating parameter to be used when the controlled device corresponding to the user runs.

Description

Voice control method and central control equipment
Technical Field
The application relates to the field of artificial intelligence, in particular to a voice control method and central control equipment.
Background
With the continuous progress of natural language processing technology, machines recognize speech semantics ever more accurately and quickly, so manufacturers have begun to develop various types of applications based on natural language processing, and device control is an important application direction.
Traditionally, home devices are controlled mainly with a remote controller; when the remote controller is damaged or lost, the device can no longer be controlled remotely, so controlling home devices more intelligently has become a problem to be solved urgently.
Disclosure of Invention
Since traditional home devices are currently controlled mainly through a remote controller, an exemplary embodiment of the present application provides a voice control method and a central control device so as to control home devices more intelligently.
According to a first aspect of the exemplary embodiments, there is provided a central control apparatus, including: a processor, a transceiver unit;
the transceiving unit is configured to receive first voice data triggered by a user and sent by voice acquisition equipment;
the processor is configured to identify the first voice data, determine a control instruction corresponding to the first voice data and used for controlling the operation state of the controlled device, and acquire a target operation parameter required to be used when the controlled device corresponding to the user operates; wherein the target operating parameter is determined according to the operating parameter of the controlled device when the user uses the controlled device historically;
the processor is configured to send operation information including the target operation parameter and the control instruction to the controlled device, so that the controlled device executes an operation corresponding to the control instruction after receiving the operation information, and determines an operation state according to the target operation parameter.
In the above embodiment, after recognizing the voice data triggered by the user and determining the control instruction that corresponds to the first voice data and is used for controlling the operating state of the controlled device, the central control device can obtain the target operating parameter to be used when the controlled device corresponding to the user runs. Because the target operating parameter is determined from the operating parameters of the controlled device when the user historically used it, the operating state of the controlled device can be controlled according to the operating parameter the user prefers, which reduces the probability that the user has to readjust the operating parameter, improves the user experience, and also provides a method for controlling the controlled device by voice.
In some embodiments of the present application, when recognizing the first voice data and determining a control instruction corresponding to the first voice data and used for controlling an operation state of a controlled device, the processor is configured to:
recognizing text information corresponding to the first voice data, and sending the text information to a remote server so that the remote server performs semantic recognition on the text information to determine a control instruction corresponding to the first voice data;
and receiving a control instruction corresponding to the first voice data returned by the server.
In the above embodiment, because the remote server performs semantic recognition on the text information corresponding to the user-triggered first voice data sent by the central control device and determines the control instruction corresponding to the first voice data, the design difficulty of the central control device is reduced and the control instruction corresponding to the voice data can be obtained quickly.
In some embodiments of the present application, when obtaining a target operation parameter that needs to be used when a current controlled device corresponding to the user operates, the processor is configured to:
determining a user identifier of the user, and determining a target operation parameter corresponding to the user identifier of the user according to a stored corresponding relationship between the user identifier corresponding to the controlled device and the operation parameter;
and the operating parameter corresponding to each user identifier in the correspondence is determined according to the operating parameters of the controlled device when that user has historically used the controlled device.
In the above embodiment, the central control device first determines the user identifier of the user and, according to the stored correspondence between the user identifiers corresponding to the controlled device and the operating parameters, determines the target operating parameter corresponding to that user identifier. Because each operating parameter in the correspondence is determined from the operating parameters of the controlled device when the corresponding user historically used it, the target operating parameter preferred by the user can be obtained and then used to control the operating state of the controlled device, which meets the user's requirements and improves the user experience.
In some embodiments of the present application, in determining the user identification of the user, the processor is configured to:
performing voiceprint recognition on the first voice data, and determining a user identifier of a user triggering the first voice data; or
And carrying out voiceprint recognition on the second voice data for awakening the voice acquisition equipment, and determining a user identifier of a user triggering the second voice data.
In the above embodiment, two methods are provided for identifying a user and determining a user identifier of the user:
in the first mode, the central control device performs voiceprint recognition on first voice data triggered by a user, and determines a user identifier of the user.
And in the second mode, the central control equipment sends the second voice data which is triggered by the user and used for awakening the voice acquisition equipment to the remote server for voiceprint recognition, and determines the user identification of the user which triggers the second voice data.
The two methods can quickly determine the user identification of the user, and improve the use experience of the user.
In some embodiments of the present application, the processor is further configured to:
and if the corresponding relation between the user identification and the operation parameters does not include the target operation parameters corresponding to the user identification of the user, acquiring the operation parameters of the controlled equipment in the last operation as the operation parameters needed to be used in the operation of the controlled equipment.
In the above embodiment, if the stored correspondence between user identifiers and operating parameters does not include a target operating parameter for the user identifier of the user who triggered the first voice data, the operating parameter of the controlled device during its last run is obtained and used as the operating parameter for the current run, so that the parameter used is more likely to meet the user's needs and the user experience is improved.
In some embodiments of the present application, after sending the operation information including the target operation parameter and the control instruction to the controlled device, the processor is further configured to:
and if the operation parameter of the controlled equipment is determined to be changed in the process of using the controlled equipment by the user, updating the operation parameter corresponding to the user identifier of the user in the corresponding relation between the user identifier and the operation parameter according to the changed operation parameter.
In the above embodiment, if the operating parameter of the controlled device changes while the user is using it, the operating parameter corresponding to the user identifier of the user who triggered the first voice data is updated in the correspondence between user identifiers and operating parameters according to the changed parameter. The stored correspondence can therefore be updated as the user's needs change over time, meeting the user's requirements and improving the user experience.
According to a second aspect of the exemplary embodiments, there is provided a voice control method comprising:
receiving first voice data triggered by a user and sent by voice acquisition equipment;
identifying the first voice data, determining a control instruction which is corresponding to the first voice data and is used for controlling the running state of the controlled equipment, and acquiring target running parameters which need to be used when the controlled equipment corresponding to the user runs; wherein the target operating parameter is determined according to the operating parameter of the controlled device when the user uses the controlled device historically;
and sending operation information containing the target operation parameters and the control instructions to the controlled equipment, so that the controlled equipment executes the operation corresponding to the control instructions after receiving the operation information, and determines the operation state according to the target operation parameters.
In some embodiments of the present application, the recognizing the first voice data and determining a control instruction corresponding to the first voice data and used for controlling an operation state of a controlled device includes:
recognizing text information corresponding to the first voice data, and sending the text information to a remote server so that the remote server performs semantic recognition on the text information to determine a control instruction corresponding to the first voice data;
and receiving a control instruction corresponding to the first voice data returned by the server.
In some embodiments of the application, the obtaining of the target operation parameter that needs to be used when the current controlled device corresponding to the user operates includes:
determining a user identifier of the user, and determining a target operation parameter corresponding to the user identifier of the user according to a stored corresponding relationship between the user identifier corresponding to the controlled device and the operation parameter; and determining the operation parameters corresponding to the user identifications in the corresponding relationship according to the operation parameters of the controlled equipment when the controlled equipment is used according to the history of each user.
In some embodiments of the present application, the determining the user identifier of the user includes:
performing voiceprint recognition on the first voice data, and determining a user identifier of a user triggering the first voice data; or
And carrying out voiceprint recognition on the second voice data for awakening the voice acquisition equipment, and determining a user identifier of a user triggering the second voice data.
In some embodiments of the present application, the method further comprises:
and if the corresponding relation between the user identification and the operation parameters does not include the target operation parameters corresponding to the user identification of the user, acquiring the operation parameters of the controlled equipment in the last operation as the operation parameters needed to be used in the operation of the controlled equipment.
In some embodiments of the present application, after sending the operation information including the target operation parameter and the control instruction to the controlled device, the method further includes:
and if the operation parameter of the controlled equipment is determined to be changed in the process of using the controlled equipment by the user, updating the operation parameter corresponding to the user identifier of the user in the corresponding relation between the user identifier and the operation parameter according to the changed operation parameter.
According to a third aspect of the exemplary embodiments, there is provided a voice control system, including a voice collecting device, a central control device, and a controlled device;
the voice acquisition equipment is used for acquiring first voice data triggered by a user and sending the first voice data to the central control equipment;
the central control device is used for identifying the first voice data, determining a control instruction which is corresponding to the first voice data and is used for controlling the running state of the controlled device, and acquiring a target running parameter which is required to be used when the controlled device corresponding to the user runs; and sending operation information containing the target operation parameters and the control instructions to the controlled equipment; wherein the target operating parameter is determined according to the operating parameter of the controlled device when the user uses the controlled device historically;
and the controlled equipment is used for executing the operation corresponding to the control instruction after receiving the operation information and determining the operation state according to the target operation parameter.
On the basis of common knowledge in the art, the above preferred conditions can be combined arbitrarily to obtain preferred embodiments of the application.
Drawings
Fig. 1 is a schematic diagram of a speech control system according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a complete voice control method provided by an embodiment of the present application;
fig. 3 is a structural diagram schematically illustrating a central control device according to an embodiment of the present application;
fig. 4 is a block diagram schematically illustrating a voice control apparatus according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a voice control method provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described in detail and completely with reference to the accompanying drawings. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the application, unless stated otherwise, "plurality" means two or more.
Some terms appearing herein are explained below:
1. in the embodiment of the present application, the term "and/or" describes an association relationship of associated objects, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
2. In the embodiment of the application, the term "voiceprint recognition" refers to converting a voice signal into an electric signal and then recognizing the electric signal by using a computer. The main tasks of voiceprint recognition include: voice signal processing, voiceprint feature extraction, voiceprint modeling, voiceprint comparison, decision discrimination and the like.
3. In the embodiment of the present application, the term "semantic recognition" refers to one of the important components of Natural Language Processing (NLP) technology. Beyond understanding the meaning of individual words, the core of semantic recognition is understanding the meaning a word carries within sentences and passages. Technically, semantic recognition performs semantic analysis and disambiguation at the text, lexical, syntactic, morphological and passage (paragraph) levels, and recombines the corresponding meanings, so as to recognize the text, vocabulary, syntax, morphology and passages (paragraphs).
Semantic recognition can be divided into three layers:
(1) an application layer: including industrial applications and intelligent voice interaction systems/technology applications.
(2) NLP technology layer: with subjects such as linguistics and computer languages as background, this layer covers technical processes such as lexical analysis, information extraction, temporal and causal analysis, and sentiment judgment of natural language, with the ultimate goals of natural language cognition (enabling the computer to understand human language) and natural language generation (converting computer data into natural language).
(3) Underlying data layer: dictionaries, data sets, corpora, knowledge graphs, knowledge of the external world and the like all form the basis of the semantic recognition algorithm model.
At present, a user mainly controls household devices through a remote controller. For example, the user can use the remote controller corresponding to an air conditioner to turn the air conditioner on and adjust its operating state. In such a remote-controller-dependent control mode, however, the device can no longer be controlled remotely when the remote controller is damaged or lost, giving the user a poor experience.
In view of the above problems, an embodiment of the present application provides a voice control system, so that a user can control a home device through voice and control an operating state of the home device according to an operating parameter preferred by the user.
As shown in fig. 1, the speech control system in the embodiment of the present application includes: the voice acquisition device 10, the central control device 20 and the controlled device 30.
The voice acquisition equipment 10 is used for acquiring first voice data triggered by a user and sending the first voice data to the central control equipment 20;
the central control device 20 is configured to identify the first voice data, determine a control instruction corresponding to the first voice data and used for controlling an operation state of the controlled device, and acquire a target operation parameter that needs to be used when the controlled device corresponding to the user operates; sending operation information containing target operation parameters and control instructions to the controlled equipment; the target operation parameters are determined according to the operation parameters of the controlled equipment when the user uses the controlled equipment historically;
and the controlled device 30 is used for executing the operation corresponding to the control instruction after receiving the operation information, and determining the operation state according to the target operation parameter.
The voice acquisition device 10 may be a smart speaker, a voice control panel, a household smart sensor, or the like; the central control device 20 is the central device that controls the operating states of various smart home appliances; and the controlled device 30 may include an air conditioner, a water heater, a refrigerator, a lighting device, and the like.
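Purely as an illustration of the three roles above (not part of the patent disclosure), the messages exchanged between them might be modelled as follows; all class names, field names and value formats are assumptions chosen for readability.

```python
# Illustrative sketch only: the application does not fix concrete data
# structures, so these class and field names are assumptions.
from dataclasses import dataclass


@dataclass
class ControlInstruction:
    """Control instruction determined from the first voice data."""
    device_name: str       # e.g. "air conditioner"
    response_action: str   # e.g. "on"


@dataclass
class OperationInfo:
    """Operation information sent from the central control device (20)
    to the controlled device (30)."""
    instruction: ControlInstruction
    target_parameter: float   # e.g. temperature in degrees Celsius


# Example: the message the central control device would send for
# "turn on the air conditioner" with a target parameter of 26 degrees.
info = OperationInfo(ControlInstruction("air conditioner", "on"), 26.0)
print(info)
```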
It should be noted that, in the embodiment of the present application, voice data of a user, which is acquired when the voice acquisition device is in an awake state, is used as first voice data, and voice data, which is triggered by the user and used for waking up the voice acquisition device, is used as second voice data.
When the voice acquisition device is in the un-awakened state, if it collects second voice data triggered by the user for waking it up, it switches to the awakened state; the second voice data with which the user wakes the voice acquisition device contains a preset keyword.
For example, assuming the preset keyword is "small letter", when the voice acquisition device is in the un-awakened state and the collected user-triggered voice data is "small letter", the voice acquisition device switches to the awakened state after receiving that voice data.
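A minimal sketch of this wake-up behaviour, assuming the captured audio is first converted to text and the wake decision is a simple keyword match; recognize_text here is a placeholder stub and the keyword value is illustrative, not the patent's actual implementation.

```python
WAKE_KEYWORD = "small letter"   # placeholder for the preset wake-up keyword


def recognize_text(audio: bytes) -> str:
    """Placeholder speech-to-text stub; a real device would run ASR here."""
    return audio.decode("utf-8", errors="ignore")


class VoiceAcquisitionState:
    """Tracks whether the voice acquisition device has been woken up."""

    def __init__(self) -> None:
        self.awake = False

    def on_audio(self, audio: bytes):
        text = recognize_text(audio)
        if not self.awake:
            # Second voice data: only checked for the wake-up keyword.
            if WAKE_KEYWORD in text:
                self.awake = True
            return None
        # First voice data: returned for forwarding to the central control device.
        return text


# Example: the first utterance wakes the device, the second is forwarded.
state = VoiceAcquisitionState()
state.on_audio("small letter".encode("utf-8"))
print(state.on_audio("turn on the air conditioner".encode("utf-8")))
```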
When the voice acquisition equipment is in an awakening state, the voice acquisition equipment can acquire first voice data triggered by a user and send the first voice data to the central control equipment;
after receiving first voice data triggered by a user and sent by voice acquisition equipment, the central control equipment identifies the first voice data and determines a control instruction which corresponds to the first voice data and is used for controlling the running state of the controlled equipment.
In some embodiments, the central control device identifies the first voice data and determines the control instruction according to the following modes:
in the mode 1, the central control device identifies text information corresponding to the first voice data, and performs semantic identification on the text information to determine a control instruction corresponding to the first voice data.
For example, assuming that the central control device receives first voice data triggered by a user and sent by the voice acquisition device as "turn on a water heater", the central control device recognizes text information corresponding to the voice data, performs semantic recognition on the obtained text information, and determines that a control instruction corresponding to the voice data "turn on the water heater" is "controlled device name: a water heater; the device response action: open ".
Mode 2: the central control device recognizes the text information corresponding to the first voice data and sends it to the remote server; the remote server performs semantic recognition on the text information, determines the control instruction corresponding to the first voice data, and sends the determined control instruction back to the central control device.
In this embodiment of the application, the server may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN, and big data and artificial intelligence platforms; it may also be an independent physical server that provides semantic recognition and voiceprint recognition services, or a server cluster or distributed system formed by multiple physical servers. The central control device and the server may be connected directly or indirectly in a wired or wireless manner, which is not limited here.
For example, if the user-triggered voice data sent by the voice acquisition device is "turn on the air conditioner", the central control device recognizes the text information corresponding to the voice data and sends it to the remote server, so that the remote server performs semantic recognition and determines that the control instruction corresponding to the voice data "turn on the air conditioner" is "controlled device name: air conditioner; device response action: on". After the remote server completes the recognition, the central control device receives the control instruction corresponding to the voice data returned by the remote server.
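A sketch of mode 2 on the central-control side, assuming the remote server exposes an HTTP endpoint that accepts the recognized text and returns the control instruction as JSON; the URL and field names are invented for illustration and are not specified by the application.

```python
# Sketch only: the transport protocol, endpoint and JSON layout are assumptions.
import json
import urllib.request

SEMANTIC_SERVER_URL = "http://semantic-server.example/recognize"  # placeholder


def fetch_control_instruction(text: str) -> dict:
    """Send recognized text to the remote server for semantic recognition
    (mode 2) and return the control instruction it determines."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        SEMANTIC_SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # e.g. {"device_name": "air conditioner", "response_action": "on"}
        return json.loads(response.read().decode("utf-8"))
```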
After the central control device recognizes the first voice data and determines the control instruction corresponding to the first voice data for controlling the operating state of the controlled device, if it determines that the first voice data includes the operating parameter to be used for the current run of the controlled device, the central control device controls the operating state of the controlled device according to the operating parameter corresponding to the first voice data;
for example, assuming that the first voice data input by the user is "turn on the air conditioner and set the temperature to 26 ℃", the central control device determines that the control instruction for controlling the operation state of the controlled device corresponding to the first voice data is "controlled device name: an air conditioner; the device response action: and determining that the operation parameter required to be used at the current operation corresponding to the first voice data is' temperature: 26 ℃, the central control equipment is controlled according to the determined operating parameters "temperature: 26 c "controls the operating state of the air conditioning apparatus.
If the central control device determines that the first voice data does not include the operation parameters needed to be used by the controlled device when the controlled device operates currently, whether the user is a voiceprint registered user is judged;
the process of determining whether a user is a voiceprint registered user is described in detail below.
Mode 1: the central control device determines whether the user is a voiceprint-registered user.
In the embodiment of the application, when a user registers a voiceprint, voice data needs to be pre-recorded through the voice acquisition device, and after receiving the voice data sent by the voice acquisition device and determining feature information in the voice data, the central control device pre-generates and stores a user identifier corresponding to the feature information.
The user identifier of the user may be a user ID, or may be other information for distinguishing different user identities, and is not limited specifically.
1. In some embodiments, the central control device may perform voiceprint recognition on the first voice data triggered by the user and collected by the voice collection device, and determine whether the user is a user who has been voiceprint registered.
In implementation, the central control device performs voiceprint recognition on the first voice data, determines the feature information in the first voice data, and checks, against the stored correspondence between feature information and user identifiers, whether that correspondence includes the feature information in the first voice data;
if the correspondence includes the feature information in the first voice data, the central control device takes the user identifier corresponding to that feature information as the user identifier of the user who triggered the first voice data, and determines that the user is a voiceprint-registered user;
if the correspondence does not include the feature information in the first voice data, the central control device obtains information indicating that the user cannot be identified, and determines that the user is not a voiceprint-registered user.
2. In other embodiments, the central control device may perform voiceprint recognition on the second voice data, which is acquired by the voice acquisition device and is triggered by the user, and determine whether the user is a user who has been voiceprint registered.
In implementation, the central control device performs voiceprint recognition on the second voice data triggered by the user for waking up the voice acquisition device, determines the feature information in the second voice data, and checks, against the stored correspondence between feature information and user identifiers, whether that correspondence includes the feature information in the second voice data;
if the correspondence includes the feature information in the second voice data, the central control device takes the user identifier corresponding to that feature information as the user identifier of the user who triggered the second voice data, and determines that the user is a voiceprint-registered user;
if the correspondence does not include the feature information in the second voice data, the central control device obtains information indicating that the user cannot be identified, and determines that the user is not a voiceprint-registered user.
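A sketch of this central-control-side check, assuming voiceprint features are compared against stored registrations with a simple similarity threshold; the feature extractor, the similarity function and the 0.8 threshold are all placeholder assumptions, not the patent's voiceprint algorithm.

```python
# Sketch only: feature extraction and similarity scoring are placeholder stubs,
# and the 0.8 threshold is an arbitrary illustrative value.
registered_voiceprints = {}  # user_id -> stored feature vector


def extract_voiceprint_features(voice_data: bytes):
    """Placeholder feature extractor; a real system would compute voiceprint
    features (e.g. speaker embeddings) from the audio here."""
    return [float(b) for b in voice_data[:8]]


def similarity(a, b) -> float:
    """Placeholder similarity score between two feature vectors."""
    if len(a) != len(b):
        return 0.0
    diff = sum(abs(x - y) for x, y in zip(a, b))
    return 1.0 / (1.0 + diff)


def identify_user(voice_data: bytes):
    """Return the user identifier if the speaker matches a stored voiceprint
    registration, otherwise None (the user cannot be identified)."""
    features = extract_voiceprint_features(voice_data)
    for user_id, stored_features in registered_voiceprints.items():
        if similarity(features, stored_features) >= 0.8:
            return user_id
    return None
```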
Mode 2: the remote server determines whether the user is a voiceprint-registered user.
In the embodiment of the application, when a user registers a voiceprint, voice data needs to be recorded in advance through the voice acquisition equipment, and after receiving the voice data sent by the voice acquisition equipment, the central control equipment sends the voice data to the remote server; after determining the characteristic information in the received voice data, the remote server generates and stores a user identifier corresponding to the characteristic information in advance.
The user identifier of the user may be a user ID, or may be other information for distinguishing different user identities, and is not limited specifically.
1. In some embodiments, the central control device may send the first voice data triggered by the user and acquired by the voice acquisition device to the remote server for voiceprint recognition, and determine whether the user is a user who has been voiceprint registered.
In implementation, the central control device sends the first voice data to the remote server for voiceprint recognition; the remote server determines the feature information in the received first voice data and checks, against the stored correspondence between feature information and user identifiers, whether that correspondence includes the feature information in the first voice data;
if the correspondence includes the feature information in the first voice data, the remote server takes the user identifier corresponding to that feature information as the user identifier of the user who triggered the first voice data, and after receiving the user identifier returned by the remote server, the central control device determines that the user is a voiceprint-registered user;
if the correspondence does not include the feature information in the first voice data, the remote server obtains information indicating that the user cannot be identified, and after receiving this information returned by the remote server, the central control device determines that the user is not a voiceprint-registered user.
2. In other embodiments, the central control device may send the second voice data triggered by the user and acquired by the voice acquisition device to the remote server for voiceprint recognition, and determine whether the user is a user who has been voiceprint registered.
In implementation, the central control device sends the second voice data used for waking up the voice acquisition device to the remote server for voiceprint recognition; the remote server determines the feature information in the received second voice data and checks, against the stored correspondence between feature information and user identifiers, whether that correspondence includes the feature information in the second voice data;
if the remote server determines that the correspondence includes the feature information in the second voice data, it takes the user identifier corresponding to that feature information as the user identifier of the user who triggered the second voice data, and after receiving the user identifier returned by the remote server, the central control device determines that the user is a voiceprint-registered user;
if the remote server determines that the correspondence does not include the feature information in the second voice data, it obtains information indicating that the user cannot be identified, and after receiving this information returned by the remote server, the central control device determines that the user is not a voiceprint-registered user.
In some embodiments of the present application, the central control device may determine, in different manners, the target operation parameters that need to be used when the controlled device is currently operating, for a user who has been registered with a voiceprint and a user who has not been registered with a voiceprint.
The following describes, for these two types of users and with reference to specific embodiments, how the central control device determines the target operating parameter to be used for the current run of the controlled device when the first voice data does not include that operating parameter:
type 1, user is a user who has registered for voiceprint.
In some embodiments, after determining the user identifier of the user, the central control device determines whether the stored correspondence between the user identifier corresponding to the controlled device and the operating parameter includes the user identifier of the user;
if so, the central control equipment takes the operation parameters corresponding to the user identification of the user as target operation parameters needed to be used by the controlled equipment in the current operation;
for example, assuming that the user ID is a user ID, the correspondence between the stored user ID corresponding to the air conditioning equipment and the operation parameter is shown in table 1:
table 1, user identification and operation parameter corresponding relation table.
Figure BDA0002461346770000131
Wherein: the operation parameter corresponding to the user with the user ID of 2000 is 20 ℃; the operation parameter corresponding to the user with the user ID of 2001 is 23 ℃; the operation parameter corresponding to the user having the user ID of "2002" is "21 ℃. Assuming that the user is identified and the user ID of the user is "2001", the target operation parameter corresponding to the user is determined to be "23 ℃.
If not, the central control device takes the operation parameter of the controlled device in the last operation as the operation parameter needed by the controlled device in the current operation, or the central control device takes the operation parameter preset by the controlled device as the operation parameter needed by the controlled device in the current operation.
For example, assume the stored correspondence between user identifiers corresponding to the air conditioner and operating parameters includes: the operating parameter corresponding to the user with user ID "2004" is "15 ℃", and the operating parameter corresponding to the user with user ID "2005" is "19 ℃". If the user ID of the user is determined to be "2006", the central control device determines that the stored correspondence does not include a target operating parameter for user identifier "2006", and takes the air conditioner's most recent operating parameter, "12 ℃", as the operating parameter to be used for the current run of the controlled device;
for another example, assume that the stored correspondence between the user identifier and the operating parameter for the water heater device includes: the operation parameter corresponding to the user with the user ID of 2010 is 70 ℃; the operation parameter corresponding to the user with the user ID of 2011 is 75 ℃, and the operation parameter of the preset water heater equipment is 73 ℃; and assuming that the user ID of the user is determined to be "2013", the central control device determines that the stored corresponding relationship between the user identifier corresponding to the water heater device and the operation parameter does not include the target operation parameter corresponding to the user with the user identifier of "2013", and the central control device takes the preset operation parameter "73 ℃" of the air conditioner device as the operation parameter required to be used when the current water heater device operates.
It should be noted that, in the correspondence between the user identifier and the operation parameter corresponding to the controlled device stored in the embodiment of the present application, the operation parameter corresponding to each user identifier is determined according to the operation parameter of the controlled device when the controlled device is used by the user in the history.
In some embodiments, the correspondence between the user identifier and the operating parameter may be generated according to the following manner:
in the mode 1, the operation parameter corresponding to each user identifier in the corresponding relation is the operation parameter with the largest use frequency when the user uses the controlled equipment historically.
For example, when the user with the user ID "2001" uses the controlled device air conditioner historically, the historical operating parameters of the air conditioner are "23 ℃, 24 ℃, 25 ℃ and 24 ℃, and then the central control device takes the operating parameter" 24 ℃ with the largest number of times of use by the user as the operating parameter corresponding to the user with the user ID "2001" in the corresponding relationship; when the user with the user ID of "2002" uses the controlled device air conditioning equipment historically, the historical operating parameters of the air conditioning equipment are "20 ℃, 21 ℃, 24 ℃, 20 ℃ and 23 ℃, and then the central control device takes the operating parameter" 20 ℃ with the largest user use frequency as the operating parameter corresponding to the user with the user ID of "2002" in the corresponding relationship.
Mode 2: the operating parameter corresponding to each user identifier in the correspondence is the average of the operating parameters used when that user historically used the controlled device.
For example, when the user with user ID "2001" historically used the controlled air conditioner, its operating parameters were "15 ℃, 16 ℃, 14 ℃ and 16 ℃", so the central control device takes the rounded average of these historical parameters, "15 ℃", as the operating parameter of the air conditioner corresponding to the user with user ID "2001" in the correspondence; when the user with user ID "2002" historically used the air conditioner, its operating parameters were "12 ℃, 14 ℃ and 14 ℃", so the central control device takes the rounded average, "13 ℃", as the operating parameter of the air conditioner corresponding to the user with user ID "2002" in the correspondence.
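The two derivation modes can be sketched as follows; rounding the average to a whole degree is an assumption, since the application does not say how fractional averages are handled.

```python
# Sketch only: rounding behaviour for mode 2 is an assumption.
from collections import Counter
from statistics import mean


def preferred_by_frequency(history):
    """Mode 1: the operating parameter the user set most often."""
    return Counter(history).most_common(1)[0][0]


def preferred_by_average(history):
    """Mode 2: the (rounded) average of the user's historical parameters."""
    return round(mean(history))


print(preferred_by_frequency([23, 24, 25, 24]))  # -> 24
print(preferred_by_average([15, 16, 14, 16]))    # -> 15
```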
Type 2, user is a user who is not voiceprint registered.
In some embodiments, the central control device takes the operating parameter used during the controlled device's last run as the target operating parameter to be used for the current run.
For example, when the user is a user who is not registered with a voiceprint, assuming that the controlled device is an air conditioner and the last operation parameter of the air conditioner is "21 ℃", the central control device takes the last operation parameter "21 ℃" of the air conditioner as the operation parameter that needs to be used when the current air conditioner is operated.
In other embodiments, the central control device obtains an operation parameter preset by the controlled device, and the operation parameter is used as an operation parameter required by the controlled device when the controlled device is currently operated.
For example, when the user is a user who is not registered with a voiceprint, assuming that the controlled device is an air conditioning device and the preset operating parameter of the air conditioning device is "20 ℃", the central control device takes the preset operating parameter "20 ℃" of the air conditioning device as the operating parameter that needs to be used when the current air conditioning device operates.
After the central control device determines a control instruction for controlling the running state of the controlled device corresponding to the first voice data and acquires a target running parameter which needs to be used when the controlled device runs currently, the central control device generates control information for controlling the controlled device;
the central control device sends operation information containing the operation parameters and the control instructions to the controlled device, the controlled device executes the operation corresponding to the control instructions after receiving the operation information, and determines the operation state according to the operation parameters.
In implementation, the central control device sends operation information including the control instruction and the currently-used operation parameters to the controlled device according to the address information corresponding to the controlled device, and the controlled device executes the operation corresponding to the control instruction after receiving the operation information and determines the operation state according to the currently-used operation parameters.
In some embodiments, the address information corresponding to the controlled device is preset as an IP address of the controlled device.
For example, the control instruction included in the operation information is "controlled device name: air conditioner; device response action: on", the target operating parameter corresponding to the user with user ID "2001" is "26 ℃", and the central control device sends the operation information to the air conditioner according to the air conditioner's IP address;
after receiving the operation information, the air conditioner sets its operating state to the on state and sets the temperature parameter for the current run to 26 ℃.
After the central control device sends the operation information to the controlled device, the central control device also generates corresponding voice information to be played; and the central control equipment sends the generated voice information to be played to the voice acquisition equipment for playing.
For example, if the initial operating state of the controlled device is off, then after the central control device sends the operation information containing the turn-on instruction to the controlled device, it generates the corresponding to-be-played voice information, for example "the device has been turned on and the operating state has been adjusted", and sends the generated to-be-played voice information to the voice acquisition device for playback.
For another example, after the central control device sends the operation information containing the turn-on instruction to the controlled device, it generates the corresponding to-be-played voice information, for example "the device is now running and the operating state has been adjusted according to your requirements", and sends the generated to-be-played voice information to the voice acquisition device for playback.
In the embodiment of the application, after the operation information is sent to the controlled device, in the process of running the controlled device, in order to meet different requirements of the user in different time periods, the running parameters corresponding to the user identifier of the user can be updated in the stored corresponding relationship between the user identifier corresponding to the controlled device and the running parameters.
In some embodiments, if the central control device determines that the operation parameter of the controlled device changes in the process of using the controlled device by the user who has been registered with the voiceprint, the operation parameter corresponding to the user identifier of the user in the correspondence relationship between the user identifier and the operation parameter is updated according to the changed operation parameter.
In implementation, the central control device updates the operation parameters corresponding to the user identifier of the user in the corresponding relationship between the user identifier and the operation parameters according to the changed operation parameters and the operation parameters of the controlled device when the user uses the controlled device historically.
For example, assuming the controlled device is an air conditioner, and the operating parameter of the air conditioner changes from 25 ℃ to 26 ℃ while the user with user ID "2000" is using it, the central control device updates the operating parameter corresponding to the user with user ID "2000" in the correspondence between the air conditioner's user identifiers and operating parameters, according to the changed operating parameter "26 ℃" and the operating parameters "26 ℃, 24 ℃, 26 ℃ and 25 ℃" of the controlled device when the user historically used it.
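A sketch of this update step, recomputing the stored value over the extended history with the most-frequently-used rule of mode 1; the choice of rule here is an assumption, and an average-based update would work the same way.

```python
# Sketch only: the update rule (most-frequent vs. average) is illustrative.
from collections import Counter


def update_preference(param_table, history, user_id, new_parameter):
    """When the user changes the device's operating parameter during use,
    append it to that user's history and refresh the stored preference."""
    history.setdefault(user_id, []).append(new_parameter)
    param_table[user_id] = Counter(history[user_id]).most_common(1)[0][0]


# Example: user "2000" changes the air conditioner from 25 to 26 degrees.
history = {"2000": [26, 24, 26, 25]}
param_table = {"2000": 26}
update_preference(param_table, history, "2000", 26)
print(param_table["2000"])  # -> 26
```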
In the embodiment of the application, after the central control device recognizes the voice data triggered by the user and determines the control instruction corresponding to the first voice data for controlling the operating state of the controlled device, it determines the target operating parameter corresponding to the user from the stored correspondence between the controlled device's user identifiers and operating parameters, according to the user identifier of the user who triggered the voice data, so that the controlled device executes the operation corresponding to the control instruction and determines its operating state according to the determined target operating parameter. Because the target operating parameter is determined from the operating parameters of the controlled device when the user historically used it, and the stored correspondence can be dynamically updated according to the user's needs at different times, the operating state of the controlled device can be controlled according to the operating parameter the user prefers, which reduces the probability that the user has to readjust the operating parameter and improves the user experience.
Fig. 2 illustrates a complete flow chart of a voice control method provided by an embodiment of the present application, and as shown in fig. 2, the flow chart may include the following steps:
step S201, voice acquisition equipment acquires first voice data triggered by a user and sends the first voice data to central control equipment;
step S202, the central control equipment receives first voice data triggered by a user and sent by the voice acquisition equipment and identifies text information corresponding to the first voice data;
step S203, sending the text information to a remote server;
step S204, the far-end server carries out semantic recognition on the text information to determine a control instruction corresponding to the first voice data;
step S205, returning the control instruction to the central control equipment;
step S206, the central control device receives a control instruction corresponding to the first voice data returned by the server, and determines that the first voice data does not include the operation parameters needed to be used when the controlled device is currently operated;
step S207, determining a user identifier of the user, and determining a target operation parameter corresponding to the user identifier of the user according to the stored corresponding relation between the user identifier corresponding to the controlled device and the operation parameter;
the operation parameters corresponding to the user identifications in the corresponding relation are determined according to the operation parameters of the controlled equipment when the controlled equipment is used by each user in history;
step S208, sending the operation information containing the target operation parameters and the control instructions to the controlled equipment;
step S209, the controlled device executes the operation corresponding to the control instruction after receiving the operation information, and determines the operation state according to the target operation parameter;
step S210, the central control device obtains operation parameters of the controlled device in the operation process;
step S211, if the central control device determines that the operation parameter of the controlled device changes during the process of using the controlled device by the user, the central control device updates the operation parameter corresponding to the user identifier of the user in the correspondence relationship between the user identifier and the operation parameter according to the changed operation parameter.
Based on the same inventive concept, in the embodiment of the present application, a central control device is provided, as shown in fig. 3, and at least includes: a receiving unit 301, a recognizing unit 302, and a transmitting unit 303;
the receiving unit 301 is configured to receive first voice data triggered by a user and sent by a voice acquisition device;
the identifying unit 302 is configured to identify the first voice data, determine a control instruction corresponding to the first voice data and used for controlling an operation state of a controlled device, and acquire a target operation parameter that needs to be used when the controlled device corresponding to the user operates;
wherein the target operating parameter is determined according to the operating parameter of the controlled device when the user uses the controlled device historically;
the sending unit 303 is configured to send operation information including the target operation parameter and the control instruction to the controlled device, so that the controlled device executes an operation corresponding to the control instruction after receiving the operation information, and determines an operation state according to the target operation parameter.
An embodiment of the present application provides a voice control apparatus, and as shown in fig. 4, the voice control apparatus 400 includes: a transceiver unit 401 and a processor 402;
the transceiver unit 401 is configured to receive first voice data triggered by a user and sent by a voice acquisition device;
the processor 402 is configured to recognize the first voice data, determine a control instruction corresponding to the first voice data and used for controlling an operation state of a controlled device, and acquire a target operation parameter that needs to be used when the controlled device corresponding to the user operates; wherein the target operating parameter is determined according to the operating parameter of the controlled device when the user uses the controlled device historically;
and sending operation information containing the target operation parameters and the control instructions to the controlled equipment, so that the controlled equipment executes the operation corresponding to the control instructions after receiving the operation information, and determines the operation state according to the target operation parameters.
In some embodiments of the present application, when recognizing the first voice data and determining a control instruction corresponding to the first voice data and used for controlling an operation state of a controlled device, the processor is configured to:
recognizing text information corresponding to the first voice data, and sending the text information to a remote server so that the remote server performs semantic recognition on the text information to determine a control instruction corresponding to the first voice data;
and receiving a control instruction corresponding to the first voice data returned by the server.
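A minimal Python sketch of this two-stage pipeline (local text recognition followed by remote semantic recognition) is given below; the server address, JSON payload format, and speech_to_text helper are assumptions, not details specified by the application.

```python
# Hypothetical two-stage pipeline: local speech-to-text, then remote semantic recognition.
import json
import urllib.request

SEMANTIC_SERVER_URL = "http://example.invalid/semantic"  # placeholder address


def speech_to_text(first_voice_data: bytes) -> str:
    # Stand-in for the local speech recognition step; a real system would run ASR here.
    return "turn on the air conditioner"


def get_control_instruction(first_voice_data: bytes) -> dict:
    text = speech_to_text(first_voice_data)
    request = urllib.request.Request(
        SEMANTIC_SERVER_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # The remote server performs semantic recognition and returns the control instruction.
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```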
In some embodiments of the present application, when obtaining a target operation parameter that needs to be used when a current controlled device corresponding to the user operates, the processor is configured to:
determining a user identifier of the user, and determining a target operation parameter corresponding to the user identifier of the user according to a stored correspondence between user identifiers corresponding to the controlled device and operation parameters; wherein the operation parameter corresponding to each user identifier in the correspondence is determined according to the operation parameters of the controlled device when that user has historically used the controlled device;
and/or
if the correspondence between user identifiers and operation parameters does not include a target operation parameter corresponding to the user identifier of the user, acquiring the operation parameters of the controlled device during its last operation as the operation parameters to be used when the controlled device operates.
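The lookup with its fallback can be summarized in a short Python sketch; the dictionary-based correspondence and the function name get_target_parameters are illustrative assumptions.

```python
# Minimal sketch of the parameter lookup described above; structures and names are assumed.
def get_target_parameters(user_id, user_param_map, last_run_params):
    """Return the parameters the controlled device should run with for this user.

    user_param_map:  stored correspondence {user_id: parameters}, built from each
                     user's historical use of the controlled device.
    last_run_params: parameters the device used during its most recent run.
    """
    if user_id in user_param_map:
        return user_param_map[user_id]
    # Fallback: no history for this user, so reuse the device's last-run parameters.
    return last_run_params


# Example: "user_a" has no stored history, so the last-run parameters are returned.
params = get_target_parameters(
    "user_a",
    {"user_b": {"temperature_c": 24}},
    {"temperature_c": 26},
)  # -> {"temperature_c": 26}
```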
In some embodiments of the present application, in determining the user identification of the user, the processor is configured to:
performing voiceprint recognition on the first voice data, and determining a user identifier of a user triggering the first voice data; or
performing voiceprint recognition on second voice data used to wake up the voice acquisition device, and determining a user identifier of the user triggering the second voice data.
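Purely for illustration, the following Python sketch matches a voiceprint extracted from either the first voice data or the wake-up (second) voice data against enrolled users; the extract_voiceprint stand-in and the cosine-similarity threshold are assumptions, not the application's method.

```python
# Illustrative voiceprint matching; the embedding function and threshold are assumed.
import numpy as np


def extract_voiceprint(voice_data: bytes) -> np.ndarray:
    # Stand-in for a real voiceprint (speaker-embedding) model.
    return np.frombuffer(voice_data[:64].ljust(64, b"\0"), dtype=np.uint8).astype(float)


def identify_user(voice_data: bytes, enrolled: dict, threshold: float = 0.8):
    """Match the voice data (first or wake-up voice data) against enrolled voiceprints."""
    probe = extract_voiceprint(voice_data)
    best_user, best_score = None, 0.0
    for user_id, reference in enrolled.items():
        denom = np.linalg.norm(probe) * np.linalg.norm(reference)
        score = float(np.dot(probe, reference) / denom) if denom else 0.0
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user if best_score >= threshold else None
```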
In some embodiments of the present application, after sending the operation information including the target operation parameter and the control instruction to the controlled device, the processor is further configured to:
if it is determined that an operation parameter of the controlled device changed while the user was using the controlled device, update the operation parameter corresponding to the user identifier of the user in the correspondence between user identifiers and operation parameters according to the changed operation parameter.
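A minimal sketch of this update step, under the same dictionary-based assumptions as above, is the following; the function name and data layout are illustrative only.

```python
# Sketch of the update step: if the user changed the device's parameters while using it,
# persist the new values under that user's identifier. All names are illustrative.
def update_user_parameters(user_id, observed_params, user_param_map):
    stored = user_param_map.get(user_id, {})
    if observed_params != stored:
        # The user adjusted the device during this session; remember the new preference.
        user_param_map[user_id] = dict(observed_params)
    return user_param_map


# Example: the user turned the temperature down from 24 to 22 during use.
table = {"user_b": {"temperature_c": 24}}
update_user_parameters("user_b", {"temperature_c": 22}, table)
# table is now {"user_b": {"temperature_c": 22}}
```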
Based on the same inventive concept, an embodiment of the present application provides a voice control method. Because the method corresponds to the central control device in the voice control system of the embodiments of the present application, and the principle by which the method solves the problem is similar to that of the system, the implementation of the method may refer to the implementation of the system, and repeated details are not described again.
As shown in fig. 5, a method for voice control provided in an embodiment of the present application includes:
step S501, receiving first voice data triggered by a user and sent by voice acquisition equipment;
step S502, recognizing the first voice data, determining a control instruction corresponding to the first voice data and used for controlling the operation state of the controlled device, and acquiring a target operation parameter that needs to be used when the controlled device corresponding to the user operates;
wherein the target operation parameter is determined according to the operation parameters of the controlled device when the user has historically used the controlled device;
step S503, sending the operation information including the target operation parameter and the control instruction to the controlled device, so that the controlled device executes the operation corresponding to the control instruction after receiving the operation information, and determines the operation state according to the target operation parameter.
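For orientation only, the following self-contained Python sketch stitches steps S501-S503 together; the recognize and identify callables stand in for the speech, semantic, and voiceprint components described above, and all names are assumptions rather than part of the application.

```python
# End-to-end sketch of steps S501-S503 under the same assumptions as the snippets above.
class FakeControlledDevice:
    def __init__(self):
        self.last_message = None

    def apply(self, message):
        # The real device would execute the instruction and adopt the parameters.
        self.last_message = message


def voice_control(first_voice_data, recognize, identify, user_param_map,
                  last_run_params, device):
    instruction = recognize(first_voice_data)                 # S502: control instruction
    user_id = identify(first_voice_data)                      # S502: who is speaking
    params = user_param_map.get(user_id, last_run_params)     # S502: target parameters
    device.apply({"instruction": instruction, "parameters": params})  # S503


# Example wiring with trivial stand-ins:
device = FakeControlledDevice()
voice_control(
    b"raw-audio-bytes",
    recognize=lambda v: "turn_on_ac",
    identify=lambda v: "user_a",
    user_param_map={"user_a": {"temperature_c": 25}},
    last_run_params={"temperature_c": 26},
    device=device,
)
# device.last_message == {"instruction": "turn_on_ac", "parameters": {"temperature_c": 25}}
```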
An embodiment of the present application further provides a storage medium; when instructions in the storage medium are executed by a processor, the processor can perform any of the voice control methods implemented by the central control device in the above flows.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A central control device, comprising: a transceiver unit and a processor;
the transceiver unit is configured to receive first voice data triggered by a user and sent by a voice acquisition device;
the processor is configured to identify the first voice data, determine a control instruction corresponding to the first voice data and used for controlling the operation state of the controlled device, and acquire a target operation parameter required to be used when the controlled device corresponding to the user operates; wherein the target operation parameter is determined according to the operation parameters of the controlled device when the user has historically used the controlled device;
and sending operation information containing the target operation parameters and the control instructions to the controlled equipment, so that the controlled equipment executes the operation corresponding to the control instructions after receiving the operation information, and determines the operation state according to the target operation parameters.
2. The central control device according to claim 1, wherein, when recognizing the first voice data and determining the control instruction corresponding to the first voice data and used for controlling the operation state of the controlled device, the processor is configured to:
recognizing text information corresponding to the first voice data, and sending the text information to a remote server so that the remote server performs semantic recognition on the text information to determine a control instruction corresponding to the first voice data;
and receiving a control instruction corresponding to the first voice data returned by the server.
3. The central control device according to claim 1, wherein, when obtaining the target operation parameter that needs to be used when the controlled device corresponding to the user operates, the processor is configured to:
determining a user identifier of the user, and determining a target operation parameter corresponding to the user identifier of the user according to a stored correspondence between user identifiers corresponding to the controlled device and operation parameters; wherein the operation parameter corresponding to each user identifier in the correspondence is determined according to the operation parameters of the controlled device when that user has historically used the controlled device;
and/or
if the correspondence between user identifiers and operation parameters does not include a target operation parameter corresponding to the user identifier of the user, acquiring the operation parameters of the controlled device during its last operation as the operation parameters to be used when the controlled device operates.
4. The central control device of claim 3, wherein in determining the user identification of the user, the processor is configured to:
performing voiceprint recognition on the first voice data, and determining a user identifier of a user triggering the first voice data; or
performing voiceprint recognition on second voice data used to wake up the voice acquisition device, and determining a user identifier of the user triggering the second voice data.
5. The central control device of claim 1, wherein after transmitting the operational information including the target operating parameters and the control instructions to the controlled device, the processor is further configured to:
if it is determined that an operation parameter of the controlled device changed while the user was using the controlled device, update the operation parameter corresponding to the user identifier of the user in the correspondence between user identifiers and operation parameters according to the changed operation parameter.
6. A voice control method, comprising:
receiving first voice data triggered by a user and sent by voice acquisition equipment;
identifying the first voice data, determining a control instruction corresponding to the first voice data and used for controlling the operation state of the controlled device, and acquiring a target operation parameter that needs to be used when the controlled device corresponding to the user operates; wherein the target operation parameter is determined according to the operation parameters of the controlled device when the user has historically used the controlled device;
and sending operation information containing the target operation parameters and the control instructions to the controlled equipment, so that the controlled equipment executes the operation corresponding to the control instructions after receiving the operation information, and determines the operation state according to the target operation parameters.
7. The method of claim 6, wherein the recognizing the first voice data and determining the control instruction corresponding to the first voice data for controlling the operation state of the controlled device comprises:
recognizing text information corresponding to the first voice data, and sending the text information to a remote server so that the remote server performs semantic recognition on the text information to determine a control instruction corresponding to the first voice data;
and receiving a control instruction corresponding to the first voice data returned by the server.
8. The method of claim 6, wherein the obtaining of the target operation parameter that needs to be used when the controlled device corresponding to the user operates comprises:
determining a user identifier of the user, and determining a target operation parameter corresponding to the user identifier of the user according to a stored correspondence between user identifiers corresponding to the controlled device and operation parameters; wherein the operation parameter corresponding to each user identifier in the correspondence is determined according to the operation parameters of the controlled device when that user has historically used the controlled device;
and/or
if the correspondence between user identifiers and operation parameters does not include a target operation parameter corresponding to the user identifier of the user, acquiring the operation parameters of the controlled device during its last operation as the operation parameters to be used when the controlled device operates.
9. The method of claim 8, wherein said determining a user identification of said user comprises:
performing voiceprint recognition on the first voice data, and determining a user identifier of a user triggering the first voice data; or
performing voiceprint recognition on second voice data used to wake up the voice acquisition device, and determining a user identifier of the user triggering the second voice data.
10. The method of claim 6, wherein after transmitting the operational information including the target operating parameters and the control instructions to the controlled device, the method further comprises:
if it is determined that an operation parameter of the controlled device changed while the user was using the controlled device, updating the operation parameter corresponding to the user identifier of the user in the correspondence between user identifiers and operation parameters according to the changed operation parameter.
CN202010320904.8A 2020-04-22 2020-04-22 Voice control method and central control equipment Pending CN111524514A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010320904.8A CN111524514A (en) 2020-04-22 2020-04-22 Voice control method and central control equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010320904.8A CN111524514A (en) 2020-04-22 2020-04-22 Voice control method and central control equipment

Publications (1)

Publication Number Publication Date
CN111524514A true CN111524514A (en) 2020-08-11

Family

ID=71902964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010320904.8A Pending CN111524514A (en) 2020-04-22 2020-04-22 Voice control method and central control equipment

Country Status (1)

Country Link
CN (1) CN111524514A (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978957A (en) * 2014-04-14 2015-10-14 美的集团股份有限公司 Voice control method and system based on voiceprint identification
CN105444332A (en) * 2014-08-19 2016-03-30 青岛海尔智能家电科技有限公司 Equipment voice control method and device
CN104575504A (en) * 2014-12-24 2015-04-29 上海师范大学 Method for personalized television voice wake-up by voiceprint and voice identification
CN105206275A (en) * 2015-08-31 2015-12-30 小米科技有限责任公司 Device control method, apparatus and terminal
US20180308477A1 (en) * 2016-01-07 2018-10-25 Sony Corporation Control device, display device, method, and program
CN107809667A (en) * 2017-10-26 2018-03-16 深圳创维-Rgb电子有限公司 Television voice exchange method, interactive voice control device and storage medium
CN108320753A (en) * 2018-01-22 2018-07-24 珠海格力电器股份有限公司 Control method, the device and system of electrical equipment
CN108958810A (en) * 2018-02-09 2018-12-07 北京猎户星空科技有限公司 A kind of user identification method based on vocal print, device and equipment
CN108735217A (en) * 2018-06-19 2018-11-02 Oppo广东移动通信有限公司 Control method of electronic device, device, storage medium and electronic equipment
CN109243448A (en) * 2018-10-16 2019-01-18 珠海格力电器股份有限公司 A kind of sound control method and device
CN109379261A (en) * 2018-11-30 2019-02-22 北京小米智能科技有限公司 Control method, device, system, equipment and the storage medium of smart machine
CN109584874A (en) * 2018-12-15 2019-04-05 深圳壹账通智能科技有限公司 Electrical equipment control method, device, electrical equipment and storage medium
CN110706697A (en) * 2019-09-18 2020-01-17 云知声智能科技股份有限公司 Voice control method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112201237A (en) * 2020-09-23 2021-01-08 安徽中科新辰技术有限公司 Method for realizing voice centralized control of multimedia equipment in command hall based on COM port
CN112201237B (en) * 2020-09-23 2024-04-19 安徽中科新辰技术有限公司 Method for realizing voice centralized control command hall multimedia equipment based on COM port
CN115312051A (en) * 2022-07-07 2022-11-08 青岛海尔科技有限公司 Voice control method and device for equipment, storage medium and electronic device

Similar Documents

Publication Publication Date Title
WO2021093449A1 (en) Wakeup word detection method and apparatus employing artificial intelligence, device, and medium
WO2019141028A1 (en) Control method, device and system for electrical device
CN105654949B (en) A kind of voice awakening method and device
CN107667318A (en) Dialog interface technology for system control
KR20200012928A (en) Customizable wake-up voice commands
CN106647311B (en) Intelligent central control system, equipment, server and intelligent equipment control method
CN108170034B (en) Intelligent device control method and device, computer device and storage medium
CN105444332A (en) Equipment voice control method and device
CN112051743A (en) Device control method, conflict processing method, corresponding devices and electronic device
CN111640433A (en) Voice interaction method, storage medium, electronic equipment and intelligent home system
CN111192574A (en) Intelligent voice interaction method, mobile terminal and computer readable storage medium
CN111522909B (en) Voice interaction method and server
CN110534102B (en) Voice wake-up method, device, equipment and medium
US10789961B2 (en) Apparatus and method for predicting/recognizing occurrence of personal concerned context
CN109377995B (en) Method and device for controlling equipment
CN111524514A (en) Voice control method and central control equipment
CN112735418B (en) Voice interaction processing method, device, terminal and storage medium
CN111197841A (en) Control method, control device, remote control terminal, air conditioner, server and storage medium
CN111178081B (en) Semantic recognition method, server, electronic device and computer storage medium
CN111413877A (en) Method and device for controlling household appliance
KR20200074690A (en) Electonic device and Method for controlling the electronic device thereof
CN107742520B (en) Voice control method, device and system
CN110579977A (en) control method and device of electrical equipment and computer readable storage medium
CN114676689A (en) Sentence text recognition method and device, storage medium and electronic device
CN111667840A (en) Robot knowledge graph node updating method based on voiceprint recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination