CN111599353A - Equipment control method and device based on voice - Google Patents

Equipment control method and device based on voice Download PDF

Info

Publication number
CN111599353A
Authority
CN
China
Prior art keywords
voice control
user
control instruction
information
controlled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010499951.3A
Other languages
Chinese (zh)
Inventor
乔力 (Qiao Li)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruying Intelligent Technology Co ltd
Original Assignee
Beijing Ruying Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruying Intelligent Technology Co ltd
Priority to CN202010499951.3A
Publication of CN111599353A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 15/00 Systems controlled by a computer
    • G05B 15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/20 Pc systems
    • G05B 2219/26 Pc applications
    • G05B 2219/2642 Domotique, domestic, home control, automation, smart house
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The disclosure relates to a voice-based device control method and apparatus. The method includes the following steps: acquiring actual operation information generated when a user controls each voice-controllable device in daily life; training a machine learning model on the actual operation information to obtain a training model; receiving a voice control instruction; determining the target device the user intends to control according to the training model and the voice control instruction; and sending the voice control instruction to the target device. Because the machine learning model is trained on actively collected records of the user's actual operations on each device, the user does not need to enter parameters manually, which improves the accuracy of the training model and, in turn, the accuracy with which the target device is determined.

Description

Equipment control method and device based on voice
Technical Field
The disclosure relates to the technical field of smart home, in particular to a voice-based equipment control method and device.
Background
Controlling devices with voice commands is a growing trend. However, a single voice control command may match several controllable devices at once, while the user's real intention is not to control all of them simultaneously but to control one particular target device among them. A specific rule is therefore needed to resolve the command to the intended device.
At present, a user can define rules to achieve this, but the rule conditions that must be defined are numerous and varied, which places a heavy burden on the user. Moreover, once any rule condition is defined incorrectly, the device cannot be controlled reliably, resulting in a poor user experience.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a voice-based device control method and apparatus. The technical solution is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a voice-based device control method, including:
acquiring actual operation information generated when a user controls each voice-controllable device in daily life;
training a machine learning model according to the actual operation information to obtain a training model;
receiving a voice control instruction;
determining target equipment to be controlled by a user according to the training model and the voice control instruction;
and sending the voice control instruction to the target equipment.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects. The voice-based device control method of the present disclosure includes: acquiring actual operation information generated when a user controls each voice-controllable device in daily life; training a machine learning model on the actual operation information to obtain a training model; receiving a voice control instruction; determining the target device the user intends to control according to the training model and the voice control instruction; and sending the voice control instruction to the target device. Because the machine learning model is trained on actively collected records of the user's actual operations on each device, the user does not need to enter parameters manually, which improves the accuracy of the training model and, in turn, the accuracy with which the target device is determined.
In one embodiment, the actual operation information includes at least one of the following information:
the method comprises the steps of a voice control instruction of a user, equipment to be controlled by the voice control instruction, time information, temperature information, position information of the user and position information of the equipment.
In one embodiment, the determining a target device to be controlled by the user according to the training model and the voice control instruction includes:
detecting whether the voice control instruction contains a temperature-related keyword;
when a temperature-related keyword is detected in the voice control instruction, acquiring current temperature information;
determining, according to the training model, the voice control instruction, and the current temperature information, the target device to be controlled by the user and a temperature parameter of the target device;
the sending the voice control instruction to the target device includes:
sending the temperature parameter to the target device.
In one embodiment, the method further comprises:
when it is detected that the voice control instruction contains no temperature-related keyword, acquiring current time information;
acquiring current position information of a user;
and determining target equipment to be controlled by the user according to the training model, the voice control instruction, the current time information and the current position information of the user.
According to a second aspect of the embodiments of the present disclosure, there is provided a voice-based device control apparatus including:
the acquisition module is used for acquiring the actual operation information of each device which can be controlled by voice in daily life of a user;
the training module is used for training a machine learning model according to the actual operation information to obtain a training model;
the receiving module is used for receiving a voice control instruction;
the determining module is used for determining target equipment to be controlled by a user according to the training model and the voice control instruction;
and the sending module is used for sending the voice control instruction to the target equipment.
In one embodiment, the actual operation information includes at least one of the following information:
the method comprises the steps of a voice control instruction of a user, equipment to be controlled by the voice control instruction, time information, temperature information, position information of the user and position information of the equipment.
In one embodiment, the determining module comprises: the detection submodule, the first obtaining submodule and the first determining submodule, and the sending module includes: a sending submodule;
the detection submodule is used for detecting whether the voice control instruction has keywords related to the temperature;
the first obtaining submodule is used for obtaining current temperature information when it is detected that a temperature-related keyword exists in the voice control instruction;
the first determining submodule is used for determining target equipment to be controlled by a user and temperature parameters of the target equipment according to the training model, the voice control instruction and the current temperature information;
and the sending submodule is used for sending the temperature parameter to the target equipment.
In one embodiment, the determining module further comprises:
the second obtaining submodule is used for obtaining current time information when it is detected that no temperature-related keyword exists in the voice control instruction;
the third obtaining submodule is used for obtaining the current position information of the user;
and the second determining submodule is used for determining the target device to be controlled by the user according to the training model, the voice control instruction, the current time information, and the current position information of the user.
According to a third aspect of the embodiments of the present disclosure, there is provided a voice-based device control apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring actual operation information generated when a user controls each voice-controllable device in daily life;
training a machine learning model according to the actual operation information to obtain a training model;
receiving a voice control instruction;
determining target equipment to be controlled by a user according to the training model and the voice control instruction;
and sending the voice control instruction to the target equipment.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of the first aspects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a voice-based device control method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a voice-based device control method according to an example embodiment.
FIG. 3 is a model diagram of a method for controlling a speech-based device according to an exemplary embodiment.
Fig. 4 is a schematic structural diagram of a voice-based device control apparatus provided according to an exemplary embodiment.
Fig. 5 is a schematic structural diagram of a voice-based device control apparatus provided according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating a voice-based device control apparatus 90 according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a voice-based device control method according to an exemplary embodiment, as shown in fig. 1, the method including the following steps S101-S105:
in step S101, actual operation information of each device that the user controls by voice in daily life is acquired;
in one embodiment, the actual operation information includes at least one of the following information: the voice control command of the user, the equipment to be controlled by the voice control command, the time information, the temperature information, the position information of the user and the position information of the equipment.
For example, at eight o'clock in the evening, the user issues the voice control instruction "turn on the light" and the headlight of the living room is turned on. The actual operation information obtained at this time includes: the user's voice control instruction "turn on the light", the controlled device being the headlight of the living room, and the current time point being eight o'clock in the evening.
The reason for recording this information is that a home may contain many lamps. Without such records, the next time the voice control instruction "turn on the light" is received, all lamps matched by the instruction might be turned on at the same time. With these records, when the instruction "turn on the light" is received again, the current time point is obtained; if it is eight o'clock in the evening, only the headlight of the living room is turned on, and the other lamps are left off.
The actual operation information may be collected as the user operates each device, or each device may record the information after being operated, with the records acquired from the device afterwards.
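As a purely illustrative sketch (not part of the patent), the actual operation information described in step S101 could be represented as a record such as the following; the class and field names are assumptions chosen to mirror the information listed above:

```python
from dataclasses import dataclass

@dataclass
class OperationRecord:
    """One observed user operation on a voice-controllable device.

    Fields mirror the embodiment's list: instruction, controlled device,
    time, temperature, and the locations of user and device.
    """
    command: str          # the user's voice control instruction
    target_device: str    # the device actually controlled
    hour: int             # time information (hour of day, 0-23)
    temperature: float    # ambient temperature when the command was issued
    user_location: str    # room the user was in
    device_location: str  # room the device is in

# the "eight o'clock in the evening" example from the description
record = OperationRecord("turn on the light", "living-room headlight",
                         20, 22.0, "living room", "living room")
print(record.target_device)
```

Such records form the training examples for the machine learning model in step S102.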
In step S102, a training model is obtained by training the machine learning model according to the actual operation information.
In step S103, a voice control instruction is received.
In step S104, a target device to be controlled by the user is determined according to the training model and the voice control instruction.
In step S105, a voice control instruction is transmitted to the target apparatus.
A user's home may contain multiple devices that can be controlled by voice instructions. As the user operates these devices day to day, the actual operation information is recorded, and a machine learning module then performs deep machine learning on this information, so that an independent training model is established for each user. Later, when the user controls a device by voice, the training model infers the target device the user intends to control, so that the control matches the user's real intention.
Specifically, after a voice control instruction of the user is received, the target device (the device the user intends to control) is determined from the training model and the voice control instruction; once the target device is found, the voice control instruction is sent to it, so that it performs the operation corresponding to the instruction.
Furthermore, the actual operation information of the user's daily voice control of each device is collected actively, and the machine learning model is trained on this information. Unlike the related art, the user does not have to enter numerous parameters for training the model, which effectively avoids the problem of an inaccurate model caused by incorrectly entered parameters.
Moreover, because the collection of actual operation information is uninterrupted, the model is trained with incremental learning, so its judgment of the user's voice control instructions tracks the user's actual operation behavior ever more closely; the system is thus self-learning.
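The patent does not name a specific machine learning model. As a minimal, purely illustrative stand-in, a frequency-based model keyed on the command and the hour of day already captures the "turn on the light at eight in the evening" behavior above, and it supports the incremental updates just described (all class and method names are assumptions):

```python
from collections import Counter, defaultdict

class TargetDeviceModel:
    """Toy stand-in for the training model: predicts the most frequently
    observed target device for a (command, hour-of-day) context."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, command, hour, target_device):
        # incremental learning: each new operation record refines the model
        self.counts[(command, hour)][target_device] += 1

    def predict(self, command, hour):
        ctx = self.counts.get((command, hour))
        if not ctx:
            return None  # no observed behavior for this context
        return ctx.most_common(1)[0][0]

model = TargetDeviceModel()
model.update("turn on the light", 20, "living-room headlight")
model.update("turn on the light", 20, "living-room headlight")
model.update("turn on the light", 23, "bedroom night lamp")
print(model.predict("turn on the light", 20))  # living-room headlight
```

A real implementation would use a richer feature set (weather, temperature, locations) and a proper learning algorithm; this sketch only shows how per-user behavior records can disambiguate one command across devices.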
The voice-based device control method of the present disclosure includes: acquiring actual operation information generated when a user controls each voice-controllable device in daily life; training a machine learning model on the actual operation information to obtain a training model; receiving a voice control instruction; determining the target device the user intends to control according to the training model and the voice control instruction; and sending the voice control instruction to the target device. Because the machine learning model is trained on actively collected records of the user's actual operations on each device, the user does not need to enter parameters manually, which improves the accuracy of the training model and, in turn, the accuracy with which the target device is determined.
In one embodiment, as shown in fig. 2, step S104 includes the following sub-steps S1041-S1043, and step S105 includes the following sub-step S1051:
in step S1041, detecting whether there is a keyword related to temperature in the voice control instruction;
temperature-related keywords such as: hot, cold, temperature, specific values of temperature (e.g., 37 degrees), etc.
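A hedged sketch of the keyword check in step S1041, using the example keywords just listed; the keyword set and the pattern for explicit values like "37 degrees" are assumptions, not specified by the patent:

```python
import re

# illustrative keyword set from the description: hot, cold, temperature,
# and explicit temperature values such as "37 degrees"
TEMPERATURE_KEYWORDS = ("hot", "cold", "temperature")
DEGREE_PATTERN = re.compile(r"\b\d+(\.\d+)?\s*degrees?\b")

def has_temperature_keyword(instruction: str) -> bool:
    """Return True if the voice control instruction is temperature-related."""
    text = instruction.lower()
    if any(kw in text for kw in TEMPERATURE_KEYWORDS):
        return True
    return DEGREE_PATTERN.search(text) is not None

print(has_temperature_keyword("so hot"))                # True
print(has_temperature_keyword("set it to 37 degrees"))  # True
print(has_temperature_keyword("turn on the light"))     # False
```

A production system would likely perform this check on the recognized intent rather than raw text, and in the command's original language.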
In step S1042, when it is detected that there is a keyword related to temperature in the voice control instruction, current temperature information is obtained;
in step S1043, determining target devices to be controlled by the user and temperature parameters of the target devices according to the training model, the voice control instruction, and the current temperature information;
in step S1051, the temperature parameter is transmitted to the target device.
For example, the actual operation information of the user used to train the machine learning model is: when the current temperature was 40 degrees, the user turned on the air conditioner through a voice control instruction and set its temperature to 27 degrees.
Continuing the above example, the user issues the voice control instruction "so hot". After the instruction is received, it is checked for temperature-related keywords; since "hot" is such a keyword, it is determined that the instruction is temperature-related and that the user probably wants to control a device capable of cooling the room (for example, an air conditioner). Current temperature information is then obtained, say 40 degrees. Based on the training model, the voice control instruction, and the current temperature information, the target device to be controlled is determined to be the air conditioner, with a temperature of 27 degrees. The temperature parameter of 27 degrees is then sent to the air conditioner, which turns on and adjusts its working temperature to 27 degrees.
In the related art, turning on the air conditioner would require the standard instruction "turn on the air conditioner"; the above embodiment shows that in the present disclosure the air conditioner can be turned on from the user's non-standard voice control instruction "so hot", realizing a personalized design for the user.
Furthermore, the target device to be controlled by the user can be determined accurately from the training model, the voice control instruction, and the current temperature information, effectively improving the user experience.
In one embodiment, as shown in fig. 2, the step S104 includes the following sub-steps: S1044-S1046:
in step S1044, when it is detected that there is no keyword related to temperature in the voice control command, the current time information is obtained.
In step S1045, the current location information of the user is acquired.
In step S1046, a target device to be controlled by the user is determined according to the training model, the voice control instruction, the current time information, and the current location information of the user.
When it is detected that the voice control instruction contains no temperature-related keyword, it can be determined that the user does not currently need to control a device that changes the indoor temperature. Current time information and the user's current location information are then obtained, and the target device the user intends to control is determined from the training model, the voice control instruction, the current time information, and the user's current location information.
For example, the actual operation information of the user used to train the machine learning model is: at 11 o'clock at night, the user turned on a small night lamp in the bedroom through a voice control instruction.
Continuing the above example, the user issues the voice control instruction "turn on the light". After the instruction is received, it is checked for temperature-related keywords; none is found, so current time information (for example, 11 o'clock at night) and the user's location information (for example, the bedroom) are obtained. Based on the training model, the voice control instruction, the current time information, and the user's current location, the target device to be controlled is determined to be the small night lamp in the bedroom. The voice control instruction to turn on the light is then sent to the night lamp alone, rather than to every indoor lamp whose switch could be matched by the "turn on" instruction.
The target device to be controlled by the user can thus be determined accurately from the training model, the voice control instruction, the current time information, and the user's current location information, effectively improving the user experience.
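Putting the two branches together, the dispatch logic of steps S1041-S1046 might look like the following self-contained sketch; the dict-based "model", the keyword list, and the context encoding are all assumptions standing in for the trained model:

```python
def determine_target(model, instruction, temperature, hour, user_location):
    """Illustrative dispatch for steps S1041-S1046.

    `model` is a plain dict standing in for the training model, mapping a
    context tuple to a target device; every name here is an assumption.
    """
    temperature_words = ("hot", "cold", "temperature")  # example keywords
    if any(w in instruction.lower() for w in temperature_words):
        # S1042-S1043: temperature-related command, use current temperature
        context = (instruction, "temp", temperature >= 30)
    else:
        # S1044-S1046: otherwise use current time and user location
        context = (instruction, "time", hour, user_location)
    return model.get(context)

# behavior "learned" from the two examples in the description
model = {
    ("so hot", "temp", True): "air conditioner @ 27 degrees",
    ("turn on the light", "time", 23, "bedroom"): "bedroom night lamp",
}
print(determine_target(model, "so hot", 40, 14, "living room"))
print(determine_target(model, "turn on the light", 25, 23, "bedroom"))
```

The key point the sketch illustrates is that one ambiguous instruction resolves to different devices purely from the surrounding context.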
FIG. 3 is a model diagram of a method for controlling a speech-based device according to an exemplary embodiment, as shown in FIG. 3:
in the related art, a user may find a plurality of controllable devices at the same time through a voice control command, but a specific rule is needed to implement the actual intention of the user not to control the devices at the same time but to control a certain target device of the devices.
At present, the common practice is to prefabricate some general rules or to let users define rules themselves.
However, the existing solutions to the above problem remain inconvenient for users and provide a poor experience, specifically:
1. Predefined general-purpose rules cannot cover all users: the types, placement, and usage habits of devices differ from home to home, which makes designing universal rules very difficult and leaves some users' needs unmet.
2. Having users define rules themselves is burdensome because the rule conditions to be defined are numerous and varied; it also makes future maintenance of the rules difficult, and the rules cannot update themselves automatically.
To solve the above problems, the present method continuously collects, as an information flow, the user's actual operations on the devices in daily life. Each operation is recorded (for example, the device, the time, the current temperature, and the room in which the user performed the operation) and stored in an information queue; a machine learning model then performs deep machine learning on this information, so that an independent training model is established for each user, and when the user later controls a device by voice, the training model infers the target device the user intends to control. Specifically, the user issues a voice control instruction; after receiving it, the device control platform passes it to the training model, the model computes the target device to be operated, and the device control platform sends the voice instruction to that target device.
The equipment control platform can be connected with each piece of equipment and used for sending voice control instructions to each piece of equipment.
Because data collection is uninterrupted, model training uses incremental learning, so the interpretation of the user's voice control commands tracks the user's actual operation behavior ever more closely; the system is thus self-learning. The collected user operation information spans multiple dimensions, such as time, weather, temperature, and spatial position, which reflect as fully as possible the external state surrounding the user's behavior at that moment.
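The information-queue plus incremental-update loop described above can be sketched as follows; the bounded queue length and the callback shape are assumptions, since the patent specifies neither:

```python
from collections import deque

# illustrative information queue for the continuously collected records
info_queue = deque(maxlen=1000)  # bound is an assumption

def on_user_operation(record, model_update):
    """Buffer each observed operation and feed it to the model incrementally,
    so the system never needs a full retrain."""
    info_queue.append(record)
    model_update(record)

seen = []  # stand-in for the model's incremental-update hook
on_user_operation({"command": "turn on the light", "hour": 20,
                   "device": "living-room headlight"}, seen.append)
print(len(info_queue), len(seen))
```

With `maxlen` set, the queue also naturally ages out old behavior, which matches the goal of tracking the user's current habits.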
The method is intelligently self-learning: the whole process requires no active intervention from the user, which can greatly improve the user's experience of voice device control.
In the technical solution of the disclosure, technologies such as information flow and machine learning realize personalized voice device control: from collecting the user's control data, through training on the collected data, to applying the trained result, each user obtains voice device control logic that matches his or her own daily habits, with the advantages of automatic learning and real-time updating.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 4 is a schematic structural diagram of a voice-based device control apparatus provided according to an exemplary embodiment, which may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 4, the apparatus includes:
the acquisition module 11 is used for acquiring actual operation information of each device which can be controlled by voice in daily life of a user;
the training module 12 is used for training a machine learning model according to the actual operation information to obtain a training model;
the receiving module 13 is used for receiving a voice control instruction;
a determining module 14, configured to determine, according to the training model and the voice control instruction, a target device to be controlled by a user;
a sending module 15, configured to send the voice control instruction to the target device.
In one embodiment, the actual operation information includes at least one of the following information:
the method comprises the steps of a voice control instruction of a user, equipment to be controlled by the voice control instruction, time information, temperature information, position information of the user and position information of the equipment.
In one embodiment, as shown in fig. 5, the determining module 14 includes: the detection submodule 141, the first obtaining submodule 142 and the first determining submodule 143, and the sending module 15 includes: a transmission sub-module 151;
the detection submodule 141 is configured to detect whether a keyword related to temperature exists in the voice control instruction;
the first obtaining sub-module 142 is configured to obtain current temperature information when detecting that a keyword related to temperature exists in the voice control instruction;
the first determining submodule 143 is configured to determine, according to the training model, the voice control instruction, and the current temperature information, a target device to be controlled by the user and a temperature parameter of the target device;
the sending sub-module 151 is configured to send the temperature parameter to the target device.
In one embodiment, as shown in fig. 5, the determining module 14 further includes:
a second obtaining sub-module 144, configured to obtain current time information when it is detected that there is no keyword related to temperature in the voice control instruction;
a third obtaining submodule 145, configured to obtain current location information of the user;
and a second determining sub-module 146, configured to determine, according to the training model, the voice control instruction, the current time information, and the current location information of the user, a target device to be controlled by the user.
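The branching these sub-modules describe (temperature-related keyword present versus absent) can be sketched as follows. This is an illustrative sketch only: the keyword list, the model interface (`predict_with_temperature`, `predict_with_context`), and the sensor callbacks are assumptions, not the patent's implementation.

```python
# Hypothetical keyword list for the detection sub-module.
TEMPERATURE_KEYWORDS = ("temperature", "degrees", "warmer", "cooler", "hot", "cold")

def determine_target(command, model, get_temperature, get_time, get_user_location):
    """Route a command down the first-determining path (keyword found)
    or the second-determining path (keyword not found)."""
    if any(kw in command.lower() for kw in TEMPERATURE_KEYWORDS):
        temp = get_temperature()            # first obtaining sub-module
        return model.predict_with_temperature(command, temp)
    now = get_time()                        # second obtaining sub-module
    where = get_user_location()             # third obtaining sub-module
    return model.predict_with_context(command, now, where)

class StubModel:
    """Stand-in for the trained model; returns fixed illustrative answers."""
    def predict_with_temperature(self, command, temp):
        return ("air_conditioner", temp)    # (target device, temperature parameter)
    def predict_with_context(self, command, now, where):
        return (f"{where}_light", None)     # no temperature parameter on this path

result = determine_target(
    "make it three degrees cooler", StubModel(),
    get_temperature=lambda: 27.0,
    get_time=lambda: "21:30",
    get_user_location=lambda: "bedroom",
)
print(result)  # temperature branch taken: ('air_conditioner', 27.0)
```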
According to a third aspect of the embodiments of the present disclosure, there is provided a voice-based device control apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring actual operation information of each voice-controllable device in the user's daily life;
training a machine learning model according to the actual operation information to obtain a training model;
receiving a voice control instruction;
determining the target device to be controlled by the user according to the training model and the voice control instruction;
and sending the voice control instruction to the target device.
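The patent leaves the machine-learning model unspecified. As a minimal, purely illustrative stand-in for the "train a model from actual operation information" step, one could learn which device the user usually means in a given context with a simple frequency table (the record schema, context features, and default device below are assumptions):

```python
from collections import Counter, defaultdict

def train(records):
    """records: iterable of (hour, user_location, target_device) tuples
    drawn from the user's logged operation history (illustrative schema)."""
    counts = defaultdict(Counter)
    for hour, location, device in records:
        counts[(hour, location)][device] += 1
    # "Model": the most frequently controlled device per (hour, location) context.
    return {ctx: devs.most_common(1)[0][0] for ctx, devs in counts.items()}

def predict(model, hour, location, default="living_room_light"):
    """Fall back to an assumed default device for unseen contexts."""
    return model.get((hour, location), default)

history = [
    (21, "bedroom", "bedroom_heater"),
    (21, "bedroom", "bedroom_heater"),
    (21, "bedroom", "bedroom_light"),
    (8, "kitchen", "kettle"),
]
model = train(history)
print(predict(model, 21, "bedroom"))  # -> bedroom_heater
```

A production system would replace the frequency table with a real classifier, but the interface (context in, target device out) is the same.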
The processor may be further configured to:
the actual operation information includes at least one of the following information:
a voice control instruction of the user, a device to be controlled by the voice control instruction, time information, temperature information, position information of the user, and position information of the device.
In one embodiment, determining the target device to be controlled by the user according to the training model and the voice control instruction includes:
detecting whether a temperature-related keyword exists in the voice control instruction;
when a temperature-related keyword is detected in the voice control instruction, acquiring current temperature information;
determining the target device to be controlled by the user and a temperature parameter of the target device according to the training model, the voice control instruction, and the current temperature information;
the sending the voice control instruction to the target device includes:
and sending the temperature parameter to the target device.
In one embodiment, the processor is further configured to:
when no temperature-related keyword is detected in the voice control instruction, acquiring current time information;
acquiring current position information of the user;
and determining the target device to be controlled by the user according to the training model, the voice control instruction, the current time information, and the current position information of the user.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating a voice-based device control apparatus 90 according to an exemplary embodiment. For example, the apparatus 90 may be provided as a server. The apparatus 90 comprises a processing component 902 further comprising one or more processors, and memory resources, represented by memory 903, for storing instructions, e.g., applications, executable by the processing component 902. The application programs stored in memory 903 may include one or more modules that each correspond to a set of instructions. Further, the processing component 902 is configured to execute instructions to perform the above-described methods.
The apparatus 90 may also include a power component 906 configured to perform power management of the apparatus 90, a wired or wireless network interface 905 configured to connect the apparatus 90 to a network, and an input/output (I/O) interface 908. The apparatus 90 may operate based on an operating system stored in the memory 903, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
Also provided is a non-transitory computer-readable storage medium having stored therein instructions that, when executed by a processor of the apparatus 90, enable the apparatus 90 to perform the above-described voice-based device control method, the method comprising:
acquiring actual operation information of each voice-controllable device in the user's daily life;
training a machine learning model according to the actual operation information to obtain a training model;
receiving a voice control instruction;
determining the target device to be controlled by the user according to the training model and the voice control instruction;
and sending the voice control instruction to the target device.
In one embodiment, the actual operation information includes at least one of the following information:
a voice control instruction of the user, a device to be controlled by the voice control instruction, time information, temperature information, position information of the user, and position information of the device.
In one embodiment, determining the target device to be controlled by the user according to the training model and the voice control instruction includes:
detecting whether a temperature-related keyword exists in the voice control instruction;
when a temperature-related keyword is detected in the voice control instruction, acquiring current temperature information;
determining the target device to be controlled by the user and a temperature parameter of the target device according to the training model, the voice control instruction, and the current temperature information;
the sending the voice control instruction to the target device includes:
and sending the temperature parameter to the target device.
In one embodiment, the method further comprises:
when no temperature-related keyword is detected in the voice control instruction, acquiring current time information;
acquiring current position information of the user;
and determining the target device to be controlled by the user according to the training model, the voice control instruction, the current time information, and the current position information of the user.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A voice-based device control method, comprising:
acquiring actual operation information of each voice-controllable device in the user's daily life;
training a machine learning model according to the actual operation information to obtain a training model;
receiving a voice control instruction;
determining the target device to be controlled by the user according to the training model and the voice control instruction;
and sending the voice control instruction to the target device.
2. The method of claim 1, wherein the actual operation information comprises at least one of the following information:
a voice control instruction of the user, a device to be controlled by the voice control instruction, time information, temperature information, position information of the user, and position information of the device.
3. The method of claim 2, wherein determining the target device to be controlled by the user according to the training model and the voice control instruction comprises:
detecting whether a temperature-related keyword exists in the voice control instruction;
when a temperature-related keyword is detected in the voice control instruction, acquiring current temperature information;
determining the target device to be controlled by the user and a temperature parameter of the target device according to the training model, the voice control instruction, and the current temperature information;
the sending the voice control instruction to the target device includes:
and sending the temperature parameter to the target device.
4. The method of claim 3, further comprising:
when no temperature-related keyword is detected in the voice control instruction, acquiring current time information;
acquiring current position information of the user;
and determining the target device to be controlled by the user according to the training model, the voice control instruction, the current time information, and the current position information of the user.
5. A speech-based device control apparatus, comprising:
the acquisition module is used for acquiring actual operation information of each voice-controllable device in the user's daily life;
the training module is used for training a machine learning model according to the actual operation information to obtain a training model;
the receiving module is used for receiving a voice control instruction;
the determining module is used for determining target equipment to be controlled by a user according to the training model and the voice control instruction;
and the sending module is used for sending the voice control instruction to the target equipment.
6. The apparatus of claim 5, wherein the actual operation information comprises at least one of the following information:
a voice control instruction of the user, a device to be controlled by the voice control instruction, time information, temperature information, position information of the user, and position information of the device.
7. The apparatus of claim 6, wherein the determining module comprises: the detection submodule, the first obtaining submodule and the first determining submodule, and the sending module includes: a sending submodule;
the detection submodule is used for detecting whether a temperature-related keyword exists in the voice control instruction;
the first obtaining submodule is used for obtaining current temperature information when a temperature-related keyword is detected in the voice control instruction;
the first determining submodule is used for determining the target device to be controlled by the user and a temperature parameter of the target device according to the training model, the voice control instruction, and the current temperature information;
and the sending submodule is used for sending the temperature parameter to the target device.
8. The apparatus of claim 7, wherein the determining module further comprises:
the second obtaining submodule is used for obtaining current time information when no temperature-related keyword is detected in the voice control instruction;
the third obtaining submodule is used for obtaining the current position information of the user;
and the second determining submodule is used for determining the target device to be controlled by the user according to the training model, the voice control instruction, the current time information, and the current position information of the user.
9. A speech-based device control apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring actual operation information of each voice-controllable device in the user's daily life;
training a machine learning model according to the actual operation information to obtain a training model;
receiving a voice control instruction;
determining the target device to be controlled by the user according to the training model and the voice control instruction;
and sending the voice control instruction to the target device.
10. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, carry out the steps of the method according to any one of claims 1 to 4.
CN202010499951.3A 2020-06-04 2020-06-04 Equipment control method and device based on voice Pending CN111599353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010499951.3A CN111599353A (en) 2020-06-04 2020-06-04 Equipment control method and device based on voice

Publications (1)

Publication Number Publication Date
CN111599353A 2020-08-28

Family

ID=72192358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499951.3A Pending CN111599353A (en) 2020-06-04 2020-06-04 Equipment control method and device based on voice

Country Status (1)

Country Link
CN (1) CN111599353A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108919654A (en) * 2018-05-28 2018-11-30 马鞍山观点信息科技有限公司 A kind of intelligent operating method for family life
CN110060677A (en) * 2019-04-04 2019-07-26 平安科技(深圳)有限公司 Voice remote controller control method, device and computer readable storage medium
US20190379941A1 (en) * 2018-06-08 2019-12-12 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for outputting information
CN111128156A (en) * 2019-12-10 2020-05-08 上海雷盎云智能技术有限公司 Intelligent household equipment voice control method and device based on model training


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination