WO2023168862A1 - Method and apparatus for predicting a control instruction, storage medium and electronic apparatus - Google Patents

Method and apparatus for predicting a control instruction, storage medium and electronic apparatus

Info

Publication number
WO2023168862A1
WO2023168862A1 · PCT/CN2022/102037
Authority
WO
WIPO (PCT)
Prior art keywords
target
voice data
user
current
scene
Prior art date
Application number
PCT/CN2022/102037
Other languages
English (en)
Chinese (zh)
Inventor
于海松
刘建国
Original Assignee
青岛海尔科技有限公司
海尔智家股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛海尔科技有限公司 and 海尔智家股份有限公司
Publication of WO2023168862A1

Classifications

    • G05B15/02: Systems controlled by a computer, electric
    • G05B19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G06F16/3343: Query execution using phonetics
    • G06F16/9532: Query formulation (retrieval from the web)
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G10L15/02: Feature extraction for speech recognition; selection of recognition unit
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G05B2219/2642: Domotique, domestic, home control, automation, smart house
    • G10L2015/223: Execution procedure of a spoken command
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present disclosure relates to the field of smart home technology, and specifically to a method and device for predicting control instructions, a storage medium and an electronic device.
  • Embodiments of the present disclosure provide a method and device for predicting control instructions, a storage medium, and an electronic device to at least solve the problem in related technologies that smart devices are less accurate in predicting device control instructions required by users.
  • a method for predicting control instructions including:
  • acquiring target voice data and target environment data in a target scene, where the target voice data is used to indicate that the current user status of a target user in the target scene is in a state to be adjusted, and the target environment data is used to indicate the current environment status of the target scene;
  • acquiring, from the smart devices deployed in the target scene, a target device that matches the target voice data and the target environment data;
  • predicting, according to the current device parameters of the target device, a target control instruction corresponding to the target voice data, where the target control instruction is used to adjust the device parameters of the target device from the current device parameters to target device parameters, and the target device parameters are used to adjust the current user status of the target user.
  • a device for predicting control instructions including:
  • the first acquisition module is configured to acquire target voice data and target environment data in the target scene, where the target voice data is used to indicate that the current user status of the target user in the target scene is in a state to be adjusted, and the target environment data is used to indicate the current environment status of the target scene;
  • the second acquisition module is configured to acquire the target device that matches the target voice data and the target environment data from the smart devices deployed in the target scene;
  • the prediction module is configured to predict the target control instruction corresponding to the target voice data according to the current device parameters of the target device, where the target control instruction is used to adjust the device parameters of the target device from the current device parameters to the target device parameters, and the target device parameters are used to adjust the current user status of the target user.
  • a computer-readable storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute the above method for predicting control instructions when run.
  • an electronic device is further provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes, through the computer program, the above method for predicting control instructions.
  • Through the above embodiments, target voice data and target environment data in the target scene are obtained, where the target voice data is used to indicate that the current user status of the target user in the target scene is in a state to be adjusted, and the target environment data is used to indicate the current environment status of the target scene; the target device matching the target voice data and the target environment data is obtained from the smart devices deployed in the target scene; and the target control instruction corresponding to the target voice data is predicted according to the current device parameters of the target device, where the target control instruction is used to adjust the device parameters of the target device from the current device parameters to the target device parameters.
  • The target device parameters are used to adjust the current user status of the target user. That is, when target voice data is detected in the target scene, it is determined from the voice data that the current user status of the target user is in a state to be adjusted, and target environment data indicating the current environment status of the target scene is detected. Since the user status indicated by the voice data admits multiple possibilities, and multiple device types may correspond to that status, the target environment data must be combined with the current environment status so that the target device corresponding to the target voice data is accurately matched from among the possible device types; the target device can then adjust the current user status.
  • After the device type is determined by prediction, the current device parameters of the target device are detected, and the target control instruction corresponding to the target voice data is accurately predicted based on those parameters; the control instruction adjusts the current device parameters to the target device parameters, thereby adjusting the current user status of the target user. Adopting the above technical solution solves the problem in the related art that smart devices are less accurate in predicting the device control instructions required by users, and achieves the technical effect of improving that accuracy.
  • Figure 1 is a schematic diagram of the hardware environment of a control instruction prediction method according to an embodiment of the present disclosure
  • Figure 2 is a flow chart of a method for predicting control instructions according to an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of keyword classification of voice data according to an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of user attribute information according to an embodiment of the present disclosure.
  • Figure 5 is a schematic diagram of a candidate device according to an embodiment of the present disclosure.
  • Figure 6 is a flow chart of target device determination according to an embodiment of the present disclosure.
  • Figure 7 is a schematic diagram of generating target control instructions according to an embodiment of the present disclosure.
  • Figure 8 is a schematic diagram of generating target control instructions according to an embodiment of the present disclosure.
  • Figure 9 is a schematic diagram of a master control device and a controlled device according to an embodiment of the present disclosure.
  • Figure 10 is a schematic diagram of a method for predicting control instructions according to an embodiment of the present disclosure.
  • Figure 11 is an interaction diagram of a method for predicting control instructions according to an embodiment of the present disclosure
  • Figure 12 is a structural block diagram of a control instruction prediction device according to an embodiment of the present disclosure.
  • a method for predicting control instructions is provided.
  • This method for predicting control instructions is widely applicable in whole-house intelligent digital control scenarios such as smart homes, smart households, smart home device ecosystems, and smart residence (Intelligence House) ecosystems.
  • the above control instruction prediction method can be applied to the hardware environment composed of the terminal device 102 and the server 104 as shown in FIG. 1 .
  • the server 104 is connected to the terminal device 102 through the network, and can be set to provide services (such as application services, etc.) for the terminal or the client installed on the terminal.
  • the database may be set up on the server or independently of the server and is configured to provide data storage services for the server 104; cloud computing and/or edge computing services may likewise be configured on the server or independently of the server to provide data computing services for the server 104.
  • the above-mentioned network may include but is not limited to at least one of the following: wired network, wireless network.
  • the above-mentioned wired network may include but is not limited to at least one of the following: wide area network, metropolitan area network, and local area network.
  • the above-mentioned wireless network may include at least one of the following: Wi-Fi (Wireless Fidelity), Bluetooth.
  • the terminal device 102 may be, but is not limited to, a PC (Personal Computer), a mobile phone, a tablet, a smart air conditioner, a smart range hood, a smart refrigerator, a smart oven, a smart stove, a smart washing machine, a smart water heater, smart washing equipment, a smart dishwasher, smart projection equipment, a smart TV, a smart clothes-drying rack, smart curtains, smart audio-visual equipment, smart sockets, smart speakers, smart fresh-air equipment, smart kitchen and bathroom equipment, smart sanitary equipment, a smart sweeping robot, a smart window-cleaning robot, a smart mopping robot, smart air-purification equipment, a smart steamer, a smart microwave oven, smart kitchen appliances, a smart purifier, a smart water dispenser, a smart door lock, etc.
  • Figure 2 is a flow chart of a method for predicting control instructions according to an embodiment of the present disclosure. The process includes the following steps:
  • Step S202: Obtain target voice data and target environment data in the target scene, where the target voice data is used to indicate that the current user status of the target user in the target scene is in a state to be adjusted, and the target environment data is used to indicate the current environment status of the target scene;
  • Step S204: Obtain the target device that matches the target voice data and the target environment data from the smart devices deployed in the target scene;
  • Step S206: Predict the target control instruction corresponding to the target voice data according to the current device parameters of the target device, where the target control instruction is used to adjust the device parameters of the target device from the current device parameters to the target device parameters, and the target device parameters are used to adjust the current user status of the target user.
  • When the target voice data is detected in the target scene, it is determined from the target voice data that the current user status of the target user is in a state to be adjusted, and the target environment data indicating the current environment status of the target scene is detected. The current user status indicated by the target voice data admits multiple possibilities, and multiple device types may correspond to that status; therefore, the target environment data must be combined with the current environment status so that the target device corresponding to the target voice data is accurately matched from among the possible device types. The target device can adjust the current user status.
  • After the device type is determined by prediction, the current device parameters of the target device are detected, and the target control instruction corresponding to the target voice data is accurately predicted based on those parameters; the control instruction can adjust the current device parameters to the target device parameters, achieving the purpose of adjusting the current user status of the target user.
  • Adopting the above technical solution solves the problem in related technologies that the accuracy of smart devices in predicting the device control instructions required by the user is low, and achieves the technical effect of improving the accuracy of the smart device in predicting the device control instructions required by the user.
  • the target scene may include, but is not limited to, any scene where smart devices are deployed, such as: smart home scene, teaching scene, office scene, warehouse scene, etc.
  • the target environment data may include, but is not limited to, any data indicating environmental parameters, where the data type of the target environment data may be, but is not limited to, any physical or chemical quantity; for example, target environment data may include, but is not limited to, ambient temperature, humidity, air quality, noise parameters, volume in decibels, etc.
  • the current user status of the target user is in a state to be adjusted, which may, but is not limited to, mean that the target voice data carries the voice intention of adjusting the current user status.
  • for example, if the target voice data is "It's so hot", it correspondingly indicates that the current user status of the target user in the target scene is a state to be cooled down; if the target voice data is "It's so noisy", it correspondingly indicates that the current user status of the target user in the target scene is a state to be noise-reduced.
  • the target voice data and target environment data in the target scene can be obtained, but not limited to, in the following way: collect voice data in the target scene; match the collected voice data against preset keywords, and match the collected voice data against the users bound to the target scene; when the collected voice data successfully matches a preset keyword and also successfully matches a user bound to the target scene, determine the collected voice data as the target voice data; extract the voiceprint features corresponding to the target voice data, and collect the environment data in the target scene; and determine the user attribute information corresponding to the voiceprint features, together with the collected environment data, as the target environment data.
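  • A minimal Python sketch of this acquisition step is shown below. All names here (PRESET_KEYWORDS, BOUND_VOICEPRINTS, extract_voiceprint, acquire_target_voice_data) are assumptions made for the example; the disclosure does not prescribe any particular implementation.

```python
# Minimal sketch of the voice-data acquisition step, under assumed names.

PRESET_KEYWORDS = {"hot", "cold", "dark", "bright", "noisy", "dry"}
# Voiceprints of the users bound to the target scene (illustrative values).
BOUND_VOICEPRINTS = {"zhang_san": "vp_001"}

def extract_voiceprint(audio: bytes) -> str:
    """Placeholder for a real voiceprint-feature extractor."""
    return "vp_001"

def acquire_target_voice_data(transcript: str, audio: bytes) -> str | None:
    # 1) The utterance must contain a preset keyword.
    if not any(kw in transcript.lower() for kw in PRESET_KEYWORDS):
        return None
    # 2) The voiceprint must belong to a user bound to the scene; this
    #    filters out audio emitted by TVs, radios and other media devices.
    if extract_voiceprint(audio) not in BOUND_VOICEPRINTS.values():
        return None
    return transcript  # determined as the target voice data
```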
  • the preset keywords may include, but are not limited to, any words describing what the user perceives through the five senses, such as "hot", "cold", "dark", "bright", "noisy", "dry", and other words that describe the user's subjective feelings.
  • Figure 3 is a schematic diagram of keyword classification of voice data according to an embodiment of the present disclosure. As shown in Figure 3, the classification of existing voice information can be standardized and the encoding method unified, and the category to which each piece of voice information belongs can be recorded by constructing a table. For example: "too hot" and "a little hot" belong to "hot"; "too cold" and "a little cold" belong to "cold".
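  • A sketch of such a normalization table follows; the phrase-to-category entries are hypothetical, since the disclosure gives the table contents only by example.

```python
# Variant phrases mapped to one canonical keyword, in the spirit of Figure 3.
KEYWORD_CLASSES = {
    "too hot": "hot",
    "a little hot": "hot",
    "too cold": "cold",
    "a little cold": "cold",
}

def normalize(phrase: str) -> str | None:
    """Return the canonical keyword for a phrase, or None if unknown."""
    return KEYWORD_CLASSES.get(phrase.lower())

assert normalize("Too hot") == "hot"
```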
  • when the voiceprint information of the voice data matches the voiceprint information of a user bound to the target scene, it is determined that the collected voice data successfully matches a user bound to the target scene. That is to say, even if a preset keyword is collected, if the voiceprint information of the voice data does not belong to a user bound to the target scene, the voice data will not be used as the target voice data. This prevents voice data emitted by media devices such as televisions and radios from being determined as target voice data.
  • user attribute information may, but is not limited to, include the user's identity information, geographical location information, smart device usage habit information, etc.
  • Figure 4 is a schematic diagram of user attribute information according to an embodiment of the present disclosure. As shown in Figure 4, user attribute information can be recorded by constructing a table. For example, Zhang San's user attribute information can include gender, age, geographical location, and smart device usage habits; for instance, Zhang San's habit when using the air conditioner is to set it to 26°C, and so on.
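  • A user attribute record of this kind could be modeled as below; the field names and values are purely illustrative assumptions, not fields defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Hypothetical record mirroring the table of Figure 4."""
    name: str
    gender: str
    location: str
    ac_setpoint_habit: int  # habitual air-conditioner setting, in °C

# Example entry from the text: Zhang San habitually sets the AC to 26°C.
zhang_san = UserProfile(name="Zhang San", gender="male",
                        location="Qingdao", ac_setpoint_habit=26)
```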
  • the matching of the target voice data and the target environment data may mean, but is not limited to meaning, that the information contained in the target voice data is consistent with the target environment data; that is, there is a causal relationship between the current environment status indicated by the target environment data and the current user status indicated by the target voice data. For example, if the target environment data is a temperature of 42°C and the target voice data is "It's so hot", the 42°C temperature can be considered the cause of the user's "heat"; that is, the target voice data ("It's so hot") and the target environment data (temperature of 42°C) can be regarded as matching. The target device may then include, but is not limited to, smart devices capable of adjusting the temperature, such as air conditioners and cooling fans.
  • the target device matching the target voice data and the target environment data may be obtained from the smart devices deployed in the target scene in the following manner: extract the target keyword from the target voice data, where the target keyword is used to indicate the current user status of the target user; obtain, from the smart devices deployed in the target scene, the smart devices matching the target keyword as candidate devices; and obtain, from the candidate devices, the smart device matching the target environment data as the target device.
  • the target keywords may refer, but are not limited, to the keywords contained in the target voice data that describe the current user status. For example, if the target voice data is "too hot", "a little hot", or "very hot", the corresponding target keyword can be, but is not limited to, "hot", describing the current user status as a state of higher temperature.
  • the candidate device may, but is not limited to, refer to one or more undetermined devices corresponding to the target keyword, where the candidate device is a device corresponding to the parameters indicated by the target keyword.
  • Figure 5 is a schematic diagram of candidate devices according to an embodiment of the present disclosure. As shown in Figure 5, the correspondence between target keywords and candidate devices can be recorded by constructing a table. Each target keyword can correspond to several devices; for the target keyword "hot", the corresponding candidate devices include air conditioners, water heaters, and bathroom heaters, because these are all smart devices related to temperature parameters.
  • the correspondence between a target keyword and its candidate devices can also be, but is not limited to being, recorded through a data string. For example, "Cold-1110" means that the candidate devices corresponding to the target keyword "cold" include the air conditioner, the water heater, and the bathroom heater, but not the washing machine.
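  • Decoding such a data string is straightforward; the device order below (air conditioner, water heater, bathroom heater, washing machine) is inferred from the "Cold-1110" example, and the helper name is an assumption.

```python
# Fixed device order assumed by the bit positions of the data string.
DEVICE_ORDER = ["air conditioner", "water heater", "bathroom heater", "washing machine"]

def decode_candidates(record: str) -> tuple[str, list[str]]:
    """Split e.g. 'Cold-1110' into its keyword and flagged candidate devices."""
    keyword, bits = record.split("-")
    devices = [dev for dev, bit in zip(DEVICE_ORDER, bits) if bit == "1"]
    return keyword, devices

print(decode_candidates("Cold-1110"))
# ('Cold', ['air conditioner', 'water heater', 'bathroom heater'])
```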
  • the target device may be, but is not limited to, a smart device that matches the target environment data among the candidate devices.
  • Figure 6 is a flow chart of target device determination according to an embodiment of the present disclosure. As shown in Figure 6, the target voice data "It's so hot" contains the target keyword "hot", and the corresponding candidate devices are air conditioners, refrigerators, bathroom heaters, water heaters, and heating furnaces. Combined with the target environment data "temperature 42°C", which indicates hot weather, the cooling intention is stronger, and the target device can be predicted to be the air conditioner.
  • the smart devices matching the target keyword may be obtained as candidate devices from the smart devices deployed in the target scene in the following manner: obtain, from preset correspondences between keywords and device types, the target device type corresponding to the target keyword; then obtain, from the smart devices deployed in the target scene, the smart devices belonging to the target device type as candidate devices.
  • the target device type can be, but is not limited to, divided according to the device parameter type.
  • the device type corresponding to the keyword "hot" can include any temperature-related smart device, such as air conditioners, water heaters, electric heaters, heating furnaces, etc.
  • candidate devices may, but are not limited to, refer to smart devices that belong to the target device type and are deployed in the target scene.
  • the device types corresponding to the keyword "hot" may include air conditioners, water heaters, electric heaters, and heating furnaces; but if the only such devices deployed in the target scene "home" are air conditioners, water heaters, and electric heaters, then those three devices can be used as the candidate devices corresponding to the keyword "hot".
  • a smart device matching the target environment data may be obtained from the candidate devices as the target device in the following manner, but not limited to it: take each of the candidate devices in turn as a target candidate device, and obtain from the target environment data the target environment parameter corresponding to the device type of the target candidate device; when the target environment parameter does not fall within the target parameter range corresponding to the device type of the target candidate device, determine the target candidate device as the target device.
  • for example, if the candidate devices include an air conditioner, a water heater, and an electric heater: when the air conditioner is taken as the target candidate device, the corresponding target environment data can be the air temperature; when the water heater is taken as the target candidate device, the corresponding target environment data can be the water temperature, and so on.
  • the case where the target environment parameter does not fall within the target parameter range corresponding to the device type of the target candidate device may be, but is not limited to, the following: the target environment data of the air conditioner may be the air temperature, and the corresponding target parameter range may be 20°C to 28°C; if the detected air temperature (for example, 42°C) falls outside this range, the air conditioner can be determined as the target device, as sketched below.
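  • A minimal sketch of this filtering rule: a candidate becomes the target device when the environment parameter relevant to its device type falls outside that type's range. The ranges, readings, and function names are assumptions for the example.

```python
# Assumed comfort ranges per device type (low, high), in °C.
PARAM_RANGES = {"air conditioner": (20.0, 28.0), "water heater": (35.0, 45.0)}

def pick_target_device(candidates: list[str], env: dict[str, float]) -> str | None:
    """Return the first candidate whose relevant reading is out of range."""
    for device in candidates:
        low, high = PARAM_RANGES[device]
        reading = env[device]  # e.g. air temperature for the air conditioner
        if not (low <= reading <= high):
            return device  # out of range, so this device needs to act
    return None

# A 42°C air temperature lies outside 20°C to 28°C, so the AC is selected:
print(pick_target_device(["air conditioner", "water heater"],
                         {"air conditioner": 42.0, "water heater": 40.0}))
```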
  • the target control instruction may be, but is not limited to, predicted based on the target voice data, and is used to adjust the device parameters of the target device from the current device parameters to the target device parameters, thereby adjusting the current user status of the target user. For example: based on the target voice data "It's so hot", the predicted target control instruction can be, but is not limited to, "turn on the air conditioner"; the target control instruction can then adjust the air conditioner's device parameter from 28°C to 24°C to cool down the target user.
  • the target control instruction corresponding to the target voice data can be predicted based on the current device parameters of the target device in the following manner, but not limited to it: detect the current device parameters of the target device; predict the target device parameters based on the current device parameters and the historical control operations performed by the target user on the target device; and generate the target control instruction based on the target device parameters.
  • the historical control operations may refer, but are not limited, to the operations most frequently performed by the target user on the target device, and can represent the target user's operating habits and preferences for the target device.
  • Figure 7 is a schematic diagram of generating a target control instruction according to an embodiment of the present disclosure. As shown in Figure 7, historical control operation information can be, but is not limited to being, recorded in a table. For example: when the air conditioner parameter is detected to be 28°C and Zhang San's operating habit is to lower the air conditioner by 3°C, the target device parameter can be predicted to be 25°C, and the generated target control instruction can be "set the air conditioner to 25°C".
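  • Instruction generation from such a habit table can be sketched as follows; the habit encoding (a signed offset per user and device) and all names are assumptions drawn from the Figure 7 example.

```python
# Habitual adjustment in °C per (user, device), per the Figure 7 example:
# Zhang San lowers the air conditioner by 3°C when it reads 28°C.
HABITS = {("Zhang San", "air conditioner"): -3}

def predict_instruction(user: str, device: str, current: int) -> str:
    """Predict the target parameter and phrase it as a control instruction."""
    target = current + HABITS.get((user, device), 0)
    return f"set the {device} to {target}°C"

print(predict_instruction("Zhang San", "air conditioner", 28))
# set the air conditioner to 25°C
```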
  • after the target control instruction corresponding to the target voice data is predicted according to the current device parameters of the target device, the method may, but is not limited to, further include: broadcasting prompt information to the target user, where the prompt information is used to prompt the target user to confirm the target control instruction; receiving confirmation information returned by the target user in response to the prompt information; in response to the confirmation information, determining the master control device corresponding to the target device, where the master control device is configured to control the target device; and issuing the target control instruction to the master control device, where the target control instruction is used to instruct the master control device to control the target device to execute the target control instruction.
  • the prompt information broadcast to the target user can be, but is not limited to, any human-computer interaction method, such as: voice interaction, voice verification, etc.
  • Figure 8 is a schematic diagram of generating a target control instruction according to an embodiment of the present disclosure. As shown in Figure 8, after the target control instruction is generated, the target user needs to perform voice verification; the target control instruction is executed only when voice confirmation is obtained.
  • Figure 9 is a schematic diagram of the master control device and the controlled device according to the embodiment of the present disclosure.
  • the device information of the master control device and the controlled devices can be, but is not limited to being, recorded in a table. For example: the master control device can be configured to control the target device; the smart air conditioner can serve as the master control device controlling the gas water heater, so the target control instruction can be issued to the smart air conditioner, instructing it to control the temperature of the gas water heater.
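  • The master/controlled routing described above amounts to a table lookup before delivery; the mapping and the logging stub below are assumptions for the sketch.

```python
# Assumed master-device table: the smart air conditioner controls the
# gas water heater, per the example above.
MASTER_OF = {"gas water heater": "smart air conditioner"}

def issue(instruction: str, target_device: str) -> None:
    """Deliver the instruction to the master device of the target device."""
    master = MASTER_OF.get(target_device, target_device)
    # A real system would send an IoT message here; this sketch just logs it.
    print(f"-> {master}: execute '{instruction}' on {target_device}")

issue("set the water temperature to 40°C", "gas water heater")
```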
  • Figure 10 is a schematic diagram of a method for predicting control instructions according to an embodiment of the present disclosure. As shown in Figure 10, it includes the following steps:
  • Step S1001: Voice input;
  • Step S1002: Match the input speech against the corpus to identify the user's intention, and at the same time identify the devices bound to the user, so as to match the intention to the user's devices;
  • Step S1003: Obtain the quintuple information, that is, the target environment information and user attribute information; obtain the current environment status and device status; and refine the intention based on the environment information and device status;
  • Step S1004: Obtain the predicted target control instruction after this judgment, push the precise instruction to the user, and execute it after verification (an end-to-end sketch follows these steps).
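  • The four steps can be tied together as below, reusing the illustrative helpers from the earlier sketches (acquire_target_voice_data, pick_target_device, predict_instruction); the hard-coded candidate list, user, and current parameter are likewise assumptions.

```python
def handle_voice_input(transcript: str, audio: bytes,
                       env: dict[str, float]) -> str | None:
    """End-to-end sketch of steps S1001 to S1004."""
    voice = acquire_target_voice_data(transcript, audio)   # S1001/S1002
    if voice is None:
        return None
    candidates = ["air conditioner", "water heater"]       # from keyword table
    device = pick_target_device(candidates, env)           # S1003
    if device is None:
        return None
    # S1004: predict the instruction; it is executed only after the user
    # confirms it (e.g. by voice verification, as in Figure 8).
    return predict_instruction("Zhang San", device, 28)

print(handle_voice_input("It's so hot", b"", {"air conditioner": 42.0}))
# set the air conditioner to 25°C
```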
  • Figure 11 is an interaction diagram of a method for predicting control instructions according to an embodiment of the present disclosure.
  • the user inputs speech to the cloud server.
  • the cloud server performs intent recognition on the input speech and transmits the recognition results to the data computing server.
  • the data computing server combines the brain quintuple data, AI (Artificial Intelligence) data, and user equipment information to obtain the predicted instruction; after the cloud server verifies that the instruction matches the device, the instruction is issued.
  • in this embodiment, the corpus is matched first after voice input to obtain the currently recommended categories; then, combined with the devices bound to the user, the number of issued instructions is greatly reduced, and the match between the recommended instructions and the user's true intention is improved.
  • the user's initial intention is combined with historical habits and environmental information to further match the user's current intention and push satisfactory operation instructions to the user. This bridges voice and device, and voice and user: the user's basic intention can be quickly identified from the user's data, and instructions can be predicted and sent to the products bound to the user, avoiding situations in which requests go unanswered or instructions unrelated to the corresponding products are received.
  • the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; of course, it can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present disclosure, in essence or the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which can be a mobile phone, a computer, a server, a network device, etc.) to execute the methods of the various embodiments of the present disclosure.
  • Figure 12 is a structural block diagram of a control instruction prediction device according to an embodiment of the present disclosure; as shown in Figure 12, it includes:
  • the first acquisition module 1202 is configured to acquire target voice data and target environment data in the target scene, where the target voice data is used to indicate that the current user status of the target user in the target scene is in a state to be adjusted, and the target environment data is used to indicate the current environment status of the target scene;
  • the second acquisition module 1204 is configured to acquire the target device that matches the target voice data and the target environment data from the smart devices deployed in the target scene;
  • the prediction module 1206 is configured to predict the target control instruction corresponding to the target voice data according to the current device parameters of the target device, where the target control instruction is used to adjust the device parameters of the target device from the current device parameters to the target device parameters, and the target device parameters are used to adjust the current user status of the target user.
  • When the target voice data is detected in the target scene, it is determined from the target voice data that the current user status of the target user is in a state to be adjusted, and the target environment data indicating the current environment status of the target scene is detected.
  • The current user status indicated by the target voice data admits multiple possibilities, and multiple device types may correspond to that status; therefore, the target environment data must be combined with the current environment status so that the target device corresponding to the target voice data is accurately matched from among the possible device types. The target device can adjust the current user status.
  • By detecting the current device parameters of the target device, the target control instruction corresponding to the target voice data is accurately predicted according to those parameters; the control instruction can adjust the current device parameters to the target device parameters, achieving the purpose of adjusting the current user status of the target user.
  • Adopting the above technical solution solves the problem in related technologies that the accuracy of smart devices in predicting the device control instructions required by the user is low, and achieves the technical effect of improving the accuracy of the smart device in predicting the device control instructions required by the user.
  • the second acquisition module includes:
  • the first extraction unit is configured to extract target keywords from the target voice data, where the target keywords are used to indicate the current user status of the target user;
  • the first acquisition unit is configured to acquire intelligent devices that match the target keyword as candidate devices from the intelligent devices deployed in the target scene;
  • the second acquisition unit is configured to acquire the smart device that matches the target environment data from the candidate devices as the target device.
  • the first acquisition unit is configured to: obtain, from preset correspondences between keywords and device types, the target device type corresponding to the target keyword, and obtain, from the smart devices deployed in the target scene, the smart devices belonging to the target device type as candidate devices;
  • the second acquisition unit is configured to: take each of the candidate devices as a target candidate device, obtain from the target environment data the target environment parameter corresponding to the device type of the target candidate device, and, when the target environment parameter does not fall within the target parameter range corresponding to the device type of the target candidate device, determine the target candidate device as the target device.
  • the prediction module includes:
  • the detection unit is configured to detect the current device parameters of the target device
  • a prediction unit configured to predict the target device parameters based on current device parameters and historical control operations performed by the target user on the target device;
  • the generating unit is configured to generate the target control instruction according to the target device parameters.
  • the device further includes:
  • the broadcast module is configured to broadcast prompt information to the target user after predicting the target control instruction corresponding to the target voice data based on the current device parameters of the target device, where the prompt information is used to prompt the target user to confirm the target control instruction;
  • the receiving module is configured to receive the confirmation information returned by the target user in response to the prompt information
  • the determination module is configured to respond to the confirmation information and determine the master control device corresponding to the target device, wherein the master control device is configured to control the target device;
  • the delivery module is configured to deliver the target control instruction to the main control device, where the target control instruction is used to instruct the main control device to control the target device to execute the target control instruction.
  • the first acquisition module includes:
  • the collection unit is configured to collect voice data in the target scene
  • the matching unit is configured to match the collected voice data with preset keywords, and to match the collected voice data with the user bound to the target scene;
  • the first determination unit is configured to determine the collected voice data as the target voice data when the collected voice data successfully matches the preset keywords and also successfully matches a user bound to the target scene;
  • the second extraction unit is configured to extract the voiceprint features corresponding to the target voice data, and collect environmental data in the target scene;
  • the second determination unit is configured to determine the user attribute information corresponding to the voiceprint characteristics and the collected environment data as the target environment data.
  • When the target voice data is detected in the target scene, it is determined from the target voice data that the current user status of the target user is in a state to be adjusted, and the target environment data indicating the current environment status of the target scene is detected.
  • The current user status indicated by the target voice data admits multiple possibilities, and multiple device types may correspond to that status; therefore, the target environment data must be combined with the current environment status so that the target device corresponding to the target voice data is accurately matched from among the possible device types. The target device can adjust the current user status.
  • By detecting the current device parameters of the target device, the target control instruction corresponding to the target voice data is accurately predicted according to those parameters; the control instruction can adjust the current device parameters to the target device parameters, achieving the purpose of adjusting the current user status of the target user.
  • Adopting the above technical solution solves the problem in related technologies that the accuracy of smart devices in predicting the device control instructions required by the user is low, and achieves the technical effect of improving the accuracy of the smart device in predicting the device control instructions required by the user.
  • An embodiment of the present disclosure also provides a storage medium that includes a stored program, wherein the method of any of the above items is executed when the program is run.
  • the above-mentioned storage medium may be configured to store program codes for performing the following steps:
  • S1: Acquire the target voice data and target environment data in the target scene, where the target voice data is used to indicate that the current user status of the target user in the target scene is in a state to be adjusted, and the target environment data is used to indicate the current environment status of the target scene;
  • S2: Acquire, from the smart devices deployed in the target scene, the target device that matches the target voice data and the target environment data;
  • S3: Predict, according to the current device parameters of the target device, the target control instruction corresponding to the target voice data, where the target control instruction is used to adjust the device parameters of the target device from the current device parameters to the target device parameters, and the target device parameters are used to adjust the current user status of the target user.
  • Embodiments of the present disclosure also provide an electronic device, including a memory and a processor.
  • a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above-mentioned processor may be configured to perform the following steps through a computer program:
  • S1: Acquire the target voice data and target environment data in the target scene, where the target voice data is used to indicate that the current user status of the target user in the target scene is in a state to be adjusted, and the target environment data is used to indicate the current environment status of the target scene;
  • S2: Acquire, from the smart devices deployed in the target scene, the target device that matches the target voice data and the target environment data;
  • S3: Predict, according to the current device parameters of the target device, the target control instruction corresponding to the target voice data, where the target control instruction is used to adjust the device parameters of the target device from the current device parameters to the target device parameters, and the target device parameters are used to adjust the current user status of the target user.
  • the above storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • the modules or steps of the present disclosure can be implemented using general-purpose computing devices; they can be concentrated on a single computing device or distributed across a network composed of multiple computing devices. Optionally, they may be implemented in program code executable by a computing device, so that they may be stored in a storage device for execution by the computing device; in some cases, the steps shown or described may be performed in a sequence different from that herein, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. As such, the present disclosure is not limited to any specific combination of hardware and software.

Abstract

Disclosed are a method and apparatus for predicting a control instruction, a storage medium, and an electronic apparatus, relating to the technical field of smart homes. The method for predicting a control instruction comprises: obtaining target voice data and target environment data in a target scene (S202); obtaining, from the smart devices deployed in the target scene, a target device matching the target voice data and the target environment data (S204); and predicting, according to the current device parameters of the target device, a target control instruction corresponding to the target voice data (S206). The accuracy with which a smart device predicts the device control instruction required by a user is improved.
PCT/CN2022/102037 2022-03-10 2022-06-28 Method and apparatus for predicting a control instruction, storage medium and electronic apparatus WO2023168862A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210240955.9A CN114755931A (zh) 2022-03-10 2022-03-10 Method and apparatus for predicting a control instruction, storage medium and electronic apparatus
CN202210240955.9 2022-03-10

Publications (1)

Publication Number Publication Date
WO2023168862A1 (fr)

Family

ID=82327432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/102037 WO2023168862A1 (fr) 2022-03-10 2022-06-28 Method and apparatus for predicting a control instruction, storage medium and electronic apparatus

Country Status (2)

Country Link
CN (1) CN114755931A (fr)
WO (1) WO2023168862A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115309062A (zh) * 2022-07-20 2022-11-08 青岛海尔科技有限公司 Device control method and apparatus, storage medium, and electronic apparatus
CN115373283A (zh) * 2022-07-29 2022-11-22 青岛海尔科技有限公司 Method and apparatus for determining a control instruction, storage medium, and electronic apparatus
CN116631399B (zh) * 2023-07-06 2023-10-13 广州金燃智能系统有限公司 Artificial intelligence control system and method based on the Internet of Things

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110286601A (zh) * 2019-07-01 2019-09-27 珠海格力电器股份有限公司 Method and apparatus for controlling smart home devices, control device, and storage medium
CN110597082A (zh) * 2019-10-23 2019-12-20 北京声智科技有限公司 Smart home device control method and apparatus, computer device, and storage medium
CN110687817A (zh) * 2019-11-05 2020-01-14 深圳市欧瑞博科技有限公司 Smart home control method and apparatus, terminal, and computer-readable storage medium
CN110970019A (zh) * 2018-09-28 2020-04-07 珠海格力电器股份有限公司 Control method and apparatus for a smart home system
WO2020138911A1 (fr) * 2018-12-24 2020-07-02 Samsung Electronics Co., Ltd. Method and apparatus for controlling an intelligent device to perform corresponding operations
CN113111186A (zh) * 2021-03-31 2021-07-13 青岛海尔科技有限公司 Method for controlling household appliances, storage medium, and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103730120A (zh) * 2013-12-27 2014-04-16 深圳市亚略特生物识别科技有限公司 Voice control method and system for electronic devices
CN107908116B (zh) * 2017-10-20 2021-05-11 深圳市艾特智能科技有限公司 Voice control method, smart home system, storage medium, and computer device
CN112116910A (zh) * 2020-10-30 2020-12-22 珠海格力电器股份有限公司 Method and apparatus for recognizing voice instructions, storage medium, and electronic apparatus
CN112415908A (zh) * 2020-11-26 2021-02-26 珠海格力电器股份有限公司 Smart device control method and apparatus, readable storage medium, and computer device
CN113050445A (zh) * 2021-03-23 2021-06-29 安徽阜南县向发工艺品有限公司 Voice control system for smart homes


Also Published As

Publication number Publication date
CN114755931A (zh) 2022-07-15

Similar Documents

Publication Publication Date Title
WO2023168862A1 Method and apparatus for predicting a control instruction, storage medium and electronic apparatus
AU2019351894B2 (en) System and methods of operation of a smart plug
US10677486B1 (en) HVAC workload and cost logic
CN106952645B Voice instruction recognition method, voice instruction recognition apparatus, and air conditioner
WO2024045501A1 Method and apparatus for determining recommendation information, storage medium, and electronic apparatus
WO2023168853A1 Method and apparatus for predicting usage intention, storage medium, and electronic apparatus
US20140236324A1 (en) Apparatus and method for controlling terminal based on living pattern
CN116540556A Device control method and apparatus based on user habits
US20200084061A1 (en) System for monitoring and controlling device activity
WO2023165051A1 Identity determination method, storage medium, and electronic apparatus
CN113934926A Interaction scene recommendation method and apparatus, and electronic device
CN115200146A Method and apparatus for sending a shutdown instruction, storage medium, and electronic apparatus
JP2022174346A Information processing device, program, and system
CN117892171A Method and apparatus for generating scene rule information based on a GPT model
CN117908392A Device determination method and apparatus, storage medium, and electronic apparatus
CN110858065A Data processing method and data processing device
CN115168699A Behavior data processing method, storage medium, and electronic apparatus
CN115373283A Method and apparatus for determining a control instruction, storage medium, and electronic apparatus
CN114691730A Method and apparatus for prompting a storage location, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22930491

Country of ref document: EP

Kind code of ref document: A1