CN113380250A - Information processing method, device and system - Google Patents


Info

Publication number
CN113380250A
CN113380250A (application number CN202110662790.XA)
Authority
CN
China
Prior art keywords
command information
area
intelligent voice
information
shared
Prior art date
Legal status
Granted
Application number
CN202110662790.XA
Other languages
Chinese (zh)
Other versions
CN113380250B (en)
Inventor
杜亮
陈会敏
吴洪金
国德防
李海军
王彩平
Current Assignee
Qingdao Haier Air Conditioner Gen Corp Ltd
Qingdao Haier Air Conditioning Electric Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Air Conditioner Gen Corp Ltd
Qingdao Haier Air Conditioning Electric Co Ltd
Haier Smart Home Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Haier Air Conditioner Gen Corp Ltd, Qingdao Haier Air Conditioning Electric Co Ltd, and Haier Smart Home Co Ltd
Priority to CN202110662790.XA
Publication of CN113380250A
Application granted
Publication of CN113380250B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803: Home automation networks
    • H04L 12/2816: Controlling appliance services of a home automation network by calling their functionalities
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application relates to the technical field of smart household appliances and discloses an information processing method. The method includes: obtaining a plurality of pieces of command information pre-stored by a plurality of intelligent voice devices in a first area; determining, among the command information, shared command information and a control instruction corresponding to the shared command information; and sending the correspondence between the shared command information and the control instruction to the intelligent voice devices in the first area so that they store the correspondence. In this way, each intelligent voice device in the first area can conveniently obtain the correspondence between the shared command information and the control instruction, the situation in which a device cannot be controlled because of inaccurate command-word switching is effectively avoided, and when the user next controls an intelligent voice device in the first area, the user's control intention can be accurately determined from the stored correspondence. The application also discloses an information processing apparatus and system.

Description

Information processing method, device and system
Technical Field
The present application relates to the field of smart home appliances, and in particular, to an information processing method, apparatus and system.
Background
With the popularization of intelligent voice devices, devices such as smart air conditioners, smart refrigerators, smart washing machines, and smart wall lamps have gradually entered users' lives. As these devices diversify, users often purchase devices of different brands to meet their individual needs. However, because each brand presets its own command words at the factory, a user who owns devices of several brands must memorize each brand's command words before the devices can be controlled accurately. In practice, this forces the user to switch command words back and forth between brands, which is very inconvenient, and once the user utters the wrong command word, the device cannot determine the user's control intention.
Therefore, how to enable an intelligent device to determine the user's control intention more accurately is a technical problem that urgently needs to be solved.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of the embodiments; rather, it serves as a prelude to the more detailed description that is presented later.
The embodiment of the disclosure provides an information processing method, device and system, so as to solve the technical problem of how to enable intelligent equipment to determine the control intention of a user more accurately.
In some embodiments, the method comprises: obtaining a plurality of command information pre-stored by a plurality of intelligent voice devices in a first area; determining shared command information and a control instruction corresponding to the shared command information in the command information; and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the first area so that the plurality of intelligent voice devices in the first area store the corresponding relation.
In some embodiments, the method comprises: determining control type information corresponding to the command information; and determining shared command information in a plurality of command information corresponding to the same control type information.
In some embodiments, the method comprises: acquiring the use frequency of each of a plurality of command information corresponding to the same control type information; and determining the command information with the highest use frequency as the shared command information corresponding to the control type information.
In some embodiments, the method comprises: acquiring respective use time of a plurality of command information corresponding to the same control type information; and determining the command information corresponding to the use time closest to the current time as the shared command information corresponding to the control type information.
In some embodiments, the method comprises: and under the condition that the corresponding relation between the control instruction and other command information is prestored in the plurality of intelligent voice devices in the first area, controlling the plurality of intelligent voice devices in the first area to replace the prestored corresponding relation between the control instruction and other command information with the corresponding relation between the shared command information and the control instruction.
In some embodiments, the method comprises: determining a second area with the same type as the first area, and obtaining a plurality of command information pre-stored by a plurality of intelligent voice devices in the second area; determining shared command information and a control instruction corresponding to the shared command information in the command information; and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the second area so that the plurality of intelligent voice devices in the second area store the corresponding relation.
In some embodiments, the method comprises: determining a third area where the first area and the second area are located, and obtaining a plurality of command information pre-stored by a plurality of intelligent voice devices in the third area; determining shared command information and a control instruction corresponding to the shared command information in the command information; and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the third area so that the plurality of intelligent voice devices in the third area store the corresponding relation.
In some embodiments, the method comprises: acquiring the position information of a user and command information input by a user voice; according to the position information, determining target intelligent voice equipment in the plurality of intelligent voice equipment; matching a target control instruction corresponding to command information input by a user voice in target intelligent voice equipment; and controlling the target intelligent voice equipment to execute the target control instruction.
In some embodiments, the apparatus comprises a processor and a memory storing program instructions, the processor being configured to perform the aforementioned information processing method when executing the program instructions.
In some embodiments, the system comprises the information processing apparatus described above.
The information processing method, the device and the system provided by the embodiment of the disclosure can realize the following technical effects: obtaining a plurality of command information pre-stored by a plurality of intelligent voice devices in a first area; determining shared command information and a control instruction corresponding to the shared command information in the command information; and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the first area so that the plurality of intelligent voice devices in the first area store the corresponding relation. By the scheme, each intelligent voice device in the first area can conveniently acquire the corresponding relation between the shared command information and the control instruction, the situation that the intelligent voice device cannot be controlled due to inaccurate command information switching is effectively avoided, and when the user controls the intelligent voice device in the first area again, the control intention of the user can be accurately determined according to the corresponding relation of the shared command information.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings. The illustrations are not limiting, and elements bearing the same reference numerals denote like elements throughout.
FIG. 1 is a schematic diagram of an information processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an information processing apparatus provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another information processing apparatus provided by an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to the embodiments illustrated in the appended drawings. In the following description, numerous specific details are set forth for purposes of explanation in order to provide a thorough understanding of the disclosed embodiments; however, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices are shown in simplified form in order to simplify the drawings.
The terms "first," "second," and the like in the description and in the claims, and the above-described drawings of embodiments of the present disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the present disclosure described herein may be made. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
The term "plurality" means two or more unless otherwise specified.
In the embodiment of the present disclosure, the character "/" indicates that the preceding and following objects are in an or relationship. For example, A/B represents: a or B.
The term "and/or" is an associative relationship that describes objects, meaning that three relationships may exist. For example, a and/or B, represents: a or B, or A and B.
In a practical application, the server can obtain a plurality of pieces of command information pre-stored by, for example, an air conditioner and a water heater in the first area where the user is located. Among these pieces of command information, the one with the highest use frequency, or the one used most recently, is determined as the shared command information. The server then matches a control instruction corresponding to the shared command information according to the association, pre-stored at the server, between command information and control instructions, and sends the correspondence between the shared command information and the control instruction to all intelligent voice devices in the first area, so that each of them stores it. When the user next controls an intelligent voice device in the first area by voice, the device can determine the control instruction corresponding to the shared command information spoken by the user and execute it. In this way, every intelligent voice device in the first area conveniently obtains the correspondence between the shared command information and the control instruction, the situation in which a device cannot be controlled because of inaccurate command-word switching is effectively avoided, and the user's control intention can be accurately determined from the stored correspondence.
Fig. 1 is a schematic diagram of an information processing method provided in an embodiment of the present disclosure, and with reference to fig. 1, an embodiment of the present disclosure provides an information processing method, including:
and S11, the server side obtains a plurality of command information pre-stored by the plurality of intelligent voice devices in the first area.
S12, the server determines the shared command information and the control instruction corresponding to the shared command information in the command information.
And S13, the server sends the corresponding relation between the shared command information and the control instruction to the intelligent voice devices in the first area, so that the intelligent voice devices in the first area store the corresponding relation.
In step S11, the server obtains a plurality of pieces of command information pre-stored by each of a plurality of intelligent voice devices in the first area.
In this scheme, the server may be a cloud server. An intelligent voice device may be a smart household device with a voice recognition function, such as a smart refrigerator, smart television, smart air conditioner, or smart water heater. The first area may be preset according to the user's needs; for example, it may be bedroom 1, bedroom 2, the living room, the kitchen, or the study, or it may simply be the room in which the user is located. Specifically, the server may connect to the plurality of intelligent voice devices in the first area and obtain the command information pre-stored in each connected device. Here, a piece of command information is a voice command input by the user. For example, when the user says "please increase the display brightness" to intelligent device A, the command "please increase the display brightness" is pre-stored in device A. In another example, the server may connect to an APP (application) bound to the plurality of intelligent voice devices and obtain the pre-stored command information through it. Further, the pieces of command information pre-stored in intelligent voice device A may be sent to the server connected to it, so that the server obtains the command information sent by device A. With this scheme, the server can effectively acquire the command information pre-stored by the intelligent voice devices in the first area.
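The collection step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `Device` class, its attribute names, and the sample command phrases are all assumptions made for the example.

```python
# Sketch of step S11: the server gathers the command information
# pre-stored by each intelligent voice device in the first area.
# Device and its fields are hypothetical illustrations.

class Device:
    def __init__(self, name, prestored_commands):
        self.name = name
        self.prestored_commands = prestored_commands

def collect_command_information(devices):
    """Gather every pre-stored command phrase from the given devices."""
    commands = []
    for device in devices:
        commands.extend(device.prestored_commands)
    return commands

bedroom_devices = [
    Device("air_conditioner", ["turn on the AC", "too cold"]),
    Device("water_heater", ["heat the water", "turn on the AC"]),
]
all_commands = collect_command_information(bedroom_devices)
```

Note that the same phrase can appear more than once when several devices pre-store it; the later frequency-based selection relies on exactly these duplicates.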
In another example, an intelligent gateway in the first area may collect the command information pre-stored by each intelligent voice device connected to it and forward the collected information to the server. With this scheme, the command information from the individual devices is aggregated at the gateway before being sent to the server, which effectively simplifies the server's data processing flow, reduces the processing space required, and speeds up the server's data processing.
In step S12, the server determines, among the plurality of pieces of command information, the shared command information and the control instruction corresponding to it.
In this scheme, the associations between a plurality of pieces of shared command information and control instructions can be pre-stored at the server. For example, if the shared command information is "turn on intelligent device B", the associated control instruction may be the one that controls device B to turn on. Further, the shared command information may be determined among the plurality of pieces of command information. In one example, the frequency with which the user uses each piece of command information may be acquired, and the most frequently used piece may be determined as the shared command information; this effectively captures the user's habits in controlling the intelligent voice devices. In another example, command information that the user has used on consecutive days may be identified among the pieces of command information and determined as the shared command information; this effectively incorporates the user's control preferences and ensures the accuracy of the shared command information. After the shared command information is determined, the corresponding control instruction is determined from the associations pre-stored at the server.
In step S13, the server may send the correspondence between the shared command information and the control instruction to the plurality of intelligent voice devices in the first area, so that these devices store the correspondence.
In this scheme, after the shared command information and the control instruction are determined, the correspondence between them can be established and sent to all intelligent voice devices in the first area, so that every device in the first area acquires and stores it and thereby learns the user's control habits. This avoids the situation in which the user must keep switching command words when controlling devices in the first area, which would degrade the devices' recognition of the command information.
With the information processing method provided by the embodiment of the present disclosure, a plurality of pieces of command information pre-stored by a plurality of intelligent voice devices in a first area are obtained; shared command information and a corresponding control instruction are determined among them; and the correspondence between the shared command information and the control instruction is sent to the intelligent voice devices in the first area so that they store it. In this way, each intelligent voice device in the first area can conveniently obtain the correspondence, the situation in which a device cannot be controlled because of inaccurate command-word switching is effectively avoided, and when the user next controls an intelligent voice device in the first area, the user's control intention can be accurately determined from the stored correspondence.
Optionally, in order to determine the shared command information more accurately, the server may determine the control type information corresponding to each piece of command information and then determine the shared command information among the pieces of command information corresponding to the same control type information.
In this scheme, the control type information may include a switch type, a function-switch type, a function-adjustment type, and the like. For example, command information that turns the air conditioner on corresponds to the switch type; command information that enables the television's elderly mode corresponds to the function-switch type; and command information that sets the refrigerator's fresh-keeping temperature to 15 °C corresponds to the function-adjustment type. Specifically, the server may determine the control type information corresponding to each piece of obtained command information and classify the command information by control type. Shareable command information can then be determined within each group of command information of the same control type. In this way, the command information is grouped by control type, a shared command word is determined for each group, and the correspondence between each group's shared command information and its control instruction is sent to all intelligent voice devices in the first area, so that the devices learn the user's voice-control habits along multiple dimensions.
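The grouping step above can be sketched as a simple classification by control type. The type labels and phrases below are illustrative assumptions, not taken from the patent:

```python
# Sketch: group command information by its control type, so that a
# shared command word can later be chosen within each group.
from collections import defaultdict

def group_by_control_type(commands):
    """commands: list of (phrase, control_type) pairs."""
    groups = defaultdict(list)
    for phrase, control_type in commands:
        groups[control_type].append(phrase)
    return dict(groups)

commands = [
    ("turn on the AC", "switch"),
    ("power on the air conditioner", "switch"),
    ("set fridge to 15 degrees", "function_adjustment"),
]
groups = group_by_control_type(commands)
```

Grouping first ensures that a frequently used switch command never displaces, say, a temperature-adjustment command: selection happens only among candidates of the same control type.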
Optionally, in order to determine the shared command information more accurately, in this scheme, the server may obtain respective use frequencies of multiple command information corresponding to the same control type information; and determining the command information with the highest use frequency as the shared command information corresponding to the control type information.
In this scheme, the server can acquire the use frequency of each piece of command information of the same control type and determine the most frequently used piece as the shared command information. For example, if the acquired use frequencies for one control type are "too cold" 3 times, "very cold" 1 time, and "somewhat cold" 2 times, then "too cold", the most frequently used, is determined as the shared command information. With this scheme, the shared command information can be determined more accurately.
Optionally, in order to determine the shared command information more accurately, in the present solution, respective use times of a plurality of command information corresponding to the same control type information are obtained; and determining the command information corresponding to the use time closest to the current time as the shared command information corresponding to the control type information.
In this scheme, the server may obtain the use time of each piece of command information of the same control type and determine the piece whose use time is closest to the current time as the shared command information. For example, if the acquired use times for one control type are "too cold" on March 23, "very cold" on March 24, and "somewhat cold" on March 21, then "very cold", the most recently used, is determined as the shared command information. With this scheme, the shared command information can be determined more accurately.
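Both selection strategies described above (highest use frequency, most recent use) can be sketched as follows. The counts and dates mirror the "too cold / very cold / somewhat cold" example; the data layout is an assumption for illustration:

```python
# Sketch: choose the shared command word within one control-type group,
# either by highest use frequency or by most recent use.
from datetime import date

def shared_by_frequency(usage_counts):
    """usage_counts: {phrase: number of uses}. Highest count wins."""
    return max(usage_counts, key=usage_counts.get)

def shared_by_recency(last_used):
    """last_used: {phrase: date of last use}. Most recent use wins."""
    return max(last_used, key=last_used.get)

usage_counts = {"too cold": 3, "very cold": 1, "somewhat cold": 2}
last_used = {
    "too cold": date(2021, 3, 23),
    "very cold": date(2021, 3, 24),
    "somewhat cold": date(2021, 3, 21),
}
by_frequency = shared_by_frequency(usage_counts)  # "too cold"
by_recency = shared_by_recency(last_used)         # "very cold"
```

Note that the two strategies can disagree, as they do here; the patent presents them as alternative embodiments rather than combining them.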
Optionally, in order to correct the correspondence between the command information and the control instruction, which is factory-set in the smart voice devices in the first area, in this scheme, when the correspondence between the control instruction and the other command information is pre-stored in the plurality of smart voice devices in the first area, the plurality of smart voice devices in the first area are controlled to replace the correspondence between the pre-stored control instruction and the other command information with the correspondence between the shared command information and the control instruction.
In this scheme, intelligent voice devices of different brands leave the factory with correspondences between a number of command words and control instructions already pre-stored. Specifically, if a correspondence between the control instruction and some other command information is pre-stored in an intelligent voice device in the first area, that device is controlled to replace the pre-stored correspondence with the correspondence between the control instruction and the shared command information. In this way, the factory-set correspondences in the devices in the first area are corrected according to the user's voice-control habits, and because the correspondence is updated by replacement, the storage required for correspondences is greatly reduced. Command-word sharing across the various intelligent voice devices is thus achieved, and the user can conveniently control devices of different brands with the shared command words. In another example, after a device in the first area receives the correspondence, it may store the correspondence in a preference library so as to distinguish it from the correspondences pre-stored at the factory; then, when another user controls a device in the first area, that device can still be controlled through its factory-preset command words.
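The replacement step can be sketched on a per-device command table. The table layout (phrase mapped to an instruction code) and the instruction names are assumptions for illustration:

```python
# Sketch: when a device already maps some other command word to the
# same control instruction, drop the old entry and bind the shared
# command word instead, so the table is updated by replacement
# rather than growing.

def apply_shared_command(device_table, shared_phrase, control_instruction):
    """device_table: {phrase: control instruction code}. Mutated in place."""
    stale = [phrase for phrase, instr in device_table.items()
             if instr == control_instruction and phrase != shared_phrase]
    for phrase in stale:
        del device_table[phrase]
    device_table[shared_phrase] = control_instruction
    return device_table

# Factory-preset table of one device in the first area.
table = {"power on": "AC_ON", "cooler please": "AC_TEMP_DOWN"}
apply_shared_command(table, "too cold", "AC_TEMP_DOWN")
```

After the call, "cooler please" is gone and "too cold" triggers the same instruction, which is the replacement behavior the text describes.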
Optionally, if no intelligent voice device exists in the first area, determining a second area with the same type as the first area, and obtaining a plurality of command information pre-stored by each of a plurality of intelligent voice devices in the second area; determining shared command information and a control instruction corresponding to the shared command information in the command information; and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the second area so that the plurality of intelligent voice devices in the second area store the corresponding relation.
In this scheme, if no intelligent voice device exists in the first area, a second area of the same type as the first area is determined. Specifically, areas of the same type may be identified by their names; for example, if the first area is bedroom 1, bedroom 2 may be determined as a second area of the same type. After the second area is determined, the correspondence between the shared command information and the control instruction is sent to the plurality of intelligent voice devices in the second area, so that they store it. In this way, each intelligent voice device in the second area can conveniently obtain the correspondence, the situation in which a device cannot be controlled because of inaccurate command-word switching is effectively avoided, and when the user controls an intelligent voice device in the second area again, the user's control intention can be accurately determined from the stored correspondence. Command-word sharing among intelligent voice devices in rooms of the same type is thus achieved, and the user can control the devices in the corresponding room according to the control requirement.
Optionally, if no intelligent voice device exists in either the first area or the second area, determining a third area where the first area and the second area are located, and obtaining a plurality of command information pre-stored by the plurality of intelligent voice devices in the third area; determining shared command information and a control instruction corresponding to the shared command information in the command information; and sending the corresponding relation between the shared command information and the control instruction to the intelligent voice devices in the third area so that the intelligent voice devices in the third area store the corresponding relation.
In this scheme, if no intelligent voice device exists in either the first area or the second area, a third area in which the first area and the second area are located may be determined. Specifically, the third area is larger than the first area and the second area combined; for example, the third area may be the house in which the first area and the second area are located. After the third area is determined, the correspondence between the shared command information and the control instruction is sent to the plurality of intelligent voice devices in the third area, so that those devices store the correspondence. With this scheme, each intelligent voice device in the third area can conveniently obtain the correspondence between the shared command information and the control instruction, which effectively avoids the situation in which a device cannot be controlled because the command information does not match accurately, and when the user next controls an intelligent voice device in the third area, the user's control intention can be accurately determined according to the correspondence of the shared command information. Command information is thereby shared among the intelligent voice devices of the whole house in the smart home, and the user can control the intelligent voice devices of the whole house according to the control requirement.
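The area fallback described across the two schemes above (first area, then a same-type second area, then the enclosing third area such as the whole house) can be sketched as follows. The function and parameter names are illustrative assumptions, not terms from the disclosure.

```python
def resolve_sharing_area(first_area, occupied_areas, same_type_map, whole_house):
    """Pick the area whose devices will receive the shared correspondence:
    the first area if it contains devices; otherwise a same-type second
    area that contains devices; otherwise the whole house (third area)."""
    if first_area in occupied_areas:
        return first_area
    second = same_type_map.get(first_area)
    if second in occupied_areas:
        return second
    return whole_house

occupied = {"bedroom 2", "living room"}  # areas that currently contain devices
print(resolve_sharing_area("bedroom 1", occupied, {"bedroom 1": "bedroom 2"}, "house"))  # bedroom 2
print(resolve_sharing_area("study", occupied, {}, "house"))  # house
```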
Optionally, in the present solution, the location information of the user and command information input by the user by voice may be obtained; a target intelligent voice device is determined among the plurality of intelligent voice devices according to the location information; a target control instruction corresponding to the command information input by the user's voice is matched on the target intelligent voice device; and the target intelligent voice device is controlled to execute the target control instruction.
In this embodiment, the command information input by the user may be the shared command information. Specifically, the command information input by the user by voice may be collected by a mobile terminal device associated with the user and sent to a server. In another example, the command information input by the user may be collected directly by the intelligent voice device, and the target control instruction corresponding to the voice-input command information is matched; meanwhile, the user's location information may be obtained from the positioning information of the mobile terminal, and the intelligent voice device closest to the user is determined as the target intelligent voice device. Further, the target intelligent voice device may be controlled to execute the target control instruction. With this scheme, once the intelligent voice devices have learned the user's device-control habits, the user's control intention can be determined accurately, so that the intelligent voice devices are controlled accurately.
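The nearest-device selection described above can be sketched as follows; the device records, coordinate representation, and function name are assumptions for illustration (the disclosure only says the closest device is chosen, not how positions are represented).

```python
import math

def nearest_device(user_pos, devices):
    """Choose the intelligent voice device closest to the user's position
    (e.g. as reported by the user's mobile terminal) as the target device."""
    return min(devices, key=lambda d: math.dist(user_pos, d["pos"]))

devices = [
    {"name": "ac_livingroom", "pos": (0.0, 0.0)},
    {"name": "ac_bedroom", "pos": (5.0, 2.0)},
]
# The user at (4.0, 2.0) is 1.0 unit from the bedroom AC and ~4.47 from the other
print(nearest_device((4.0, 2.0), devices)["name"])  # ac_bedroom
```

`math.dist` requires Python 3.8+; for earlier versions the Euclidean distance can be computed with `math.hypot`.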
As shown in fig. 2, an embodiment of the present disclosure provides an information processing apparatus, which includes an obtaining module 21, a determining module 22, and a sending module 23. The obtaining module 21 is configured to obtain a plurality of pieces of command information pre-stored by each of a plurality of intelligent voice devices in a first area; the determination module 22 is configured to determine, among the plurality of command information, shared command information and a control instruction corresponding to the shared command information; the sending module 23 is configured to send the correspondence between the shared command information and the control instruction to the plurality of smart voice devices in the first area, so that the plurality of smart voice devices in the first area store the correspondence.
With the information processing apparatus provided by the embodiment of the present disclosure, a plurality of pieces of command information pre-stored by a plurality of intelligent voice devices in a first area are obtained; shared command information and a control instruction corresponding to the shared command information are determined among the plurality of pieces of command information; and the correspondence between the shared command information and the control instruction is sent to the plurality of intelligent voice devices in the first area, so that those devices store the correspondence. With this scheme, each intelligent voice device in the first area can conveniently obtain the correspondence between the shared command information and the control instruction, which effectively avoids the situation in which a device cannot be controlled because the command information does not match accurately, and when the user next controls an intelligent voice device in the first area, the user's control intention can be accurately determined according to the correspondence of the shared command information.
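The obtain/determine/send pipeline performed by the three modules can be sketched as follows, using the usage-frequency rule of claim 3 to pick the shared phrase. All names and the record format are illustrative assumptions.

```python
from collections import Counter

def determine_shared_command(phrases):
    """Among command phrases of the same control type, pick the most
    frequently used phrase as the shared command (the rule of claim 3)."""
    phrase, _ = Counter(phrases).most_common(1)[0]
    return phrase

def build_correspondence(records):
    """records: list of (control_type, phrase, instruction) tuples gathered
    from the devices in the area. For each control type, map the shared
    phrase to its control instruction; the result is what the sending
    module would distribute to every device in the area."""
    by_type = {}
    for ctype, phrase, instr in records:
        by_type.setdefault(ctype, []).append((phrase, instr))
    correspondence = {}
    for pairs in by_type.values():
        shared = determine_shared_command([p for p, _ in pairs])
        correspondence[shared] = next(i for p, i in pairs if p == shared)
    return correspondence

records = [
    ("power", "turn on the AC", "AC_ON"),
    ("power", "turn on the AC", "AC_ON"),
    ("power", "start cooling", "AC_ON"),
]
print(build_correspondence(records))  # {'turn on the AC': 'AC_ON'}
```

The recency rule of claim 4 could be substituted by sorting the records by use time instead of counting frequency.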
As shown in fig. 3, an embodiment of the present disclosure provides an information processing apparatus including a processor (processor)100 and a memory (memory) 101. Optionally, the apparatus may also include a Communication Interface (Communication Interface)102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via a bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call the logic instructions in the memory 101 to execute an information processing method of the above-described embodiment.
In addition, when sold or used as an independent product, the logic instructions in the memory 101 may be implemented in the form of software functional units and stored in a computer-readable storage medium.
The memory 101, as a computer-readable storage medium, may be used for storing software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and performs data processing by running the program instructions/modules stored in the memory 101, that is, it implements the information processing method in the above-described embodiments.
The memory 101 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random-access memory, and may also include a nonvolatile memory.
The embodiment of the disclosure provides an information processing system, which comprises the information processing device.
The disclosed embodiments provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-mentioned information processing method.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform one of the above-mentioned information processing methods.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code, and may also be a transient storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same and similar parts of the respective embodiments may be referred to one another. For the methods, products, etc. disclosed in the embodiments, where they correspond to a method section disclosed in the embodiments, reference may be made to the description of that method section.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (10)

1. An information processing method characterized by comprising:
obtaining a plurality of command information pre-stored by a plurality of intelligent voice devices in a first area;
determining shared command information and a control instruction corresponding to the shared command information in the command information;
and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the first area so that the plurality of intelligent voice devices in the first area store the corresponding relation.
2. The method of claim 1, wherein the determining shared command information among the plurality of command information comprises:
determining control type information corresponding to the plurality of command information;
and determining shared command information in a plurality of command information corresponding to the same control type information.
3. The method according to claim 2, wherein the determining shared command information among the plurality of command information corresponding to the same control type information includes:
acquiring the use frequency of each of a plurality of command information corresponding to the same control type information;
and determining the command information with the highest use frequency as the shared command information corresponding to the control type information.
4. The method according to claim 2, wherein the determining shared command information among the plurality of command information corresponding to the same control type information includes:
acquiring respective use time of a plurality of command information corresponding to the same control type information;
and determining the command information corresponding to the use time closest to the current time as the shared command information corresponding to the control type information.
5. The method of claim 1, further comprising:
and under the condition that the corresponding relation between the control command and other command information is prestored in the plurality of intelligent voice devices in the first area, controlling the plurality of intelligent voice devices in the first area to replace the prestored corresponding relation between the control command and other command information with the corresponding relation between the shared command information and the control command.
6. The method of claim 1, wherein if no intelligent voice device exists in the first area, the method further comprises:
determining a second area with the same type as the first area, and obtaining a plurality of command information pre-stored by a plurality of intelligent voice devices in the second area;
determining shared command information and a control instruction corresponding to the shared command information in the command information;
and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the second area so that the plurality of intelligent voice devices in the second area store the corresponding relation.
7. The method according to claim 1 or 6, wherein if no intelligent voice device exists in either the first area or the second area, the method further comprises:
determining a third area where the first area and the second area are located, and obtaining a plurality of command information pre-stored by a plurality of intelligent voice devices in the third area;
determining shared command information and a control instruction corresponding to the shared command information in the command information;
and sending the corresponding relation between the shared command information and the control instruction to the plurality of intelligent voice devices in the third area so that the plurality of intelligent voice devices in the third area store the corresponding relation.
8. The method of any one of claims 1 to 7, further comprising:
acquiring the position information of a user and command information input by a user voice;
according to the position information, determining target intelligent voice equipment in the plurality of intelligent voice equipment;
matching a target control instruction corresponding to the command information input by the user voice in the target intelligent voice equipment;
and controlling the target intelligent voice equipment to execute the target control instruction.
9. An information processing apparatus comprising a processor and a memory storing program instructions, characterized in that the processor is configured to execute the information processing method according to any one of claims 1 to 8 when executing the program instructions.
10. An information processing system characterized by comprising the information processing apparatus according to claim 9.
CN202110662790.XA 2021-06-15 2021-06-15 Information processing method, device and system Active CN113380250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110662790.XA CN113380250B (en) 2021-06-15 2021-06-15 Information processing method, device and system


Publications (2)

Publication Number Publication Date
CN113380250A true CN113380250A (en) 2021-09-10
CN113380250B CN113380250B (en) 2023-10-20

Family

ID=77574439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110662790.XA Active CN113380250B (en) 2021-06-15 2021-06-15 Information processing method, device and system

Country Status (1)

Country Link
CN (1) CN113380250B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1971557A (en) * 2005-11-25 2007-05-30 英业达股份有限公司 Glossary shared system and method
US20190020493A1 (en) * 2017-07-12 2019-01-17 Universal Electronics Inc. Apparatus, system and method for directing voice input in a controlling device
CN110286601A (en) * 2019-07-01 2019-09-27 珠海格力电器股份有限公司 Control the method, apparatus, control equipment and storage medium of smart home device
CN110867188A (en) * 2018-08-13 2020-03-06 珠海格力电器股份有限公司 Method and device for providing content service, storage medium and electronic device
CN112151035A (en) * 2020-10-14 2020-12-29 珠海格力电器股份有限公司 Voice control method and device, electronic equipment and readable storage medium
CN112331190A (en) * 2020-09-04 2021-02-05 深圳Tcl新技术有限公司 Intelligent equipment and method and device for self-establishing voice command thereof


Also Published As

Publication number Publication date
CN113380250B (en) 2023-10-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant