CN107528753B - Intelligent household voice control method, intelligent equipment and device with storage function - Google Patents
- Publication number
- CN107528753B (application number CN201710706333.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- H04L12/282—Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
- G05B19/4185—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by the network communication
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
Abstract
The invention discloses a smart home voice control method, a smart device, and a device with a storage function. The method determines the target device by combining device information with position information. First, the device information and corresponding position information of each smart device are acquired. After a voice control instruction from the user is received, the instruction is parsed, and the device information, position information and action information of the device to be controlled are extracted. The device information and position information of the device to be controlled are then matched against the acquired device information and position information of each smart device to find the target device. If the matching succeeds, the target device has been found, and the action information is sent to it so that it executes the corresponding action. In this way, the user is spared the trouble of remembering a large number of device names, and the target device is located accurately, conveniently and quickly.
Description
Technical Field
The invention relates to the technical field of smart homes, and in particular to a smart home voice control method, a smart device, and a device with a storage function.
Background
As living standards improve, people place higher demands on their living environment and pay increasing attention to the comfort, safety and convenience of home life. The smart home meets these demands: it integrates computing, automatic control, artificial intelligence and network communication technologies, connects the various terminal devices in the home environment through a home network, and realizes intelligent control of that environment. With advances in technology, a steady stream of products controllable through speech recognition has emerged, and voice control has further enhanced the convenience of the smart home.
However, to use a voice-controlled device, the device must first be woken up, and the wake word is usually a preset device name: the user designates the target device by calling its specific name and then issues voice commands. This works well when there is only a single smart device, but as the number of voice-enabled devices in a household grows, each with its own name, the user must remember a large number of device names and the experience degrades. Moreover, if two identical devices exist, both may be woken and controlled at the same time, resulting in erroneous control.
Disclosure of Invention
The invention provides a smart home voice control method, a smart device, and a device with a storage function, to solve the problems of the existing approach in which the target device is designated by calling its device name: the user must remember many device names, the operation is cumbersome, the user experience suffers, and identical devices are prone to erroneous control.
In order to solve the above technical problems, one technical scheme adopted by the invention is to provide a smart home voice control method comprising the following steps:
acquiring the device information and corresponding position information of each smart device;
receiving a voice control instruction from the user, the instruction comprising the device information, position information and action information of the device to be controlled;
parsing the voice control instruction and extracting the device information, position information and action information of the device to be controlled;
matching the device information and position information of the device to be controlled against the device information and position information of each smart device to find the target device;
if the matching succeeds, the target device has been found, and the action information is sent to the target device to control it to execute the corresponding action.
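The steps above can be sketched in miniature as follows. This is a hypothetical illustration, not the patented implementation: the registry contents, field names, and the already-transcribed instruction fields are assumptions, and speech recognition itself is out of scope.

```python
# Hypothetical sketch of the hub-side flow: a registry of (device, location)
# pairs stands in for the acquired device/position information, and the
# instruction is assumed to be already transcribed and parsed into fields.

# Step 101: device information and corresponding position information.
registry = [
    {"device": "television", "location": "living room", "id": "tv-01"},
    {"device": "lamp", "location": "dining room", "id": "lamp-03"},
    {"device": "lamp", "location": "living room", "id": "lamp-01"},
]

def find_target(device, location):
    """Step 104: match device info + position info against the registry."""
    matches = [d for d in registry
               if d["device"] == device and d["location"] == location]
    # Require a unique match; two identical devices in the same place
    # would be ambiguous, mirroring the erroneous-control problem.
    return matches[0] if len(matches) == 1 else None

def handle_instruction(device, location, action):
    """Steps 103-105: extracted fields -> target lookup -> dispatch."""
    target = find_target(device, location)
    if target is None:
        return "matching failed: target device not found"
    # A real system would send `action` to the device over the home network.
    return f"send '{action}' to {target['id']}"

print(handle_instruction("lamp", "dining room", "turn off"))
```

Note how two lamps coexist without ambiguity because the location disambiguates them, which is the core idea of the method.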
In order to solve the above technical problems, another technical scheme adopted by the invention is to provide a smart home voice control method comprising the following steps:
the smart device acquires its own device information and corresponding position information;
receives a voice control instruction from the user, the instruction comprising the device information, position information and action information of the device to be controlled;
parses the voice control instruction and extracts the device information, position information and action information of the device to be controlled;
matches the device information and position information of the device to be controlled against its own acquired device information and position information to judge whether it is the target device;
and, if the matching succeeds, determines that it is the target device and executes the action corresponding to the action information.
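The device-side variant above, in which each device decides for itself whether it is the target, can be sketched as follows. Class and field names are illustrative assumptions, not part of the patent.

```python
# Hypothetical device-side sketch: each smart device holds its own device
# information and position information, and acts only when both match.

class SmartDevice:
    def __init__(self, name, location):
        self.name = name          # the device's own device information
        self.location = location  # the device's own position information

    def on_instruction(self, device, location, action):
        """Match the extracted instruction fields against this device's
        own info; execute the action only if this device is the target."""
        if (device, location) == (self.name, self.location):
            return f"{self.name}@{self.location}: executing '{action}'"
        return None  # not the target; ignore the instruction

lamp = SmartDevice("lamp", "dining room")
print(lamp.on_instruction("lamp", "dining room", "turn off"))
```

The design choice here is that no central hub is needed: every device receives the same instruction, and non-matching devices simply stay silent.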
In order to solve the above technical problems, another technical scheme adopted by the invention is to provide a smart device comprising a processor which, when executing program data, performs the steps of:
acquiring the device information and corresponding position information of each smart device;
receiving a voice control instruction from the user, the instruction comprising the device information, position information and action information of the device to be controlled;
parsing the voice control instruction and extracting the device information, position information and action information of the device to be controlled;
matching the device information and position information of the device to be controlled against the device information and position information of each smart device to find the target device;
if the matching succeeds, the target device has been found, and the action information is sent to the target device to control it to execute the corresponding action.
Alternatively, the processor, when executing the program data, performs the steps of:
the smart device acquires its own device information and corresponding position information;
receives a voice control instruction from the user, the instruction comprising the device information, position information and action information of the device to be controlled;
parses the voice control instruction and extracts the device information, position information and action information of the device to be controlled;
matches the device information and position information of the device to be controlled against its own acquired device information and position information to judge whether it is the target device;
and, if the matching succeeds, determines that it is the target device and executes the action corresponding to the action information.
In order to solve the above technical problems, another technical scheme adopted by the invention is to provide a device with a storage function, in which program data are stored; when executed by a processor, the program data implement the steps of:
acquiring the device information and corresponding position information of each smart device;
receiving a voice control instruction from the user, the instruction comprising the device information, position information and action information of the device to be controlled;
parsing the voice control instruction and extracting the device information, position information and action information of the device to be controlled;
matching the device information and position information of the device to be controlled against the device information and position information of each smart device to find the target device;
if the matching succeeds, the target device has been found, and the action information is sent to the target device to control it to execute the corresponding action.
Alternatively, when executed by a processor, the program data implement the steps of:
the smart device acquires its own device information and corresponding position information;
receives a voice control instruction from the user, the instruction comprising the device information, position information and action information of the device to be controlled;
parses the voice control instruction and extracts the device information, position information and action information of the device to be controlled;
matches the device information and position information of the device to be controlled against its own acquired device information and position information to judge whether it is the target device;
and, if the matching succeeds, determines that it is the target device and executes the action corresponding to the action information.
The beneficial effects of the invention are as follows. Unlike the prior art, the invention provides a smart home voice control method that first acquires the device information and corresponding position information of each smart device in the space. When the user performs a voice control operation, the user issues a voice control instruction containing the device information, position information and action information of the device to be controlled, where the device information and position information designate the device to be controlled and the action information controls the target device to execute the corresponding action. The instruction receiving end receives the instruction, parses it, and extracts the device information, position information and action information of the device to be controlled. It then matches the device information and position information of the device to be controlled against those of each smart device to find the target device. If the matching succeeds, the target device has been found, and the action information in the voice control instruction is sent to the target device to control it to execute the corresponding action.
Because the method determines the target device by combining device information with position information, matching the device information and position information in the user's voice control instruction against the pre-acquired device information and position information of each smart device, the user is spared the trouble of remembering a large number of device names, and the target device is located accurately, conveniently and quickly.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the smart home voice control method of the present invention;
FIG. 2 is a schematic partition diagram of an embodiment of dividing the space according to the user's current position in the method of FIG. 1;
FIG. 3 is a schematic house-layout diagram of an application scenario of the method of FIG. 1;
FIG. 4 is a schematic flow chart of another embodiment of the smart home voice control method of the present invention;
FIG. 5 is a schematic flow chart of a further embodiment of the smart home voice control method of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of the smart device of the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of the device with a storage function of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of the smart home voice control method of the present invention. As shown in fig. 1, the method of this embodiment includes the following steps.
101: Acquire the device information and corresponding position information of each smart device.
The smart devices are devices that can be controlled by voice; in general they include devices with fixed positions (such as lamps, televisions, computers, air conditioners and fans) and movable devices (such as sweeping robots). The device information and corresponding position information of each smart device can be obtained through real-time monitoring by a depth camera, which may be a standalone camera or one built into a particular smart device. Of course, the information may also be obtained by other smart devices with this capability; the invention does not limit this. For ease of explanation, this embodiment takes the depth camera as an example.
Specifically, each smart device in the space is recognized by the depth camera, and the device information of each device — one or more of device name, brand, model and the like — is then obtained by searching online or searching system data. In the online approach, after a smart device is recognized by the depth camera, it is identified through an online search-and-compare step and its device information is retrieved. For example, the depth camera photographs a smart device, an online picture search is performed, the device is identified by comparison, and its device information is obtained. In the system-data approach, the user must enter the device information of each smart device into the system in advance; after the depth camera recognizes an object using its object-recognition function, the object is compared against the pre-entered information to retrieve the corresponding device information.
At the same time, the position information corresponding to each smart device — such as its absolute position in the space and/or its position relative to the user's current position — is obtained through the depth camera.
Absolute position information may include spatial coordinate information and/or orientation information relative to fixed objects in the same space. For spatial coordinates, the depth camera can build a three-dimensional model of the space to generate a coordinate system, after which objects are recognized and filtered by image recognition to obtain the spatial coordinates of each smart device; coordinates of fixed devices can be generated by modeling in advance, while real-time coordinates of movable devices can be generated by tracking. For orientation relative to fixed objects, the depth camera performs object recognition by image-recognition technology and derives by analysis the orientation of each smart device relative to fixed objects in the space, such as "television: in the living room / against the wall" or "lamp: in the living room / beside the television".
Position information relative to the user's current position can be obtained by monitoring with the depth camera: the user's real-time position and posture in the space are acquired, where posture includes the user's facing direction and limb features (hands, feet, head and so on); the space is then divided according to this real-time position and posture; and finally, the position of each smart device relative to the user's current position under this division is derived from the device's absolute position information.
For the space division, refer to fig. 2, a schematic partition diagram of an embodiment that divides the space according to the user's current position. As shown in fig. 2, the space can be divided into front, rear, left, right, front left, front right, rear left, and above (overhead). In other embodiments, other division schemes may be adopted; for example, an orientation function can be added to the depth camera or another capable smart device, and the space divided into east, south, west, north, northeast, southeast, northwest, southwest, above (overhead) and below (underfoot), and so on.
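The user-relative division described above amounts to classifying the angle from the user's facing direction to each device into named sectors. The sketch below is an illustrative assumption (a flat 2-D model with eight 45-degree sectors, including a "rear right" sector for symmetry), not the patented algorithm:

```python
import math

def relative_direction(user_xy, facing_deg, device_xy):
    """Return the sector (front / front left / left / ... ) in which
    device_xy lies, as seen from user_xy when the user faces
    `facing_deg` degrees (0 = +x axis, counterclockwise positive)."""
    dx = device_xy[0] - user_xy[0]
    dy = device_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))   # world-frame angle
    rel = (bearing - facing_deg + 360) % 360     # angle in the user's frame
    # Eight 45-degree sectors centred on the eight directions.
    sectors = ["front", "front left", "left", "rear left",
               "rear", "rear right", "right", "front right"]
    return sectors[int(((rel + 22.5) % 360) // 45)]

# A user at the origin facing along +x sees a device at (0, 1) on the left.
print(relative_direction((0, 0), 0, (0, 1)))
```

Each device's absolute coordinates, combined with the user's tracked position and facing, then yield phrases like "the television in front of me" for matching.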
After the device information and corresponding position information of each smart device are obtained, a device-position relation topology can be established from them, with each device's details mapped to its position in the space, so that the acquired information is integrated and managed.
For example, each smart device in a space is recognized by the depth camera, and its device information — device name, brand and model — is obtained. At the same time, the depth camera builds a three-dimensional model of the space to generate spatial coordinates, objects are recognized and filtered by image recognition to obtain each device's coordinates, and each device's orientation relative to fixed objects is derived by analysis, yielding the position information of each device. A device-position relation topology is then built from the name, brand, model, spatial coordinates and orientation of each smart device, as shown in Table 1 below:
TABLE 1

| Device name | Brand | Model | Spatial coordinates (x, y, z) | Orientation information |
| --- | --- | --- | --- | --- |
| Television | TCL | 55H7800A-UD | (1.5, 0.5, 1) | In the living room / against the wall |
| Dining room lamp | TCL | TCLSZ-0447 | (2, 1.5, 2.7) | In the dining room / on the ceiling |
| Air conditioner | XXX | KFR-XXX | (1, 0, 2.2) | In the living room / beside the television |
| Refrigerator | TCL | BCD-282KR50 | (1.7, 1, 2) | In the living room / against the wall |
To further ease the integrated management of the smart devices, the devices can be numbered — directly or by device type — and tabulated by number. The device information and position information of each smart device can also be tabulated directly, or numbered according to other rules (such as by region).
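The device-position relation table and the per-type numbering just described can be held as a simple keyed structure. The sketch below is illustrative only; field names and the numbering format are assumptions:

```python
# Hypothetical sketch: Table 1 entries as records, numbered per device
# type ("lamp-1", "television-1", ...) as the numbering scheme suggests.

devices = [
    {"name": "television", "brand": "TCL", "model": "55H7800A-UD",
     "coord": (1.5, 0.5, 1.0), "orientation": "living room / against wall"},
    {"name": "lamp", "brand": "TCL", "model": "TCLSZ-0447",
     "coord": (2.0, 1.5, 2.7), "orientation": "dining room / ceiling"},
    {"name": "air conditioner", "brand": "XXX", "model": "KFR-XXX",
     "coord": (1.0, 0.0, 2.2), "orientation": "living room / beside TV"},
]

def number_by_type(devs):
    """Assign ids like 'lamp-1' per device type and key the table by id."""
    counters = {}
    table = {}
    for d in devs:
        counters[d["name"]] = counters.get(d["name"], 0) + 1
        table[f"{d['name']}-{counters[d['name']]}"] = d
    return table

topology = number_by_type(devices)
print(sorted(topology))  # ['air conditioner-1', 'lamp-1', 'television-1']
```

Keying by a type-scoped number keeps ids stable even when two devices share a name, so position information alone can then pick between "lamp-1" and "lamp-2".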
102: Receive a voice control instruction from the user, the instruction comprising the device information, position information and action information of the device to be controlled.
Specifically, when the user wants a certain smart device to perform an operation by voice, the user issues a voice control instruction containing the device information, position information and action information of the device to be controlled. The device information and position information designate the device to be controlled; the action information tells the target device what operation to execute. The device information may be one or more of the device name, brand, model and the like: a single item (such as the device name) is easiest for the user to say, while a combination (such as name plus brand) further improves the accuracy of designation. The position information may be the absolute position of the device in the space and/or its position relative to the user's current position, where the absolute position may be spatial coordinates and/or orientation relative to fixed objects in the space. Note that the types of device information and position information in the voice control instruction must correspond to — or be included among — the types obtained for each smart device in step 101 above, so that matching can be performed to find the target device.
For example, if the device information obtained in step 101 consists of device name and brand, and the position information consists of orientation relative to fixed objects plus position relative to the user's current position, then the instruction may designate the device to be controlled by name and orientation alone, or by (name + brand) together with (orientation relative to fixed objects + position relative to the user's current position).
After the user issues the voice control instruction, the instruction receiving end receives it, and step 103 is executed.
103: Parse the voice control instruction and extract the device information, position information and action information of the device to be controlled.
After receiving the user's voice control instruction in step 102, the instruction receiving end parses it, extracts the device information, position information and action information of the device to be controlled, and then executes step 104.
For example, a user who wants to turn on the television in the living room issues the voice control instruction "turn on the television in the living room"; the receiving end receives and parses the instruction, extracting the device information "television", the position information "in the living room" and the action information "turn on".
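The extraction step in this example can be illustrated with a toy keyword matcher over an already-transcribed instruction. The word lists are assumptions made for the sketch; the patent does not specify a parsing technique, and a production system would use a proper natural-language-understanding pipeline rather than substring matching:

```python
# Toy sketch of step 103: extract device, position and action information
# from transcribed text. All vocabulary lists are illustrative assumptions.

KNOWN_DEVICES = ["television", "lamp", "air conditioner", "fan"]
KNOWN_LOCATIONS = ["living room", "dining room", "bedroom", "kitchen"]
KNOWN_ACTIONS = ["turn on", "turn off", "open", "close"]

def extract(text):
    """Return the first known device, location and action found in text."""
    text = text.lower()
    pick = lambda options: next((w for w in options if w in text), None)
    return {"device": pick(KNOWN_DEVICES),
            "location": pick(KNOWN_LOCATIONS),
            "action": pick(KNOWN_ACTIONS)}

print(extract("Turn on the television in the living room"))
```

A missing field comes back as `None`, which a caller could use to prompt the user to restate the instruction.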
104: Match the device information and position information of the device to be controlled against the acquired device information and position information of each smart device to find the target device.
With the device information, position information and action information extracted from the user's voice control instruction in step 103, the device information and position information of the device to be controlled are matched against those of each smart device obtained in step 101. If the information of some device among the acquired smart devices corresponds to, or includes, the extracted device information and position information, the matching succeeds and the target device is found; otherwise the matching fails and no target device is found. If the matching succeeds, step 105 is executed.
If a device-position relation topology has been established from the acquired device information and position information of each smart device, the device information and position information of the device to be controlled can likewise be matched against the entries of that topology to find the target device.
105: If the matching succeeds, the target device has been found; send the action information to the target device to control it to execute the corresponding action.
If some device whose information was acquired in step 101 matches — that is, corresponds to or includes — the device information and position information extracted in step 103, that device is the target device to be controlled, and the action information extracted from the voice control instruction in step 103 is sent to it to control it to execute the corresponding action.
If the device information and position information of the device to be controlled, extracted from the user's voice control instruction, do not match any device among the device information and corresponding position information of the intelligent devices acquired in step 101, the target device is not found; the user is prompted that the matching failed and the target device was not found, after which the control operation ends or the user issues a new voice control instruction. If a new voice instruction from the user is received, the above steps are repeated.
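The matching described in steps 103 to 105 can be sketched as a simple lookup over the acquired device information. The data structures, field names, and registry entries below are hypothetical illustrations; the patent only requires that the registered information correspond to, or include, the extracted information.

```python
def find_target(device_info, position_info, registry):
    """Return the registered device whose information corresponds to, or
    includes, the extracted device and position info; None if matching fails."""
    for device in registry:
        # registered name string "includes" the extracted device keyword
        name_ok = device_info in device["device_info"]
        # every extracted position descriptor must appear among the device's
        pos_ok = all(p in device["positions"] for p in position_info)
        if name_ok and pos_ok:
            return device
    return None

# Hypothetical registry in the spirit of the information acquired in step 101
registry = [
    {"id": 4, "device_info": "BrandA television X100",
     "positions": {"living room", "right hand side"}},
    {"id": "3-1", "device_info": "BrandB ceiling lamp",
     "positions": {"restaurant", "upper left"}},
]

find_target("television", ["right hand side"], registry)  # matches device 4
```

If `find_target` returns a device, step 105 would send the action information to it; if it returns `None`, the user is prompted that the matching failed.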
Referring to fig. 3, fig. 3 is a schematic diagram of a house layout in an application scenario of the smart home voice control method shown in fig. 1. As shown in fig. 3, consider a scenario in which user 1 has finished dinner in the restaurant at night and has walked to a position between the living room and the restaurant (the restaurant is on user 1's left hand side and the living room on the right hand side). The user wants to turn on the television 4 in the living room and turn off the lamp 3-1 in the restaurant, which can be done by voice control. Specifically, each intelligent device in the space is identified by a depth camera 2 mounted in the wall corner to the upper left of user 1, and the device information of each intelligent device, including the device name, brand and model, is acquired. Meanwhile, the depth camera 2 obtains each intelligent device's position relative to a fixed-position object in the space as well as its position relative to user 1's current position, yielding the position information of each intelligent device. A corresponding device-position relationship topological graph is then established from the acquired device information and position information of each intelligent device, as shown in table 2 below:
TABLE 2
User 1 first turns on the television 4 in the living room by issuing the voice control instruction "turn on the television on the right hand side". The instruction receiving end receives user 1's voice control instruction, parses it, and extracts the device information "television", the position information "right hand side" and the action information "turn on" of the device to be controlled. The extracted device information "television" and position information "right hand side" are then matched against the device information and position information of each intelligent device in table 2; they are found to match the information of device 4, so device 4 is determined to be the target device, and the action information "turn on" is sent to device 4 to control it to execute the corresponding turn-on action.
Next, user 1 turns off the lamp 3-1 in the restaurant by issuing the voice control instruction "turn off the lamp at the upper left, in the restaurant". The instruction receiving end receives user 1's voice control instruction, parses it, and extracts the device information "lamp", the position information "upper left" and "in the restaurant", and the action information "turn off" of the device to be controlled. This information is then matched against the device information and position information of each intelligent device in table 2; it matches the information of device 3-1, so device 3-1 is determined to be the target device, and the action information "turn off" is sent to device 3-1 to control it to execute the corresponding turn-off action.
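The extraction step for the two instructions above can be illustrated with simple keyword spotting. The patent does not specify a parsing technique, and the English wording of the commands varies with translation, so the vocabularies and command strings below are illustrative assumptions only.

```python
# Illustrative vocabularies; a real system would come from speech recognition
# and natural-language parsing, not fixed keyword lists.
DEVICE_WORDS = ["television", "lamp", "air conditioner"]
POSITION_WORDS = ["right hand side", "left hand side", "upper left",
                  "restaurant", "living room"]
ACTION_WORDS = ["turn on", "turn off", "open", "close"]

def parse(command):
    """Extract (device, positions, action) keywords from a command string."""
    found = lambda vocab: [w for w in vocab if w in command]
    return found(DEVICE_WORDS), found(POSITION_WORDS), found(ACTION_WORDS)

parse("turn on the television on the right hand side")
# -> (["television"], ["right hand side"], ["turn on"])
```

The point of the sketch is only the shape of the extracted triple (device information, position information, action information) that feeds the subsequent matching step.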
In the smart home voice control method of this embodiment, the target device is determined by combining the device information and the position information of the intelligent devices: the device information and position information of the device to be controlled, carried in the user's voice control instruction, are matched against the device information and position information of the intelligent devices acquired in advance, and the target device is found through this matching. This spares the user the trouble of remembering a large number of device names, locates the device with high accuracy, and is convenient and fast.
In the smart home voice control method described above, the action information is sent to the target device as soon as the target device is found by matching. In some cases, however, the user does not want the device to execute the instructed action immediately, but at some other time, for example after a delay or at a scheduled time. The invention therefore provides another embodiment of the smart home voice control method in which the execution time of the instructed action is controllable. Referring to fig. 4, fig. 4 is a schematic flow chart of another embodiment of the smart home voice control method of the present invention. As shown in fig. 4, step 401 and step 404 of this embodiment are the same as step 101 and step 104 of the smart home voice control method shown in fig. 1, respectively; this embodiment differs from the method shown in fig. 1 as follows:
step 402: receiving a voice control instruction of a user, wherein the voice control instruction comprises equipment information, position information, action information and action execution time of equipment to be controlled;
step 403: analyzing the voice control instruction, and extracting equipment information, position information, action information and action execution time of the equipment to be controlled;
step 405: if the matching succeeds, the target device is found; whether the current time has reached the action execution time is then monitored and judged in real time, and when the action execution time is reached, the action information is sent to the target device to control the target device to execute the corresponding action.
That is, besides the device information, position information and action information of the device to be controlled, the voice control instruction issued by the user also includes an action execution time. After receiving the user's voice control instruction, the instruction receiving end parses it and extracts the device information, position information and action information of the device to be controlled together with the action execution time in the instruction. If the device information and position information of the device to be controlled match the acquired device information and position information of one of the intelligent devices, the target device is found; whether the current time has reached the action execution time in the voice control instruction is then monitored and judged in real time, and when the action execution time is reached, the action information is sent to the target device to control it to execute the corresponding action, so that the action is executed on schedule as instructed by the user.
It should be noted that if no action execution time appears in the voice control instruction issued by the user, execution may default to immediate: once the target device is found by matching, the action information in the voice control instruction is sent to the target device at once to control it to execute the corresponding action.
For example, suppose the current time is 23:20 on August 10, 2017, and a user going to sleep wants to turn on the air conditioner in the room but have it turned off two hours later. The user issues the voice control instruction "turn on the air conditioner in the room". The instruction receiving end receives the user's instruction and, by parsing it, extracts the device information "air conditioner", the position information "in the room" and the action information "turn on" of the device to be controlled. The device information "air conditioner" and position information "in the room" are matched against the device information and position information of each intelligent device in the space; once the target device is found by matching, the action information "turn on" is sent to the target device, which executes the corresponding turn-on action on receiving it.
To turn the air conditioner off two hours later, the user issues another voice control instruction, "turn off the air conditioner in the room after 2 hours". The instruction receiving end receives the user's instruction and extracts the device information "air conditioner", the position information "in the room", the action information "turn off" and the action execution time "1:20 on August 11, 2017" of the device to be controlled. The device information "air conditioner" and position information "in the room" are then matched against the device information and position information of the intelligent devices acquired in advance. After the target device is found by matching, whether the current time has reached the action execution time of 1:20 on August 11, 2017 is monitored and judged in real time; when that time is reached, the action information "turn off" is sent to the target device, which executes the corresponding turn-off action on receiving it.
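The timed dispatch in step 405, monitoring the current time and sending the action information once the action execution time is reached, can be sketched as a polling loop. The poll interval and the callback interface are implementation assumptions, not part of the patent.

```python
import datetime as dt
import time

def execute_at(action_time, send_action):
    """Monitor the clock in (near) real time and dispatch the action
    once the execution time from the instruction has been reached."""
    while dt.datetime.now() < action_time:
        time.sleep(0.01)  # poll interval is an implementation choice
    send_action()

# For illustration, schedule a "turn off" 50 ms from now instead of 2 hours
fired = []
execute_at(dt.datetime.now() + dt.timedelta(milliseconds=50),
           lambda: fired.append("turn off"))
```

A production system would typically use a timer or scheduler rather than busy polling, but the behavior is the same: the action information is held back until the action execution time arrives.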
In the smart home voice control method of this embodiment, the target device is likewise determined by combining device information and position information: the device information and position information of the device to be controlled in the user's voice control instruction are matched against the device information and position information of the intelligent devices acquired in advance, which spares the user from remembering a large number of device names, locates the device accurately, and is convenient and fast. In addition, a monitoring function for the action execution time is added: when the current time is judged to have reached the action execution time in the user's voice control instruction, the action information is sent to the target device to control it to execute the corresponding action, so that the action can be executed on schedule as instructed by the user.
In the smart home voice control methods of the above embodiments, the device information and position information of each intelligent device are acquired in advance, and the device information and position information of the device to be controlled in the user's voice control instruction are matched against this acquired information to find the target device, enabling unified management and control of the intelligent devices in the space. The invention further provides another embodiment of the smart home voice control method in which each intelligent device itself judges whether it is the target device by matching the device information and position information of the device to be controlled in the user's voice control instruction against its own device information and position information. Referring to fig. 5, fig. 5 is a schematic flow chart of a smart home voice control method according to another embodiment of the present invention. As shown in fig. 5, the smart home voice control method of this embodiment includes the following steps:
501: the intelligent device obtains the device information and the position information corresponding to the intelligent device.
In step 501, the intelligent device itself obtains the device information and position information corresponding to it.
The device information may include one or more of a device name, a device brand and a device model. Specifically, the intelligent device may be identified by a depth camera or other equipment, after which its device information is obtained over the network and sent to the intelligent device; alternatively, the user may enter the device information of the intelligent device into the system manually in advance, and the intelligent device obtains the corresponding device information from the system.
The position information of the intelligent device includes its absolute position information in the space and/or its position relative to the user's current position, where the absolute position information may include spatial coordinate information and/or orientation information relative to a fixed-position object in the same space. The position information may be acquired by a depth camera configured on the intelligent device, or may be received from another device; that is, another device acquires the position information of the intelligent device and sends it to the intelligent device, which receives it as its corresponding position information.
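As one way to derive a device's position relative to the user, the user's monitored position and facing direction can be combined with each device's absolute coordinates; the sign of a 2-D cross product then distinguishes left from right. This particular geometry is an assumption for illustration — the document states only that the user's pose information and the devices' absolute positions are combined.

```python
import math

def relative_side(user_xy, facing_deg, device_xy):
    """Classify a device as on the user's left or right hand side, given
    the user's position, facing direction in degrees, and the device's
    absolute 2-D coordinates (a simplified sketch)."""
    ux, uy = user_xy
    dx, dy = device_xy[0] - ux, device_xy[1] - uy          # user -> device vector
    fx = math.cos(math.radians(facing_deg))                 # facing unit vector
    fy = math.sin(math.radians(facing_deg))
    cross = fx * dy - fy * dx                               # sign gives the side
    return "left hand side" if cross > 0 else "right hand side"
```

With the user at the origin facing along +x (0 degrees), a device at (1, -2) lies on the right hand side and a device at (1, 2) on the left hand side.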
502: receiving a voice control instruction of a user, wherein the voice control instruction includes the device information, position information and action information of the device to be controlled.
The intelligent device receives a voice control instruction from the user, the instruction including the device information, position information and action information of the device to be controlled. The types of device information and position information of the device to be controlled correspond to, or are included in, the types of device information and position information previously acquired by the intelligent device.
503: parsing the voice control instruction and extracting the device information, position information and action information of the device to be controlled.
After receiving the voice control instruction of the user in step 502, the smart device parses the instruction and extracts the device information, the location information, and the action information of the device to be controlled.
504: matching the extracted device information and position information of the device to be controlled with the obtained device information and position information, to judge whether the intelligent device is the target device.
After the user's voice control instruction is parsed in step 503, the extracted device information and position information of the device to be controlled are matched against the device information and position information of the intelligent device itself obtained in step 501, to judge whether the intelligent device is the target device. If the device information and position information obtained in step 501 correspond to, or include, the device information and position information of the device to be controlled extracted in step 503, the matching succeeds and the intelligent device is determined to be the target device; otherwise the matching fails and the intelligent device is determined not to be the target device. If the matching succeeds, the method proceeds to step 505.
505: if the matching succeeds, determining that the intelligent device is the target device and executing the action corresponding to the action information.
If the device information and position information of the intelligent device obtained in step 501 correspond to, or include, the device information and position information of the device to be controlled extracted in step 503, the intelligent device is determined to be the target device and then executes the action corresponding to the action information in the user's voice control instruction.
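Steps 501 to 505, in which each intelligent device decides for itself whether it is the target, can be sketched as below. The class layout and the substring/subset matching rule are hypothetical; the patent requires only that the device's own information correspond to, or include, the extracted information.

```python
class SmartDevice:
    """A device that holds its own info (step 501) and self-matches
    incoming instructions (steps 502-505)."""

    def __init__(self, device_info, positions, on_action):
        self.device_info = device_info      # e.g. "living room television"
        self.positions = set(positions)     # absolute and relative descriptors
        self.on_action = on_action          # callback that performs the action

    def handle(self, device_info, positions, action):
        """Execute the action only if this device's info includes the
        extracted device and position info; return whether it matched."""
        if device_info in self.device_info and set(positions) <= self.positions:
            self.on_action(action)
            return True
        return False

log = []
tv = SmartDevice("living room television",
                 {"living room", "right hand side"}, log.append)
tv.handle("television", ["right hand side"], "turn on")   # this device matches
tv.handle("lamp", ["restaurant"], "turn off")             # not the target
```

Every device in the space would receive the broadcast instruction and run the same check; only the matching device executes the action, which is what makes this embodiment work without a central controller.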
In the smart home voice control method of this embodiment, the intelligent device obtains its own device information and position information; after receiving the user's voice control instruction, it parses the instruction to extract the device information and position information of the device to be controlled, matches that information against its own device information and position information to judge whether it is the target device, and executes the corresponding instructed action when the matching determines that it is. By combining device information and position information to determine the target device, the method is convenient and fast, has high accuracy, and spares the user the trouble of remembering a large number of device names.
In another embodiment, a monitoring function for the action execution time may be added so that the time at which the instructed action executes is controllable. Specifically, if a user wants a certain intelligent device to execute an operation at a scheduled time, the user issues a voice control instruction that includes, besides the device information, position information and action information of the device to be controlled, an action execution time. After receiving the instruction, the instruction receiving end parses it and extracts the device information, position information and action information of the device to be controlled together with the action execution time. If the device information and position information of the device to be controlled match the device information and position information that the intelligent device has acquired for itself, the device is determined to be the target device; whether the current time has reached the action execution time in the voice control instruction is then monitored and judged in real time, and when that time is reached, the device executes the action corresponding to the action information in the instruction, so that the action executes on schedule as instructed by the user.
If the voice control instruction issued by the user includes no action execution time, execution may default to immediate: if the matching judges the intelligent device to be the target device, the action corresponding to the action information is executed at once.
The above method applies to an intelligent device; its logic can be expressed as a computer program and implemented by the intelligent device.
Regarding the hardware structure of the smart device, referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of the smart device of the present invention. As shown in fig. 6, the smart device 601 of this embodiment includes a processor 602, and the processor 602, when executing program data, implements the steps of the foregoing embodiments of the smart home voice control method.
In this embodiment, the processor 602 of the smart device 601 executes the program data, and can determine the target device by combining the device information and the location information of the smart device, which is convenient and fast, and has high accuracy, and can avoid the trouble that the user needs to remember a large number of device names.
When implemented in software and sold or used as an independent product, the computer program may be stored in a storage medium readable by an electronic device; accordingly, the present invention further provides a device with a storage function. Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of the device with a storage function of the present invention. The device with a storage function 701 stores program data 702, and the program data 702 can be executed by a processor to implement the steps of the above embodiments of the smart home voice control method. The processor may be the processor of the device 701 itself or the processor of another device. The device 701 with a storage function may be any device capable of carrying the program data 702, such as a USB flash disk, an optical disk, a piece of equipment or a server, without limitation.
When the program data 702 stored on the device 701 with a storage function of this embodiment is executed by a processor, the target device can be determined by combining the device information and the position information of the intelligent devices, which is convenient and fast, has high accuracy, and spares the user the trouble of remembering a large number of device names.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (8)
1. The intelligent household voice control method is characterized by comprising the following steps:
acquiring equipment information and corresponding position information of each intelligent equipment, wherein the position information comprises absolute position information of each intelligent equipment in space and relative position information of the intelligent equipment relative to the current position of a user;
the relative position information relative to the current position of the user is obtained in the following way:
monitoring and acquiring real-time position and posture information of a user in a space, then performing space division according to the acquired real-time position and posture information of the user, and acquiring relative position information of each intelligent device relative to the current position of the user under the space division according to absolute position information of each intelligent device; wherein the pose information comprises an orientation and limb characteristics of the user;
receiving a voice control instruction of a user, wherein the voice control instruction comprises equipment information, position information and action information of equipment to be controlled;
analyzing the voice control instruction, and extracting the equipment information, the position information and the action information of the equipment to be controlled;
matching the equipment information and the position information of the equipment to be controlled with the equipment information and the position information of each intelligent equipment to search target equipment;
if the matching is successful, the target equipment is found, and the action information is sent to the target equipment so as to control the target equipment to execute corresponding actions.
2. The method of claim 1, wherein the absolute position information comprises spatial coordinate information and/or orientation information of an object at a fixed position relative to the same space.
3. The method of claim 1, wherein the step of obtaining the device information and the corresponding location information of each smart device comprises monitoring and obtaining the device information and the corresponding location information of each smart device in real time.
4. The method according to claim 1, wherein after the step of obtaining the device information and the corresponding location information of each smart device, the method further comprises: establishing a corresponding device-position relation topological graph according to the device information and the position information of each intelligent device;
the step of matching the equipment information and the position information of the equipment to be controlled with the equipment information and the position information of each intelligent equipment to search for the target equipment specifically includes: matching the equipment information and the position information of the equipment to be controlled with the equipment information and the position information of each intelligent equipment in the equipment-position relation topological graph so as to search for the target equipment.
5. The method of any one of claims 1 to 4, further comprising:
the voice control instruction further comprises action execution time;
after the voice control instruction is analyzed, the extracted information also comprises the action execution time of the equipment to be controlled;
before the sending the action information to the target device, the method further includes: monitoring and judging whether the current time reaches the action execution time or not in real time; and if the action execution time is reached, sending the action information to the target equipment so as to control the target equipment to execute the corresponding action.
6. The intelligent household voice control method is characterized by comprising the following steps:
the method comprises the steps that intelligent equipment obtains corresponding equipment information and position information, wherein the position information comprises absolute position information of each intelligent equipment in space and relative position information of the intelligent equipment relative to the current position of a user;
the relative position information relative to the current position of the user is obtained in the following way:
monitoring and acquiring real-time position and posture information of a user in a space, then performing space division according to the acquired real-time position and posture information of the user, and acquiring relative position information of each intelligent device relative to the current position of the user under the space division according to absolute position information of each intelligent device; wherein the pose information comprises an orientation and limb characteristics of the user;
receiving a voice control instruction of a user, wherein the voice control instruction comprises equipment information, position information and action information of equipment to be controlled;
analyzing the voice control instruction, and extracting the equipment information, the position information and the action information of the equipment to be controlled;
matching the device information and the position information of the device to be controlled with the acquired device information and the acquired position information to judge whether the intelligent device is a target device;
and if the matching is successful, determining that the intelligent equipment is the target equipment, and executing the action corresponding to the action information.
7. An intelligent device, characterized in that the intelligent device comprises a processor which, when executing program data, is adapted to carry out the method according to any one of claims 1 to 6.
8. An apparatus having a storage function, wherein the apparatus has stored therein program data which, when executed by a processor, implements the method of any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710706333.XA CN107528753B (en) | 2017-08-16 | 2017-08-16 | Intelligent household voice control method, intelligent equipment and device with storage function |
PCT/CN2018/100662 WO2019034083A1 (en) | 2017-08-16 | 2018-08-15 | Voice control method for smart home, and smart device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710706333.XA CN107528753B (en) | 2017-08-16 | 2017-08-16 | Intelligent household voice control method, intelligent equipment and device with storage function |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107528753A CN107528753A (en) | 2017-12-29 |
CN107528753B true CN107528753B (en) | 2021-02-26 |
Family
ID=60681238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710706333.XA Active CN107528753B (en) | 2017-08-16 | 2017-08-16 | Intelligent household voice control method, intelligent equipment and device with storage function |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107528753B (en) |
WO (1) | WO2019034083A1 (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107528753B (en) * | 2017-08-16 | 2021-02-26 | 捷开通讯(深圳)有限公司 | Intelligent household voice control method, intelligent equipment and device with storage function |
CN108304497B (en) * | 2018-01-12 | 2020-06-30 | 深圳壹账通智能科技有限公司 | Terminal control method and device, computer equipment and storage medium |
CN108459510A (en) * | 2018-02-08 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | Control method, equipment, system and the computer-readable medium of intelligent appliance |
CN108538290A (en) * | 2018-04-06 | 2018-09-14 | 东莞市华睿电子科技有限公司 | A kind of intelligent home furnishing control method based on audio signal detection |
CN108735214A (en) * | 2018-05-30 | 2018-11-02 | 出门问问信息科技有限公司 | The sound control method and device of equipment |
JPWO2019239738A1 (en) * | 2018-06-12 | 2021-07-15 | ソニーグループ株式会社 | Information processing device, information processing method |
CN110619739A (en) * | 2018-06-20 | 2019-12-27 | 深圳市领芯者科技有限公司 | Bluetooth control method and device based on artificial intelligence and mobile equipment |
CN109347709B (en) * | 2018-10-26 | 2021-02-23 | 北京蓦然认知科技有限公司 | Intelligent equipment control method, device and system |
CN109979449A (en) * | 2019-02-15 | 2019-07-05 | 江门市汉的电气科技有限公司 | A kind of sound control method of Intelligent lamp, device, equipment and storage medium |
EP3709194A1 (en) | 2019-03-15 | 2020-09-16 | Spotify AB | Ensemble-based data comparison |
CN110278291B (en) * | 2019-03-19 | 2022-02-11 | 新华三技术有限公司 | Wireless device naming method, storage medium and system |
CN111833862B (en) * | 2019-04-19 | 2023-10-20 | 佛山市顺德区美的电热电器制造有限公司 | Control method of equipment, control equipment and storage medium |
CN112053683A (en) * | 2019-06-06 | 2020-12-08 | 阿里巴巴集团控股有限公司 | Voice instruction processing method, device and control system |
CN112413834B (en) * | 2019-08-20 | 2021-12-17 | 广东美的制冷设备有限公司 | Air conditioning system, air conditioning instruction detection method, control device and readable storage medium |
US11094319B2 (en) | 2019-08-30 | 2021-08-17 | Spotify Ab | Systems and methods for generating a cleaned version of ambient sound |
US10827028B1 (en) | 2019-09-05 | 2020-11-03 | Spotify Ab | Systems and methods for playing media content on a target device |
CN110708220A (en) * | 2019-09-27 | 2020-01-17 | 恒大智慧科技有限公司 | Intelligent household control method and system and computer readable storage medium |
CN110687815B (en) * | 2019-10-29 | 2023-07-14 | 北京小米智能科技有限公司 | Equipment control method, device, terminal equipment and storage medium |
CN112987580B (en) * | 2019-12-12 | 2022-10-11 | 华为技术有限公司 | Equipment control method and device, server and storage medium |
CN111243588A (en) * | 2020-01-13 | 2020-06-05 | 北京声智科技有限公司 | Method for controlling equipment, electronic equipment and computer readable storage medium |
US11328722B2 (en) | 2020-02-11 | 2022-05-10 | Spotify Ab | Systems and methods for generating a singular voice audio stream |
US11308959B2 (en) | 2020-02-11 | 2022-04-19 | Spotify Ab | Dynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices |
CN113823280A (en) * | 2020-06-19 | 2021-12-21 | 华为技术有限公司 | Intelligent device control method, electronic device and system |
CN112270924A (en) * | 2020-09-18 | 2021-01-26 | 青岛海尔空调器有限总公司 | Voice control method and device of air conditioner |
CN112468377B (en) * | 2020-10-23 | 2023-02-24 | 和美(深圳)信息技术股份有限公司 | Control method and system of intelligent voice equipment |
CN113014633B (en) * | 2021-02-20 | 2022-07-01 | 杭州云深科技有限公司 | Method and device for positioning preset equipment, computer equipment and storage medium |
CN113359501A (en) * | 2021-06-29 | 2021-09-07 | 前海沃乐家(深圳)智能生活科技有限公司 | Remote control system and method based on intelligent switch |
CN117413493A (en) * | 2021-07-14 | 2024-01-16 | 海信视像科技股份有限公司 | Control device, household electrical appliance and control method |
CN113608449B (en) * | 2021-08-18 | 2023-09-15 | 四川启睿克科技有限公司 | Speech equipment positioning system and automatic positioning method in smart home scene |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014190496A1 (en) * | 2013-05-28 | 2014-12-04 | Thomson Licensing | Method and system for identifying location associated with voice command to control home appliance |
CN105700389B (en) * | 2014-11-27 | 2020-08-11 | 青岛海尔智能技术研发有限公司 | Intelligent home natural language control method |
CN105629750A (en) * | 2015-10-29 | 2016-06-01 | 东莞酷派软件技术有限公司 | Smart home control method and system |
CN105700375A (en) * | 2016-03-24 | 2016-06-22 | 四川邮科通信技术有限公司 | Intelligent household control system |
CN105739320A (en) * | 2016-04-29 | 2016-07-06 | 四川邮科通信技术有限公司 | Smart home control method based on application scene |
CN106448658B (en) * | 2016-11-17 | 2019-09-20 | 海信集团有限公司 | The sound control method and intelligent domestic gateway of smart home device |
CN106847269A (en) * | 2017-01-20 | 2017-06-13 | 浙江小尤鱼智能技术有限公司 | The sound control method and device of a kind of intelligent domestic system |
CN106707788B (en) * | 2017-03-09 | 2019-05-28 | 上海电器科学研究院 | A kind of intelligent home voice control identifying system and method |
CN107528753B (en) * | 2017-08-16 | 2021-02-26 | 捷开通讯(深圳)有限公司 | Intelligent household voice control method, intelligent equipment and device with storage function |
- 2017-08-16 CN CN201710706333.XA patent/CN107528753B/en active Active
- 2018-08-15 WO PCT/CN2018/100662 patent/WO2019034083A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN107528753A (en) | 2017-12-29 |
WO2019034083A1 (en) | 2019-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107528753B (en) | Intelligent household voice control method, intelligent equipment and device with storage function | |
CN107703872B (en) | Terminal control method and device of household appliance and terminal | |
US20180048482A1 (en) | Control system and control processing method and apparatus | |
CN104852975B (en) | Household equipment calling method and device | |
CN105045140B (en) | The method and apparatus of intelligent control controlled plant | |
CN105471705B (en) | Intelligent control method, equipment and system based on instant messaging | |
CN110308660B (en) | Intelligent equipment control method and device | |
US20190302714A1 (en) | Systems and methods to operate controllable devices with gestures and/or noises | |
US20200257274A1 (en) | Method, Apparatus and System for Controlling Device | |
KR20190101862A (en) | System and method for providing customized connected device functionality and for operating a connected device via an alternate object | |
US10895863B2 (en) | Electronic device and method for controlling the same | |
US10948995B2 (en) | Method and system for supporting object control, and non-transitory computer-readable recording medium | |
US9984563B2 (en) | Method and device for controlling subordinate electronic device or supporting control of subordinate electronic device by learning IR signal | |
CN112053683A (en) | Voice instruction processing method, device and control system | |
EP3387821B1 (en) | Electronic device and method for controlling the same | |
CN112740640A (en) | System and method for disambiguation of internet of things devices | |
CN106227059A (en) | Intelligent home furnishing control method based on indoor threedimensional model and equipment | |
WO2019067642A1 (en) | Environment-based application presentation | |
CN105042789A (en) | Control method and system of intelligent air conditioner | |
CN111915870A (en) | Method and device for adding remote controller code value through voice, television and storage medium | |
CN113110095A (en) | HomeMap construction method and system based on sweeping robot | |
US20210174151A1 (en) | Response based on hierarchical models | |
CN114791704A (en) | Device control method, device control apparatus, electronic device, program, and medium | |
CN114063572B (en) | Non-perception intelligent device control method, electronic device and control system | |
CN108415572B (en) | Module control method and device applied to mobile terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||