WO2020244573A1 - Voice command processing method, device, and control system - Google Patents

Voice command processing method, device, and control system (一种语音指令的处理方法、设备及控制系统) Download PDF

Info

Publication number
WO2020244573A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
controlled
area
voice
object information
Prior art date
Application number
PCT/CN2020/094323
Other languages
English (en)
French (fr)
Inventor
林文彬
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Publication of WO2020244573A1

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/34 - Adaptation of a single recogniser for parallel processing, e.g. by use of multiple processors or cloud computing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command

Definitions

  • The present invention relates to the technical field of voice processing, and in particular to a voice command processing method, device, and control system.
  • Smart voice devices have appeared on the market, such as smart speakers and various smart electronic devices containing intelligent interaction modules (such as mobile devices and wearable electronic devices).
  • Smart voice devices can recognize voice data input by users through speech recognition technology and then provide users with personalized services.
  • The present invention provides a voice command processing method, device, and control system, in an effort to solve, or at least alleviate, at least one of the problems above.
  • A voice command processing method includes the steps of: recognizing the user's behavioral intention and control object information from a voice command; determining the device to be controlled based on the area in which the user is located and the control object information; and generating, based on the behavioral intention, a control instruction for the device to be controlled.
  • The method according to the present invention further includes the step of sending the control instruction to the device to be controlled, so that the device to be controlled performs the operation in the control instruction.
  • The method according to the present invention further includes the steps of: acquiring a monitoring image containing at least one device; generating at least one area in advance based on the monitoring image; and associating each device with at least one area.
  • The step of determining the device to be controlled based on the area in which the user is located and the control object information includes: determining the area in which the user is located; determining the devices associated with that area; and determining the device to be controlled from the determined devices based on the control object information.
  • The step of determining the area in which the user is located includes: acquiring the current monitoring image, which contains the user and at least one device; and determining the area in which the user is located from the monitoring image.
  • The step of determining the area in which the user is located from the monitoring image includes: detecting the user from the current monitoring image through human body detection; and determining the area in which the detected user is located.
  • The step of determining the device to be controlled from the determined devices based on the control object information further includes: selecting, based on the control object information, the device closest to the user from the determined devices as the device to be controlled.
  • The step of determining the device to be controlled from the determined devices based on the control object information further includes: extracting the detected user's predetermined posture; and determining the device to be controlled from the determined devices by combining the control object information and the predetermined posture.
  • The step of generating at least one area in advance based on the monitoring image includes: generating at least one area in advance based on the monitoring image, in combination with the indoor spatial layout and the locations of the devices.
  • The step of generating at least one area in advance based on the monitoring image includes: generating at least one area in advance based on the monitoring image and a user-defined area distribution.
  • A voice command processing method includes the steps of: recognizing control object information from a voice command; determining the device to be controlled based on the area in which the user is located and the control object information; and generating a control instruction for the device to be controlled.
  • A voice command processing method includes the steps of: receiving a voice command; determining, based on the voice command and a monitoring image, the user's behavioral intention and the device to be controlled in the monitoring image; and generating, according to the determined behavioral intention, a control instruction for the device to be controlled.
  • A voice command processing device includes: a first processing unit adapted to recognize the user's behavioral intention and control object information from a voice command; a second processing unit adapted to determine the device to be controlled based on the area in which the user is located and the control object information; and an instruction generation unit adapted to generate, based on the behavioral intention, a control instruction for the device to be controlled.
  • A voice command control system includes: a voice interaction device adapted to receive user voice commands; an image acquisition device adapted to collect monitoring images; at least one device; and the processing device described above, coupled respectively to the voice interaction device, the image acquisition device, and the at least one device, and adapted to determine, based on voice commands and monitoring images, the user's behavioral intention and the device to be controlled from among the at least one device, and to generate a control instruction for the device to be controlled so that the device to be controlled can perform the operation in the control instruction.
  • A computing device includes: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and include instructions for performing the method described above.
  • A readable storage medium stores program instructions; when the program instructions are read and executed by a computing device, the computing device performs the method described above.
  • By analyzing the voice command, the user's behavioral intention and control object information are recognized, and the device the user wants to control is then determined according to the control object information. More specifically, devices are associated with areas in the monitoring image, and the device the user wants to control is inferred from the monitoring image.
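To make the claimed flow concrete, the following is a minimal end-to-end sketch in Python. It is an illustration under assumptions, not the patented implementation: the ASR/NLP and human-detection stages are stubbed, and all names (DEVICE_AREAS, recognize, locate_user) are invented for this sketch.

```python
# Sketch of the pipeline: voice command -> (intention, control object)
# -> devices of the user's area -> control instruction.
# Recognition and detection are stubbed; the area/device table is hand-made.

DEVICE_AREAS = {  # area -> associated devices, prepared in advance
    "ROI1": ["living room air conditioner 403", "living room curtain 404"],
    "ROI2": ["living room lamp 401", "living room curtain 404"],
}

def recognize(command: str) -> tuple[str, str]:
    """Stub for ASR+NLP: split a command into (intention, control object)."""
    intent = "turn on" if command.startswith("turn on") else "turn off"
    return intent, command.replace(intent, "").strip()

def locate_user() -> str:
    """Stub for human body detection on the current monitoring image."""
    return "ROI1"

def handle(command: str) -> str:
    intent, obj = recognize(command)
    area = locate_user()
    # keep only devices of this area whose name mentions the control object
    candidates = [d for d in DEVICE_AREAS[area] if obj in d]
    if not candidates:
        raise LookupError(f"no {obj!r} associated with area {area}")
    return f"{intent}: {candidates[0]}"

print(handle("turn on air conditioner"))
# -> turn on: living room air conditioner 403
```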
  • FIG. 1 shows a schematic diagram of a scenario of a voice command control system 100 according to some embodiments of the present invention.
  • FIG. 2 shows a schematic diagram of a computing device 200 according to some embodiments of the present invention.
  • FIG. 3 shows a schematic flowchart of a voice command processing method 300 according to some embodiments of the present invention.
  • FIG. 4 shows a schematic diagram of a monitoring image according to one embodiment of the present invention.
  • FIG. 5 shows a schematic diagram of a monitoring image according to another embodiment of the present invention.
  • FIG. 6 shows a schematic diagram of a voice command processing device 140 according to some embodiments of the present invention.
  • FIG. 1 shows a schematic diagram of a scenario of a voice command control system 100 according to some embodiments of the present invention.
  • The system 100 includes a voice interaction device 110, an image acquisition device 120, at least one device 130, and a voice command processing device 140.
  • The system 100 shown in FIG. 1 is only an example.
  • In practice, the system 100 may include multiple voice interaction devices 110 and image acquisition devices 120.
  • For example, in a household scenario, one voice interaction device 110 and one image acquisition device 120 may be arranged in each room.
  • The present invention places no limit on the number of devices included in the system 100.
  • The voice interaction device 110 is a device with a voice interaction module; it can receive voice commands issued by a user and return corresponding responses, which may contain voice or non-voice information.
  • A typical voice interaction module includes a voice input unit such as a microphone, a voice output unit such as a speaker, and a processor.
  • The voice interaction module can be built into the voice interaction device 110, or used as an independent module in cooperation with the voice interaction device 110 (for example, communicating with the voice interaction device 110 via an API or by other means to invoke functions or application-interface services on the voice interaction device 110); the embodiments of the present invention place no limit on this.
  • The voice interaction device 110 may be, for example, a smart speaker with a voice interaction module, a smart robot, or another mobile device, without being limited thereto.
  • The image acquisition device 120 is used to monitor activity in the scene, which contains the user and the devices 130. In some embodiments, the image acquisition device 120 captures video images of the scene as monitoring images.
  • One application scenario of the system 100 is the home, in which case there may be more than one image acquisition device 120. In some embodiments, one image acquisition device 120 is arranged in each bedroom, living room, dining room, kitchen, balcony, or other space; when a space is relatively large (such as a living room), more than one image acquisition device 120 may even be arranged there.
  • The devices 130 may be, for example, various smart devices, such as mobile terminals and wearable devices, or some simple devices.
  • In a household scenario, a device 130 may be a smart TV, smart refrigerator, smart air conditioner, smart microwave oven, or smart curtain, or a simple household device such as a switch, as long as it can communicate with the voice command processing device 140 through a communication module.
  • The user can issue voice commands to the voice interaction device 110 to realize certain functions, such as browsing the Internet, playing songs, shopping, or checking the weather forecast; the devices 130 can also be controlled by voice commands, for example setting a smart air conditioner to a certain temperature, making the smart TV play a movie, switching smart lamps on and off, adjusting color temperature, or opening and closing smart curtains.
  • The voice interaction device 110, the image acquisition device 120, and the devices 130 described above are all coupled to the voice command processing device 140 via a network to communicate.
  • The voice interaction device 110, in the wake-up state, receives the user's voice command and transmits it to the processing device 140, so that the processing device 140, upon receiving the voice command, recognizes the user's behavioral intention and control object information.
  • The control object information includes information about any of the devices 130, such as device name, device category, or device identifier, without being limited thereto.
  • By recognizing the control object information, the processing device 140 can determine the device to be controlled to which the control object information points.
  • The voice interaction device 110 may also have speech recognition capability itself.
  • Upon receiving a user's voice command, it first recognizes the command, identifying the user's behavioral intention and control object information, and sends these recognition results to the processing device 140. For example, the user issues the voice command "turn on the air conditioner"; after the command is recognized, it is concluded that the user's behavioral intention is "turn on" and the control object information is "air conditioner".
  • The processing device 140 then obtains the monitoring image of that moment from the image acquisition device 120.
  • The processing device 140 may obtain the monitoring image captured at the moment the voice command is received, or the monitoring images covering that moment and a short period before it (for example, the five seconds before the voice command was received), without being limited thereto.
  • In some embodiments, the processing device 140 may obtain monitoring images from all of the image acquisition devices 120.
  • In other embodiments, the processing device 140 may store associations between voice interaction devices 110 and image acquisition devices 120 in advance. In this way, after receiving a voice command from a voice interaction device 110, the processing device 140 can obtain the monitoring image from the image acquisition device 120 associated with it. The embodiments of the present invention place no limit on this.
  • Based on the voice command and the monitoring image, the voice command processing device 140 processes the voice command, determines the user's behavioral intention and the device 130 to be controlled in the monitoring image, then generates a control instruction for the device 130 to be controlled according to the behavioral intention and the determined device, and sends the control instruction to that device so that it performs the operation in the control instruction (the specific process by which the processing device 140 handles voice commands is described below in connection with the method 300).
  • The voice command processing device 140 may be, for example, a cloud server physically located at one or more sites. It should be noted that the voice command processing device 140 can also be implemented as another electronic device connected to the voice interaction device 110 and the like via a network (for example, another computing device in the same IoT environment). When the voice interaction device 110 has sufficient storage capacity and computing power, the voice command processing device 140 may also be implemented as the voice interaction device 110 itself.
  • The image acquisition device 120 can also be arranged as a part of the voice interaction device 110, yielding a voice interaction device 110 that integrates voice interaction, image acquisition, and voice command processing. The embodiments of the present invention are not limited to any of these arrangements.
  • FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the present invention.
  • The computing device 200 typically includes a system memory 206 and one or more processors 204.
  • A memory bus 208 may be used for communication between the processors 204 and the system memory 206.
  • The processor 204 may be any type of processor, including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof.
  • The processor 204 may include one or more levels of cache, such as a level-1 cache 210 and a level-2 cache 212, a processor core 214, and registers 216.
  • An example processor core 214 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof.
  • An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
  • The system memory 206 may be any type of memory, including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM or flash memory), or any combination thereof.
  • The system memory 206 may include an operating system 220, one or more applications 222, and program data 224.
  • The applications 222 may be arranged to be executed on the operating system by the one or more processors 204 using the program data 224.
  • The computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., an output device 242, a peripheral interface 244, and a communication device 246) to the basic configuration 202 via the bus/interface controller 230.
  • An example output device 242 includes an image processing unit 248 and an audio processing unit 250, which can be configured to facilitate communication with various external devices such as displays or speakers via one or more A/V ports 252.
  • An example peripheral interface 244 may include a serial interface controller 254 and a parallel interface controller 256, which may be configured to facilitate communication, via one or more I/O ports 258, with input devices (such as keyboards, mice, pens, voice input devices, and touch input devices) or other peripherals (such as printers and scanners).
  • An example communication device 246 may include a network controller 260, which may be arranged to facilitate communication with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
  • A network communication link may be one example of a communication medium.
  • Communication media may typically be embodied as computer-readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery medium.
  • A "modulated data signal" is a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal.
  • Communication media may include wired media such as a wired network or a dedicated-line network, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media.
  • The term computer-readable media as used herein may include both storage media and communication media.
  • The computing device 200 may further include a storage device 232 for data storage.
  • The storage device 232 may include removable storage 236, non-removable storage 238, and a storage interface bus 234.
  • The storage device 232 can store data of the basic configuration 202 and the output device 242.
  • The computing device 200 can be implemented as a personal computer, including desktop and notebook configurations, or as a server, such as a file server, database server, application server, or web server. Of course, the computing device 200 can also be implemented as part of a small-form-factor portable (or mobile) electronic device. In embodiments according to the present invention, the computing device 200 is configured to perform the voice command processing method 300 according to the present invention.
  • The applications 222 of the computing device 200 contain a plurality of program instructions for performing the method 300 according to the present invention.
  • FIG. 3 shows a schematic flowchart of a voice command processing method 300 according to some embodiments of the present invention.
  • The method is suitable for execution in the voice command processing device 140. As shown in FIG. 3, the method 300 starts at step S310.
  • In step S310, the user's behavioral intention and control object information are recognized from the voice command.
  • The voice command processing device 140 recognizes the voice command through ASR (Automatic Speech Recognition) technology.
  • The voice command can first be rendered as text data, and word segmentation then performed on the text data to obtain a corresponding text representation (it should be noted that the voice command can also be represented in other ways; the embodiments of the present invention are not limited to text representations).
  • A typical ASR method can be, for example, a method based on vocal-tract models and speech knowledge, or a template-matching method, without being limited thereto.
  • The voice command processing device 140 then processes the text representation to understand the user's intention, finally obtaining a representation of that intention.
  • The processing device 140 may use NLP (Natural Language Processing) methods to understand the user's voice command and recognize the user's behavioral intention.
  • The user's behavioral intention usually corresponds to an actual operation, such as turning on, turning off, or playing.
  • The processing device 140 can further determine other parameters of the user's intention, such as the control object information, which records information about the device the user wants to control, so that the device 130 to be controlled can be determined from the control object information, that is, which device to turn on and which device to turn off.
  • When performing recognition via ASR, the processing device 140 may also carry out some preprocessing operations on the voice command, such as sampling, quantization, removing voice data that contains no speech content (such as silent segments), and framing and windowing the voice data.
  • The embodiments of the present invention do not expand on these operations here.
  • The embodiments of the present invention place few restrictions on which ASR or NLP algorithm is used to understand the user's intention from the voice command; any such algorithm, known now or in the future, can be combined with the embodiments of the present invention to realize the method 300.
  • As mentioned above, the voice interaction device 110 can also recognize the user's voice command itself and directly send the recognized behavioral intention and control object information to the voice command processing device 140.
  • The embodiments of the present invention place few restrictions on this.
  • In one embodiment according to the present invention, the user inputs the voice command "turn on the air conditioner".
  • After analysis, the processing device 140 recognizes that the user's behavioral intention is "turn on" and the control object information is "air conditioner".
  • If only one air conditioner is connected, the processing device 140 can directly generate a corresponding control instruction for that air conditioner, instructing it to be in the on state.
  • In a household scenario, however, there are usually several air conditioners, installed in the living room, dining room, bedroom, and study.
  • The processing device 140 then needs to further determine which air conditioner the user wants to turn on. Therefore, in the subsequent step S320, the device to be controlled is determined based on the area in which the user is located and the control object information.
  • When the control object information corresponds to more than one device, the device the user wants to control is determined according to the user's current location.
  • When a device pointed to by the control object information is within a certain range of the user's location, that device is considered the device to be controlled.
  • The user's location is determined from the monitoring images collected by the image acquisition device 120, and the devices around the user are then determined.
  • Specifically, the method 300 further includes the following three steps: acquiring a monitoring image containing at least one device; generating at least one area in advance based on the monitoring image; and associating each device in the monitoring image with at least one generated area.
  • FIG. 4 shows a schematic diagram of a monitoring image according to one embodiment of the present invention. As shown in FIG. 4, this monitoring image captures the living room and the dining room.
  • The image acquisition device 120 is arranged to the left of the dining room curtain, without being limited thereto.
  • The devices 130 in the living room and dining room are: living room lamp 401, TV set 402, living room air conditioner 403, living room curtain 404, dining room lamp 405, dining room air conditioner 406, and dining room curtain 407.
  • Based on the acquired monitoring image, at least one area is generated in advance.
  • At least one area can be generated in advance based on the monitoring image, in combination with the indoor spatial layout and the locations of the devices 130.
  • For example, the living room part of the monitoring image is taken as one area, and the dining room part is taken as another area; alternatively, the monitoring image can simply be split into a left and a right area.
  • At least one area can also be generated in advance based on the monitoring image and a user-defined area distribution.
  • Users can define areas according to their own living habits; for example, the central part of the living room is area 1, the central part of the dining room is area 2, and the remainder is area 3.
  • In FIG. 4, the monitoring image is divided into six areas, labeled ROI1, ROI2, ROI3, ROI4, ROI5, and ROI6.
  • An area may be rectangular, circular, or of any irregular shape; the embodiments of the present invention place no limit on the shape, size, or number of the areas.
  • Generally, if device A is in area R1, device A is associated with area R1.
  • The association can also be set according to the user's preference.
  • For example, when device B lies on the boundary between areas R1 and R2, the user can choose whether device B is associated with area R1 or area R2.
  • Preferably, each device is associated with one area.
  • In some special cases, a device can be associated with more than one area.
  • For example, when device C lies in both area R1 and area R2, device C can be associated with area R1 and area R2 at the same time.
  • When the system 100 contains multiple image acquisition devices 120, a corresponding set of areas can be generated for the monitoring image of each image acquisition device 120; this is not repeated here.
  • Step S320 is implemented through the following three steps.
  • The first step is to determine the area in which the user is located.
  • The current monitoring image, containing the user and at least one device, is acquired first.
  • The "current monitoring image" can be the monitoring image at the moment the user's voice command is received, or the monitoring images from a short period before the command was received; the embodiments of the present invention place few restrictions on this.
  • Through human body detection, the human body (that is, the user) is detected from the current monitoring image.
  • The area in which the detected user is located is then determined.
  • As shown in FIG. 4, the user is in area ROI1.
  • A traditional object recognition algorithm can be used to detect the human body in the monitoring image, as can a deep-learning-based algorithm or a motion-detection-based algorithm; the embodiments of the present invention place few restrictions on this.
  • The second step is to determine the devices associated with the user's area.
  • In FIG. 4, the devices associated with area ROI1 are the living room air conditioner 403 and the living room curtain 404.
  • The third step is to determine the device to be controlled from the determined devices based on the control object information.
  • When the voice command is "turn on the air conditioner", the control object information is "air conditioner"; combined with the user's area, the device to be controlled is determined to be the living room air conditioner 403.
  • When more than one of the determined devices corresponds to the control object information, the device closest to the user is selected as the device to be controlled. For example, when the voice command is "turn on the light", the control object information is "lamp"; if the devices associated with the area include multiple lamps, such as a desk lamp and a spotlight, the lamp closest to the user is selected as the device to be controlled.
  • The user's position in the monitoring image can be determined through human body detection, and the devices' positions in the monitoring image can be calibrated in advance, so that the device closest to the user can be determined from the position coordinates.
  • Alternatively, when more than one of the determined devices corresponds to the control object information, the device to be controlled is determined in the following manner: the user points at the device to be controlled while issuing the voice command, and this pointing gesture is taken as a predetermined posture.
  • The voice interaction device 110 transmits the voice command to the voice command processing device 140, which analyzes the user's behavioral intention and control object information.
  • The processing device 140 first obtains the corresponding monitoring image from the image acquisition device 120, detects at least one human body through human body detection, and determines at least one area from the at least one human body. On this basis, the detected user's predetermined posture is extracted; combining the control object information and the predetermined posture, the direction in which the user's finger points is determined, and the device corresponding to the control object information in that direction is determined as the device to be controlled.
  • The embodiments of the present invention aim to provide a device-matching solution through the foregoing implementations and place few restrictions on the specific image processing algorithm used.
  • The predetermined posture can also be set to another posture according to the user's habits; this is only an example, and the embodiments of the present invention place no limit on the predetermined posture.
  • FIG. 5 shows a schematic diagram of a monitoring image according to another embodiment of the present invention.
  • The monitoring image captures a bedroom.
  • The devices in the bedroom are: bedroom central chandelier 501, bedroom light strip 502, bedroom TV 503, bedroom air conditioner 504, bedroom curtain 505, and bedroom desk lamp 506.
  • The monitoring image is divided into three areas, denoted ROI7, ROI8, and ROI9.
  • The associations between the areas and the devices 130 are shown in Table 2.
  • The user issues the voice command "turn on the light" while pointing a finger in the direction of the desk lamp 506.
  • The processing device 140 first recognizes that the user's behavioral intention is "turn on" and the control object information is "lamp". Then, by analyzing the monitoring image, it detects the user and determines that the user is in area ROI8; at this point two devices correspond to the control object information: the bedroom light strip 502 and the bedroom desk lamp 506. Further, the user's gesture is extracted and its direction determined to be toward the desk lamp 506, so the device to be controlled is determined to be the bedroom desk lamp 506.
  • Besides the above, the following scenario may also arise: more than one user is detected in the monitoring image (which means more than one user area may be determined).
  • When multiple user areas are determined, the approaches above can be combined to finally determine the device to be controlled. For example, at least one device corresponding to the control object information is first determined from the multiple areas; then the distance between each such device and its corresponding user (i.e., the user in the area associated with the device) is computed, and the device with the smallest distance is selected as the device to be controlled. As another example, it is determined whether each detected user holds the predetermined posture; the area of the user with the predetermined posture is taken as the final area, and the device associated with that area is then selected as the device to be controlled.
  • In step S330, a control instruction for the device to be controlled is generated based on the behavioral intention.
  • When the voice command is "turn on the air conditioner" and the device to be controlled is the living room air conditioner 403, the control instruction generated by the processing device 140 can be "turn on living room air conditioner 403", where "turn on" is the instruction to be executed and "living room air conditioner 403" is the instruction's recipient, that is, the device to be controlled.
  • The processing device 140 sends the generated control instruction to the device to be controlled, which performs the operation according to the control instruction.
  • For example, the processing device 140 sends the control instruction to the living room air conditioner 403; after receiving it, the living room air conditioner 403 performs the turn-on operation in response to the user.
  • In other scenarios, the voice command input by the user may be more concise.
  • When a device's control states are simple, the voice command issued by the user may contain only the control object information.
  • For example, the user only needs to issue a voice command such as "light" or "TV", and the processing device 140 infers the user's behavioral intention from the current state of the device 130.
  • In step S310, the processing device 140 recognizes the control object information from the voice command. For example, if the user inputs the voice command "light", the processing device 140 recognizes that the control object information is "light".
  • The device to be controlled is determined based on the area in which the user is located and the control object information, and a control instruction for it is then generated, which is not repeated here. It should be understood that lamp control generally amounts to turning the light on or off, so the processing device 140 can determine the user's behavioral intention from the current state of the "light" (whether it is on or off).
  • The user may also have expressed an intention before issuing the voice command containing the control object information.
  • The user may express the intention in advance by means such as voice or gesture, without being limited thereto. For example, the user first issues the voice command "the bedroom is dark" and then the voice command "light". The processing device 140 recognizes from the received voice command that the control object information is "light" and, combined with the previous voice command, infers that the user's behavioral intention is "turn on the light".
  • Following the earlier descriptions of steps S320 and S330, the device to be controlled is determined (that is, which light the user wants to turn on) based on the area in which the user is located and the control object information, and a control instruction for the device to be controlled is then generated; this is not repeated here.
  • Devices are associated with areas in the monitoring image, and the device the user wants to control is determined automatically by analyzing the user's voice command and the current monitoring image.
  • According to the solution of the present invention, when a user wants to control a device by voice, there is no need to append the device's location every time (for example, "turn on the living room air conditioner", "turn on the master bedroom air conditioner", "turn on the study air conditioner"); the user only needs to say directly that a certain device should be turned on or off, which greatly improves the user experience.
  • FIG. 6 shows a schematic diagram of a voice command processing device 140 according to some embodiments of the present invention.
  • The voice command processing device 140 includes a first processing unit 142, a second processing unit 144, and an instruction generation unit 146, coupled to one another.
  • The first processing unit 142 recognizes the user's behavioral intention and control object information from the voice command.
  • The second processing unit 144 determines the device to be controlled based on the area in which the user is located and the control object information.
  • The instruction generation unit 146 generates a control instruction for the device to be controlled based on the behavioral intention.
  • The various techniques described here may be implemented in conjunction with hardware or software, or a combination of them. Thus, the method and apparatus of the present invention, or certain aspects or portions of them, may take the form of program code (i.e., instructions) embedded in tangible media such as removable hard disks, USB drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into a machine such as a computer and executed by it, the machine becomes an apparatus for practicing the present invention.
  • In the case of program code executing on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • The memory is configured to store the program code; the processor is configured to perform the method of the present invention according to the instructions in the program code stored in the memory.
  • By way of example and not limitation, readable media comprise readable storage media and communication media.
  • Readable storage media store information such as computer-readable instructions, data structures, program modules, or other data.
  • Communication media generally embody computer-readable instructions, data structures, program modules, or other data in modulated data signals such as carrier waves or other transport mechanisms, and include any information delivery media. Combinations of any of the above are also included within the scope of readable media.
  • The algorithms and displays provided here are not inherently related to any particular computer, virtual system, or other apparatus.
  • Various general-purpose systems may also be used with the examples of the present invention; the structure required to construct such systems is apparent from the description above.
  • The present invention is not directed to any particular programming language. It should be understood that the contents of the present invention described herein can be implemented using a variety of programming languages, and the above descriptions of specific languages are intended to disclose the best mode of the present invention.
  • The modules, units, or components of the devices in the examples disclosed herein can be arranged in a device as described in the embodiments, or alternatively located in one or more devices different from the devices in the examples.
  • The modules in the foregoing examples can be combined into one module or further divided into multiple sub-modules.
  • The modules, units, or components in the embodiments can be combined into one module, unit, or component, and can furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
  • Some of the embodiments are described herein as a method or a combination of method elements that can be implemented by a processor of a computer system or by other means of carrying out the described functions. Thus, a processor with the necessary instructions for implementing such a method or method element forms a means for implementing the method or method element.
  • An element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.

Abstract

A voice command processing method, device, and control system. The method includes the steps of: recognizing a user's behavioral intention and control object information from a voice command (S310); determining a device to be controlled based on the area in which the user is located and the control object information (S320); and generating, based on the behavioral intention, a control instruction for the device to be controlled (S330).

Description

Voice command processing method, device, and control system
This application claims priority to Chinese patent application No. 201910492557.4, filed on June 6, 2019 and entitled "一种语音指令的处理方法、设备及控制系统" (Voice command processing method, device, and control system), the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of voice processing, and in particular to a voice command processing method, device, and control system.
Background
Over the past decade or so, the Internet has steadily deepened its reach into every area of daily life; people can conveniently shop, socialize, entertain themselves, and manage their finances online. The Internet and smart devices have permeated every aspect of people's lives.
A number of smart voice devices have appeared on the market, such as smart speakers and various smart electronic devices containing intelligent interaction modules (such as mobile devices and wearable electronic devices). In some usage scenarios, a smart voice device can recognize voice data input by a user through speech recognition technology and then provide the user with personalized services. However, given only voice information, the user intentions a smart voice device can understand remain limited.
Accordingly, a speech recognition solution is needed that improves the efficiency of speech recognition and provides users with a better interactive experience.
Summary of the Invention
To this end, the present invention provides a voice command processing method, device, and control system, in an effort to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the present invention, a voice command processing method is provided, including the steps of: recognizing a user's behavioral intention and control object information from a voice command; determining a device to be controlled based on the area in which the user is located and the control object information; and generating, based on the behavioral intention, a control instruction for the device to be controlled.
Optionally, the method according to the present invention further includes the step of: sending the control instruction to the device to be controlled, so that the device to be controlled performs the operation in the control instruction.
Optionally, the method according to the present invention further includes the steps of: acquiring a monitoring image containing at least one device; generating at least one area in advance based on the monitoring image; and associating each device with at least one area.
Optionally, in the method according to the present invention, the step of determining the device to be controlled based on the area in which the user is located and the control object information includes: determining the area in which the user is located; determining the devices associated with the area in which the user is located; and determining the device to be controlled from the determined devices based on the control object information.
Optionally, in the method according to the present invention, the step of determining the area in which the user is located includes: acquiring a current monitoring image containing the user and at least one device; and determining the area in which the user is located from the monitoring image.
Optionally, in the method according to the present invention, the step of determining the area in which the user is located from the monitoring image includes: detecting the user from the current monitoring image through human body detection; and determining the area in which the detected user is located.
Optionally, in the method according to the present invention, the step of determining the device to be controlled from the determined devices based on the control object information further includes: selecting, based on the control object information, the device closest to the user from the determined devices as the device to be controlled.
Optionally, in the method according to the present invention, the step of determining the device to be controlled from the determined devices based on the control object information further includes: extracting a predetermined posture of the detected user; and determining the device to be controlled from the determined devices by combining the control object information and the predetermined posture.
Optionally, in the method according to the present invention, the step of generating at least one area in advance based on the monitoring image includes: generating at least one area in advance based on the monitoring image, in combination with the indoor spatial layout and the locations of the devices.
Optionally, in the method according to the present invention, the step of generating at least one area in advance based on the monitoring image includes: generating at least one area in advance based on the monitoring image and a user-defined area distribution.
According to one aspect of the present invention, a voice command processing method is also provided, including the steps of: recognizing control object information from a voice command; determining a device to be controlled based on the area in which the user is located and the control object information; and generating a control instruction for the device to be controlled.
According to one aspect of the present invention, a voice command processing method is also provided, including the steps of: receiving a voice command; determining, based on the voice command and a monitoring image, the user's behavioral intention and the device to be controlled in the monitoring image; and generating, according to the determined behavioral intention, a control instruction for the device to be controlled.
According to another aspect of the present invention, a voice command processing device is provided, including: a first processing unit adapted to recognize a user's behavioral intention and control object information from a voice command; a second processing unit adapted to determine a device to be controlled based on the area in which the user is located and the control object information; and an instruction generation unit adapted to generate, based on the behavioral intention, a control instruction for the device to be controlled.
According to another aspect of the present invention, a voice command control system is also provided, including: a voice interaction device adapted to receive a user's voice commands; an image acquisition device adapted to collect monitoring images; at least one device; and the processing device described above, coupled respectively to the voice interaction device, the image acquisition device, and the at least one device, and adapted to determine, based on a voice command and a monitoring image, the user's behavioral intention and the device to be controlled from among the at least one device, and to generate a control instruction for the device to be controlled so that the device to be controlled performs the operation in the control instruction.
According to another aspect of the present invention, a computing device is also provided, including: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and include instructions for performing the method described above.
According to another aspect of the present invention, a readable storage medium storing program instructions is provided; when the program instructions are read and executed by a computing device, they cause the computing device to perform the method described above.
According to the solution of the present invention, the user's behavioral intention and control object information are recognized by analyzing the voice command, and the device the user wants to control is then determined according to the control object information. More specifically, devices are associated with areas in the monitoring image, and the device the user wants to control is inferred from the monitoring image.
In today's scenarios, where household devices (especially smart devices) come in ever more varieties and quantities, the solution of the present invention means that when a user wants to control a device by voice, there is no need to append the device's location every time (for example, "turn on the living room air conditioner", "turn on the master bedroom air conditioner", "turn on the study air conditioner"); the user only needs to say directly that a certain device should be turned on or off, which greatly improves the user experience.
The above description is merely an overview of the technical solution of the present invention. So that the technical means of the present invention may be understood more clearly and implemented according to the contents of this specification, and so that the above and other objects, features, and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
To achieve the above and related objects, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, the same reference numerals generally refer to the same components or elements.
FIG. 1 shows a schematic diagram of a scenario of a voice command control system 100 according to some embodiments of the present invention;
FIG. 2 shows a schematic diagram of a computing device 200 according to some embodiments of the present invention;
FIG. 3 shows a schematic flowchart of a voice command processing method 300 according to some embodiments of the present invention;
FIG. 4 shows a schematic diagram of a monitoring image according to one embodiment of the present invention;
FIG. 5 shows a schematic diagram of a monitoring image according to another embodiment of the present invention; and
FIG. 6 shows a schematic diagram of a voice command processing device 140 according to some embodiments of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
FIG. 1 shows a schematic diagram of a scenario of a voice command control system 100 according to some embodiments of the present invention. As shown in FIG. 1, the system 100 includes a voice interaction device 110, an image acquisition device 120, at least one device 130, and a voice command processing device 140. It should be noted that the system 100 shown in FIG. 1 is only an example; those skilled in the art will understand that, in practice, the system 100 may include multiple voice interaction devices 110 and image acquisition devices 120. For example, in a household scenario, one voice interaction device 110 and one image acquisition device 120 may be arranged in each room. The present invention places no limit on the number of any of the devices included in the system 100.
The voice interaction device 110 is a device with a voice interaction module; it can receive voice commands issued by a user and can also return corresponding responses to the user, where a response may contain voice or non-voice information. A typical voice interaction module includes a voice input unit such as a microphone, a voice output unit such as a speaker, and a processor. The voice interaction module may be built into the voice interaction device 110, or may be used as an independent module in cooperation with the voice interaction device 110 (for example, communicating with the voice interaction device 110 via an API or by other means to invoke functions or application-interface services on the voice interaction device 110); the embodiments of the present invention place no limit on this. The voice interaction device 110 may be, for example, a smart speaker with a voice interaction module, a smart robot, or another mobile device, without being limited thereto.
The image acquisition device 120 is used to monitor activity in a scene that contains the user and the devices 130. In some embodiments, the image acquisition device 120 captures video images of the scene as monitoring images. One application scenario of the system 100 is the home, in which case there may be more than one image acquisition device 120. In some embodiments, one image acquisition device 120 is arranged in each bedroom, living room, dining room, kitchen, balcony, or other space; when a space is relatively large (such as a living room), more than one image acquisition device 120 may even be arranged there.
The devices 130 may be, for example, various smart devices, such as mobile terminals and wearable devices, or some simple devices. For example, in a household scenario, a device 130 may be a smart TV, smart refrigerator, smart air conditioner, smart microwave oven, or smart curtain, or a simple household device such as a switch, as long as it can communicate with the voice command processing device 140 through a communication module.
According to some embodiments, the user can issue voice commands to the voice interaction device 110 to realize certain functions, such as browsing the Internet, requesting songs, shopping, or checking the weather forecast; the user can also control the devices 130 by voice commands, such as setting a smart air conditioner to a certain temperature, making a smart TV play a movie, switching smart lamps on and off, adjusting their color temperature, or opening and closing smart curtains.
The voice interaction device 110, the image acquisition device 120, and the devices 130 described above are all coupled to the voice command processing device 140 via a network to communicate.
According to an embodiment of the present invention, the voice interaction device 110, in the wake-up state, receives the user's voice command and transmits the voice command to the processing device 140, so that the processing device 140, upon receiving the voice command, recognizes the user's behavioral intention and control object information. The control object information contains information about any of the devices 130, such as device name, device category, or device identifier, without being limited thereto. By recognizing the control object information, the processing device 140 can determine the device to be controlled to which the control object information points.
Of course, the voice interaction device 110 may also have speech recognition capability itself: upon receiving the user's voice command, it first recognizes the command, identifying the user's behavioral intention and control object information, and sends these recognition results to the processing device 140. For example, the user issues the voice command "turn on the air conditioner"; after the command is recognized, it is concluded that the user's behavioral intention is "turn on" and the control object information is "air conditioner".
The processing device 140 then obtains the monitoring image of that moment from the image acquisition device 120. According to embodiments of the present invention, the processing device 140 may obtain the monitoring image captured at the moment the voice command is received, or the monitoring images covering that moment and a short period before it (for example, the five seconds before the voice command was received), without being limited thereto. In some embodiments, the processing device 140 may obtain monitoring images from all of the image acquisition devices 120. In other embodiments, the processing device 140 may store associations between voice interaction devices 110 and image acquisition devices 120 in advance; in this way, after receiving a voice command from a voice interaction device 110, the processing device 140 can obtain the monitoring image from the image acquisition device 120 associated with it. The embodiments of the present invention place no limit on either approach.
In this way, based on the voice command and the monitoring image, the voice command processing device 140 processes the voice command and determines the user's behavioral intention and the device 130 to be controlled in the monitoring image; it then generates a control instruction for the device 130 to be controlled according to the behavioral intention and the determined device, and sends the control instruction to that device so that it performs the operation in the control instruction (the specific process by which the voice command processing device 140 processes voice commands is described in detail below in connection with the method 300).
In one embodiment, the voice command processing device 140 may be, for example, a cloud server physically located at one or more sites. It should be noted that the voice command processing device 140 may also be implemented as another electronic device connected to the voice interaction device 110 and the like via a network (for example, another computing device in the same IoT environment). When the voice interaction device 110 has sufficient storage capacity and computing power, the voice command processing device 140 may also be implemented as the voice interaction device 110 itself. Furthermore, the image acquisition device 120 may be arranged as a part of the voice interaction device 110, yielding a voice interaction device 110 that integrates voice interaction, image acquisition, and voice command processing. The embodiments of the present invention are not limited to any of these arrangements.
According to embodiments of the present invention, the voice interaction device 110, the image acquisition device 120, the devices 130, and the voice command processing device 140 in the system 100 may all be implemented by a computing device 200 as described below. FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the present invention.
As shown in FIG. 2, in the basic configuration 202, the computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processors 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level-1 cache 210 and a level-2 cache 212, a processor core 214, and registers 216. An example processor core 214 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, the system memory 206 may be any type of memory, including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM or flash memory), or any combination thereof. The system memory 206 may include an operating system 220, one or more applications 222, and program data 224. In some implementations, the applications 222 may be arranged to be executed on the operating system by the one or more processors 204 using the program data 224.
The computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (for example, an output device 242, a peripheral interface 244, and a communication device 246) to the basic configuration 202 via a bus/interface controller 230. An example output device 242 includes an image processing unit 248 and an audio processing unit 250, which may be configured to facilitate communication with various external devices such as displays or speakers via one or more A/V ports 252. An example peripheral interface 244 may include a serial interface controller 254 and a parallel interface controller 256, which may be configured to facilitate communication, via one or more I/O ports 258, with external devices such as input devices (for example, keyboards, mice, pens, voice input devices, and touch input devices) or other peripherals (for example, printers and scanners). An example communication device 246 may include a network controller 260, which may be arranged to facilitate communication with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied as computer-readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" is a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or a dedicated-line network, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
The computing device 200 may further include a storage device 232 for data storage, which may include removable storage 236, non-removable storage 238, and a storage interface bus 234. The storage device 232 can store data of the basic configuration 202 and the output device 242.
The computing device 200 may be implemented as a personal computer, including desktop and notebook configurations, or as a server, such as a file server, database server, application server, or web server. Of course, the computing device 200 may also be implemented as part of a small-form-factor portable (or mobile) electronic device. In embodiments according to the present invention, the computing device 200 is configured to perform the voice command processing method 300 according to the present invention. The applications 222 of the computing device 200 contain a plurality of program instructions for performing the method 300 according to the present invention.
FIG. 3 shows a schematic flowchart of a voice command processing method 300 according to some embodiments of the present invention. The method is suitable for execution in the voice command processing device 140. As shown in FIG. 3, the method 300 starts at step S310.
In step S310, the user's behavioral intention and control object information are recognized from the voice command.
In some embodiments, the voice command processing device 140 recognizes the voice command through ASR (Automatic Speech Recognition) technology. For example, the voice command may first be rendered as text data, and word segmentation then performed on the text data to obtain a corresponding text representation (it should be noted that the voice command may also be represented in other ways; the embodiments of the present invention are not limited to text representations). Typical ASR methods include, for example, methods based on vocal-tract models and speech knowledge and template-matching methods, without being limited thereto. The voice command processing device 140 then processes the text representation to understand the user's intention, finally obtaining a representation of that intention. In some embodiments, the processing device 140 may use NLP (Natural Language Processing) methods to understand the user's voice command and recognize the user's behavioral intention; a behavioral intention usually corresponds to an actual operation, such as turning on, turning off, or playing. At the same time, the processing device 140 may further determine other parameters of the user's intention, such as the control object information, which records information about the device the user wants to control, so that the device 130 to be controlled can be determined from the control object information, that is, which device to turn on and which device to turn off.
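As an illustration of this step only, here is a minimal sketch of splitting an already-transcribed command into a behavioral intention and control object information using a small keyword table. It stands in for the ASR and NLP algorithms, which the text deliberately leaves open; every identifier in it is an assumption of this sketch, not part of the patent.

```python
# Toy intention/object extraction over an already-transcribed command.
# A real system would use full ASR plus NLP; the patent leaves both open.
INTENT_WORDS = {"turn on": "OPEN", "turn off": "CLOSE", "play": "PLAY"}

def parse_command(text: str) -> tuple[str | None, str]:
    """Return (behavioral_intention, control_object_information)."""
    for phrase, intent in INTENT_WORDS.items():
        if text.startswith(phrase):
            # the remainder of the utterance names the control object
            return intent, text[len(phrase):].strip()
    # no action word: the command carries only control object information
    return None, text.strip()

assert parse_command("turn on the air conditioner") == ("OPEN", "the air conditioner")
assert parse_command("light") == (None, "light")
```

The `None` branch matches the more concise commands discussed later, where the command names only the object and the intention is inferred from device state.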
In addition, when performing recognition via ASR, the processing device 140 may also carry out some preprocessing operations on the voice command, such as sampling, quantization, removing voice data that contains no speech content (for example, silent segments), and framing and windowing the voice data. The embodiments of the present invention do not expand on these operations here.
It should be noted that the embodiments of the present invention place few restrictions on which ASR or NLP algorithm is used to understand the user's intention from the voice command; any such algorithm, known now or knowable in the future, can be combined with the embodiments of the present invention to realize the method 300 of the present invention.
As mentioned above, when the voice interaction device 110 has sufficient computing power, the voice interaction device 110 may itself recognize the user's voice command and send the recognized behavioral intention and control object information directly to the voice command processing device 140. The embodiments of the present invention place few restrictions on this.
In one embodiment according to the present invention, the user inputs the voice command "turn on the air conditioner"; after analysis, the processing device 140 recognizes that the user's behavioral intention is "turn on" and the control object information is "air conditioner". If only one air conditioner is connected to the processing device 140, the processing device 140 can directly generate a corresponding control instruction for that air conditioner, instructing it to be in the on state. In a household scenario, however, there are usually several air conditioners (installed in the living room, dining room, bedroom, and study), in which case the processing device 140 needs to further determine which air conditioner the user wants to turn on. Therefore, in the subsequent step S320, the device to be controlled is determined based on the area in which the user is located and the control object information.
According to one embodiment, when the control object information corresponds to more than one device 130, the device the user wants to control is determined according to the user's current location. Preferably, when a device pointed to by the control object information is within a certain range of the user's location, that device is considered the device to be controlled. According to embodiments of the present invention, the user's location is determined with the help of the monitoring images collected by the image acquisition device 120, and the devices around the user are determined from it. Specifically, the method 300 further includes the following three steps.
1) Acquire a monitoring image, the monitoring image containing at least one device.
FIG. 4 shows a schematic diagram of a monitoring image according to one embodiment of the present invention. As shown in FIG. 4, this monitoring image captures the living room and the dining room. The image acquisition device 120 is arranged to the left of the dining room curtain, without being limited thereto. Taking an ordinary household scenario as an example, the devices 130 in the living room and dining room are: living room lamp 401, TV set 402, living room air conditioner 403, living room curtain 404, dining room lamp 405, dining room air conditioner 406, and dining room curtain 407.
2) Based on the acquired monitoring image, generate at least one area in advance.
According to one embodiment, at least one area is generated in advance based on the monitoring image, in combination with the indoor spatial layout and the locations of the devices 130. For example, according to the indoor spatial layout, the living room part of the monitoring image is taken as one area and the dining room part as another. Alternatively, the monitoring image may be split into a left and a right area. The locations of the devices may also be taken into account when dividing the monitoring image into multiple areas.
According to another embodiment, at least one area is generated in advance based on the monitoring image and a user-defined area distribution. Users can define areas according to their own living habits; for example, the central part of the living room is area 1, the central part of the dining room is area 2, and the remainder is area 3.
As shown in FIG. 4, the monitoring image is divided into six areas, labeled ROI1, ROI2, ROI3, ROI4, ROI5, and ROI6. It should be noted that an area may be rectangular, circular, or of any irregular shape; the embodiments of the present invention place no limit on the shape, size, or number of the areas.
3) Associate each device in the monitoring image with at least one of the generated areas.
Generally, if device A is in area R1, device A is associated with area R1. Of course, the association can also be set according to the user's preference; for example, when device B lies on the boundary between areas R1 and R2, the user can choose whether device B is associated with area R1 or area R2. Preferably, each device is associated with one area. In some special cases, a device may also be associated with more than one area; for example, when device C lies in both area R1 and area R2, device C can be associated with both areas at the same time.
Table 1 shows, by way of example, the associations between the devices and the areas in FIG. 4.
Table 1: Example associations between devices and areas
Area | Devices
ROI1 | living room air conditioner 403, living room curtain 404
ROI2 | living room lamp 401, living room curtain 404
ROI3 | TV set 402, living room curtain 404
ROI4 | (none)
ROI5 | dining room lamp 405, dining room curtain 407
ROI6 | dining room air conditioner 406
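Encoded as data, the association of Table 1 could look like the following sketch; the names and the structure are assumptions of this illustration, since the patent does not prescribe a representation. The derived reverse index shows that one device may be associated with several areas, as the living room curtain 404 is here.

```python
# Table 1 as data: each ROI maps to the devices associated with it.
AREA_TO_DEVICES = {
    "ROI1": ["living room air conditioner 403", "living room curtain 404"],
    "ROI2": ["living room lamp 401", "living room curtain 404"],
    "ROI3": ["TV set 402", "living room curtain 404"],
    "ROI4": [],
    "ROI5": ["dining room lamp 405", "dining room curtain 407"],
    "ROI6": ["dining room air conditioner 406"],
}

# Derived reverse index: device -> areas it is associated with.
DEVICE_TO_AREAS: dict[str, list[str]] = {}
for area, devices in AREA_TO_DEVICES.items():
    for device in devices:
        DEVICE_TO_AREAS.setdefault(device, []).append(area)

print(DEVICE_TO_AREAS["living room curtain 404"])  # ['ROI1', 'ROI2', 'ROI3']
```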
According to embodiments of the present invention, when the system 100 contains multiple image acquisition devices 120, corresponding areas can be generated separately for each image acquisition device's monitoring image. This is not repeated here one by one.
After the areas have been generated and each device has been associated with an area, step S320 is implemented, according to one embodiment of the present invention, in the following three steps.
First, determine the area in which the user is located.
According to one embodiment, the current monitoring image, containing the user and at least one device, is acquired first. As mentioned above, the "current monitoring image" may be the monitoring image at the moment the user's voice command is received, or the monitoring images from a short period before the user's voice command was received; the embodiments of the present invention place few restrictions on this.
Next, the area in which the user is located is determined from the monitoring image. In one embodiment, a human body (that is, the user) is detected from the current monitoring image through human body detection, and the area in which the detected user is located is then determined. As shown in FIG. 4, the user is in area ROI1. It should be noted that a traditional object recognition algorithm can be used to detect the human body in the monitoring image, as can a deep-learning-based algorithm or a motion-detection-based algorithm; the embodiments of the present invention place few restrictions on this.
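As a sketch of this first step, assuming rectangular areas and a human detector that returns a bounding box (both assumptions of this illustration, since the patent allows areas and detectors of any kind):

```python
# Determine the user's area from the center of the bounding box returned
# by a human body detector (stubbed here as a fixed box).
ROIS = {
    "ROI1": (0, 0, 400, 300),    # (x_min, y_min, x_max, y_max), pixels
    "ROI2": (400, 0, 800, 300),
}

def area_of(bbox: tuple[int, int, int, int]) -> str | None:
    """Return the ROI containing the center of a detected person's box."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    for name, (x0, y0, x1, y1) in ROIS.items():
        if x0 <= cx < x1 and y0 <= cy < y1:
            return name
    return None  # the user stands outside every predefined area

print(area_of((100, 120, 180, 290)))  # center (140.0, 205.0) -> ROI1
```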
Second, determine the devices associated with the area in which the user is located.
Following the description above, in the monitoring image of FIG. 4, the devices associated with area ROI1 are the living room air conditioner 403 and the living room curtain 404.
Third, based on the control object information, determine the device to be controlled from the determined devices.
Continuing the earlier example, the voice command is "turn on the air conditioner" and the control object information is "air conditioner"; combined with the area in which the user is located, the device to be controlled can thus be determined to be the living room air conditioner 403.
In other embodiments, more than one of the determined devices may correspond to the control object information. In that case, based on the control object information, the device closest to the user is selected as the device to be controlled. For example, when the voice command is "turn on the light", the control object information is "lamp"; if the devices associated with the area include several lamps, such as a desk lamp and a spotlight, the lamp closest to the user is selected as the device to be controlled. In this embodiment, the user's position in the monitoring image can be determined through human body detection, and the devices' positions in the monitoring image can be calibrated in advance, so that the device closest to the user can be determined from the position coordinates.
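A minimal sketch of this distance rule, assuming pre-calibrated device coordinates in the image plane (the coordinates and names below are invented for the illustration):

```python
import math

# Pick the candidate device nearest to the detected user position.
DEVICE_POSITIONS = {            # assumed calibration, in pixels
    "desk lamp": (120, 80),
    "spotlight": (620, 90),
}

def closest_device(user_xy: tuple[float, float], candidates: list[str]) -> str:
    """Among devices matching the control object, pick the nearest one."""
    return min(candidates, key=lambda d: math.dist(user_xy, DEVICE_POSITIONS[d]))

# user detected at (200, 150); both lamps match the control object "lamp"
print(closest_device((200, 150), ["desk lamp", "spotlight"]))  # desk lamp
```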
In still other embodiments, when more than one of the determined devices may correspond to the control object information, the device to be controlled is determined as follows.
When the user is about to issue a voice command, the user simultaneously points at the device to be controlled; this pointing gesture is taken as the predetermined posture. In step S310, the voice interaction device 110 transmits the voice command to the voice command processing device 140, which analyzes the user's behavioral intention and control object information. In step S320, the processing device 140 first obtains the corresponding monitoring image from the image acquisition device 120 and, through human body detection, detects at least one human body and determines at least one area from it. On this basis, the predetermined posture of the detected human body (that is, the user's action of pointing a finger at the device to be controlled) is extracted; then, combining the control object information and the predetermined posture, the direction in which the user's finger points is determined from the posture, and the device corresponding to the control object information in that direction is determined from the determined areas as the device to be controlled. According to embodiments of the present invention, traditional image processing algorithms can be used to detect the predetermined posture, determine its direction, and determine the device from that direction (for example, computing an approximate angle from the hand's direction and identifying the associated devices within that angular range). The embodiments of the present invention aim to provide a device-matching solution through the above implementations and place few restrictions on the specific image processing algorithm used. Moreover, the predetermined posture may be set to a different posture according to the user's habits; this is only an example, and the embodiments of the present invention place no limit on the predetermined posture.
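The angle test mentioned above could be sketched as follows; the two keypoints standing in for a pose estimator's output and the 20-degree tolerance are assumptions of this illustration:

```python
import math

# Keep only candidate devices lying roughly along the pointing direction.
def points_at(wrist, fingertip, device_xy, tolerance_deg=20.0) -> bool:
    """True if device_xy lies within tolerance of the wrist->fingertip ray."""
    pointing = math.atan2(fingertip[1] - wrist[1], fingertip[0] - wrist[0])
    to_device = math.atan2(device_xy[1] - wrist[1], device_xy[0] - wrist[0])
    diff = abs(pointing - to_device)
    diff = min(diff, 2 * math.pi - diff)  # wrap around +/- pi
    return math.degrees(diff) <= tolerance_deg

wrist, tip = (300, 200), (340, 210)       # assumed pose-estimator keypoints
lamps = {"light strip 502": (100, 150), "desk lamp 506": (520, 260)}
print([name for name, xy in lamps.items() if points_at(wrist, tip, xy)])
# -> ['desk lamp 506']
```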
FIG. 5 shows a schematic diagram of a monitoring image according to another embodiment of the present invention. As shown in FIG. 5, this monitoring image captures a bedroom. The devices in the bedroom are: bedroom central chandelier 501, bedroom light strip 502, bedroom TV 503, bedroom air conditioner 504, bedroom curtain 505, and bedroom desk lamp 506. As shown in FIG. 5, the monitoring image is divided into three areas, denoted ROI7, ROI8, and ROI9. The associations between the areas and the devices 130 are shown in Table 2.
Table 2: Example associations between devices and areas
Area | Devices
ROI7 | bedroom central chandelier 501, bedroom curtain 505
ROI8 | bedroom light strip 502, bedroom TV 503, bedroom desk lamp 506
ROI9 | bedroom air conditioner 504
In the monitoring image of FIG. 5, the user issues the voice command "turn on the light" while pointing a finger in the direction of the desk lamp 506. The processing device 140 first recognizes that the user's behavioral intention is "turn on" and the control object information is "lamp". Then, by analyzing the monitoring image, it detects the user and determines that the user is in area ROI8; at this point two devices corresponding to the control object information are determined: the bedroom light strip 502 and the bedroom desk lamp 506. Further, the user's gesture is extracted and its direction is determined to be toward the desk lamp 506, so the device to be controlled is determined to be the bedroom desk lamp 506.
It should be noted that, besides the above, the following scenario may also arise: more than one user is detected in the monitoring image (which means that more than one user area may be determined). When multiple user areas are determined, the various approaches above can be combined to finally determine the device to be controlled. For example, at least one device corresponding to the control object information is first determined from the multiple areas; then the distance between each such device and its corresponding user (that is, the user in the area associated with the device) is computed, and the device with the smallest distance is selected as the device to be controlled. As another example, it is determined whether each detected user holds the predetermined posture; the area of the user with the predetermined posture is taken as the final area, and the device associated with that area is then selected as the device to be controlled.
Subsequently, in step S330, a control instruction for the device to be controlled is generated based on the behavioral intention. Taking the scenario of FIG. 4 as an example, the voice command is "turn on the air conditioner" and the device to be controlled is determined to be the living room air conditioner 403, so the control instruction generated by the processing device 140 can be "turn on living room air conditioner 403", where "turn on" is the instruction to be executed and "living room air conditioner 403" is the instruction's recipient, that is, the device to be controlled.
According to embodiments of the present invention, the processing device 140 sends the generated control instruction to the device to be controlled, which performs the operation according to the control instruction. For example, the processing device 140 sends the control instruction to the living room air conditioner 403; after receiving the control instruction, the living room air conditioner 403 performs the turn-on operation in response to the user.
In other scenarios, the voice command input by the user may be even more concise. According to one embodiment, when a device's control states are simple, for example only on and off, the voice command issued by the user may contain only the control object information. For example, the user only needs to issue a voice command such as "light" or "TV", and the processing device 140 infers the user's behavioral intention from the current state of the device 130.
In this case, in step S310, the processing device 140 recognizes the control object information from the voice command; for example, if the user inputs the voice command "light", the processing device 140 recognizes from the voice command that the control object information is "light". The subsequent steps follow the earlier descriptions of steps S320 and S330: the device to be controlled is determined based on the area in which the user is located and the control object information, and a control instruction for it is then generated, which is not repeated here. It should be understood that lamp control generally amounts to turning the light on or off, so the processing device 140 can determine the user's behavioral intention from the current state of the "light" (whether it is on or off). For example, if the light is currently on, the user's behavioral intention is to turn it off, and the control instruction "turn off the light" is generated; if the light is currently off, the user's behavioral intention is to turn it on, and the control instruction "turn on the light" is generated.
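For this state-based inference, a minimal sketch (the state registry and all names are assumptions of the illustration) might be:

```python
# Infer the intention for a command that names only the control object:
# a two-state device is simply toggled from its current state.
DEVICE_STATE = {"light": "off", "tv": "on"}   # assumed state registry

def infer_instruction(control_object: str) -> str:
    """Map a bare object such as 'light' to a concrete on/off instruction."""
    action = "turn off" if DEVICE_STATE[control_object] == "on" else "turn on"
    return f"{action} {control_object}"

print(infer_instruction("light"))  # light is off -> "turn on light"
print(infer_instruction("tv"))     # tv is on     -> "turn off tv"
```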
In still other scenarios, the user may have expressed an intention before issuing the voice command about the control object information. According to one embodiment, the user may express the intention in advance by means such as voice or gesture, without being limited thereto. For example, the user first issues the voice command "the bedroom is dark" and then the voice command "light". In this case, the processing device 140 recognizes from the received voice command that the control object information is "light" and, combined with the previous voice command, infers that the user's behavioral intention is "turn on the light". The subsequent steps follow the earlier descriptions of steps S320 and S330: the device to be controlled is determined (that is, which light the user wants to turn on) based on the area in which the user is located and the control object information, and a control instruction for the device to be controlled is then generated; this is not repeated here.
According to the solution of the present invention, devices are associated with areas in the monitoring image, and the device the user wants to control is determined automatically by analyzing the user's voice command and the current monitoring image. In today's scenarios, where household devices (especially smart devices) come in ever more varieties and quantities, when a user wants to control a device by voice there is no need to append the device's location every time (for example, "turn on the living room air conditioner", "turn on the master bedroom air conditioner", "turn on the study air conditioner"); the user only needs to say directly that a certain device should be turned on or off, which greatly improves the user experience.
FIG. 6 shows a schematic diagram of a voice command processing device 140 according to some embodiments of the present invention. As shown in FIG. 6, the voice command processing device 140 includes a first processing unit 142, a second processing unit 144, and an instruction generation unit 146, which are coupled to one another. Among them:
the first processing unit 142 recognizes the user's behavioral intention and control object information from the voice command; the second processing unit 144 determines the device to be controlled based on the area in which the user is located and the control object information; and the instruction generation unit 146 generates a control instruction for the device to be controlled based on the behavioral intention.
It should be understood that for the details of the processing device 140, reference may be made to the description of the method 300 above; for reasons of space they are not repeated here.
The various techniques described here may be implemented in conjunction with hardware or software, or a combination of them. Thus, the method and apparatus of the present invention, or certain aspects or portions of them, may take the form of program code (that is, instructions) embedded in tangible media, such as removable hard disks, USB drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into a machine such as a computer and executed by that machine, the machine becomes an apparatus for practicing the present invention.
In the case of program code executing on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to perform the method of the present invention according to the instructions in the program code stored in the memory.
By way of example and not limitation, readable media comprise readable storage media and communication media. Readable storage media store information such as computer-readable instructions, data structures, program modules, or other data. Communication media generally embody computer-readable instructions, data structures, program modules, or other data in modulated data signals such as carrier waves or other transport mechanisms, and include any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the specification provided here, the algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the examples of the present invention. The structure required to construct such systems is apparent from the description above. Moreover, the present invention is not directed to any particular programming language; it should be understood that the contents of the present invention described herein can be implemented using a variety of programming languages, and the above descriptions of specific languages are intended to disclose the best mode of the present invention.
In the specification provided here, numerous specific details are set forth. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be appreciated that, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single foregoing disclosed embodiment. Thus, the claims following this detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
本领域那些技术人员应当理解在本文所公开的示例中的设备的模块或单元或组件可以布置在如该实施例中所描述的设备中,或者可替换地可以定位在与该示例中的设备不同的一个或多个设备中。前述示例中的模块可以组合为一个模块或者此外可以分成多个子模块。
本领域那些技术人员可以理解,可以对实施例中的设备中的模块进行自适应性地改变并且把它们设置在与该实施例不同的一个或多个设备中。可以把实施例中的模块或单元或组件组合成一个模块或单元或组件,以及此外可以把它们分成多个子模块或子单元或子组件。除了这样的特征和/或过程或者单元中的至少一些是相互排斥之外,可以采用任何组合对本说明书(包括伴随的权利要求、摘要和附图)中公开的所有特征以及如此公开的任何方法或者设备的所有过程或单元进行组合。除非另外明确陈述,本说明书(包括伴随的权利要求、摘要和附图)中公开的每个特征可以由提供相同、等同或相似目的的替代特征来代替。
此外,本领域的技术人员能够理解,尽管在此所述的一些实施例包括其它实施例中所包括的某些特征而不是其它特征,但是不同实施例的特征的组合意味着处于本发明的范围之内并且形成不同的实施例。例如,在下面的权利要求书中,所要求保护的实施例的任意之一都可以以任意的组合方式来使用。
Furthermore, some of the embodiments are described herein as a method, or a combination of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor having the necessary instructions for implementing such a method or method element forms a means for implementing the method or method element. Furthermore, an element described herein of a device embodiment is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Moreover, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the present disclosure is illustrative rather than restrictive, and the scope of the invention is defined by the appended claims.

Claims (16)

  1. A method for processing a voice instruction, comprising the steps of:
    recognizing a user's behavioral intention and control object information from a voice instruction;
    determining a device to be controlled based on an area where the user is located and the control object information; and
    generating a control instruction for the device to be controlled based on the behavioral intention.
  2. The method of claim 1, further comprising, after the control instruction for the device to be controlled is generated, the step of:
    sending the control instruction to the device to be controlled, so that the device to be controlled performs the operation in the control instruction.
  3. The method of claim 1 or 2, further comprising the steps of:
    acquiring a monitoring image, the monitoring image containing at least one device;
    pre-generating at least one area based on the monitoring image; and
    associating at least one area with each of the devices.
  4. The method of claim 1, wherein the step of determining the device to be controlled based on the area where the user is located and the control object information comprises:
    determining the area where the user is located;
    determining the devices associated with the area where the user is located; and
    determining the device to be controlled from the determined devices based on the control object information.
  5. The method of claim 4, wherein the step of determining the area where the user is located comprises:
    acquiring a current monitoring image, the monitoring image containing the user and at least one device;
    determining the area where the user is located from the monitoring image.
  6. The method of claim 5, wherein the step of determining the area where the user is located from the monitoring image comprises:
    detecting the user from the current monitoring image through human body detection;
    determining the area where the detected user is located.
  7. The method of claim 6, wherein the step of determining the device to be controlled from the determined devices based on the control object information further comprises:
    selecting, based on the control object information, the device closest to the user from the determined devices as the device to be controlled.
  8. The method of claim 6, wherein the step of determining the device to be controlled from the determined devices based on the control object information further comprises:
    extracting a predetermined posture of the detected user;
    determining the device to be controlled from the determined devices in combination with the control object information and the predetermined posture.
  9. The method of claim 3, wherein the step of pre-generating at least one area based on the monitoring image comprises:
    pre-generating at least one area based on the monitoring image, in combination with the indoor spatial layout and the positions of the devices.
  10. The method of claim 3, wherein the step of pre-generating at least one area based on the monitoring image comprises:
    pre-generating at least one area based on the monitoring image and a user-defined area distribution.
  11. A method for processing a voice instruction, comprising the steps of:
    recognizing control object information from a voice instruction;
    determining a device to be controlled based on an area where the user is located and the control object information;
    generating a control instruction for the device to be controlled.
  12. A method for processing a voice instruction, comprising the steps of:
    receiving a voice instruction;
    determining a user's behavioral intention and a device to be controlled in a monitoring image, based on the voice instruction and the monitoring image;
    generating a control instruction for the device to be controlled according to the determined behavioral intention.
  13. A device for processing a voice instruction, comprising:
    a first processing unit adapted to recognize a user's behavioral intention and control object information from a voice instruction;
    a second processing unit adapted to determine a device to be controlled based on an area where the user is located and the control object information;
    an instruction generation unit adapted to generate a control instruction for the device to be controlled based on the behavioral intention.
  14. A control system for voice instructions, comprising:
    a voice interaction device adapted to receive a user's voice instruction;
    an image acquisition device adapted to capture a monitoring image;
    at least one device;
    the processing device of claim 12, coupled to the voice interaction device, the image acquisition device, and the device(s) respectively, and adapted to determine, based on the voice instruction and the monitoring image, a user's behavioral intention and a device to be controlled from the at least one device, and to generate a control instruction for the device to be controlled, so that the device to be controlled performs the operation in the control instruction.
  15. A computing device, comprising:
    at least one processor; and
    a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any one of claims 1-12.
  16. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any one of claims 1-12.
PCT/CN2020/094323 2019-06-06 2020-06-04 Voice instruction processing method, device and control system WO2020244573A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910492557.4 2019-06-06
CN201910492557.4A CN112053683A (zh) Voice instruction processing method, device and control system

Publications (1)

Publication Number Publication Date
WO2020244573A1 true WO2020244573A1 (zh) 2020-12-10

Family

ID=73609605

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/094323 WO2020244573A1 (zh) Voice instruction processing method, device and control system

Country Status (2)

Country Link
CN (1) CN112053683A (zh)
WO (1) WO2020244573A1 (zh)

Also Published As

Publication number Publication date
CN112053683A (zh) 2020-12-08

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20818751

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20818751

Country of ref document: EP

Kind code of ref document: A1