CN106875946B - Voice control interactive system - Google Patents
Voice control interactive system
- Publication number
- CN106875946B (application CN201710155933.1A)
- Authority
- CN
- China
- Prior art keywords
- voice
- information
- module
- server
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
Abstract
The invention discloses a voice control interaction system comprising a first server and a plurality of voice interaction devices distributed in different rooms. Each voice interaction device detects and senses human activity information, records that information, collects voice information, preprocesses the collected voice information, sends the preprocessed voice information and the human activity information to the first server, and plays back information sent by the first server. The first server analyzes the received voice information, matches corresponding information, and sends the matched information to the appropriate voice interaction device according to the received human activity information. Compared with the prior art, the voice interaction devices in this system can detect and sense human activity and can communicate with one another, so that multiple devices work cooperatively, improving the convenience of human-machine voice interaction and the smoothness of the user experience.
Description
Technical Field
The invention relates to the technical field of man-machine interaction and voice recognition, in particular to a voice control interaction system.
Background
With the rapid development of speech recognition technology, human-machine voice interaction has become increasingly common. A human-machine voice interaction system lets people communicate with a machine by voice: the machine recognizes what was said and gives a corresponding answer. For example, such a system can report the weather at a given location, or provide navigation guidance by telling a person the route to a desired destination.
Voice interaction is a very important human-machine interaction mode in the smart home. A voice interaction device fixedly installed in a room can collect the voice commands of people in that room and feed information back through voice playback. However, an existing voice interaction device can only serve the room it is installed in: if a person walks to another room, voice interaction is no longer possible. Even when voice interaction devices are installed in several rooms, current devices cannot communicate with each other and cannot work cooperatively. For example, if a person issues a voice command in room A and then walks to room B, the voice feedback is still played back by the device in room A, where the person can no longer hear it. This reduces the convenience of human-machine interaction and degrades the user experience.
In view of this, it is desirable to provide a voice control interaction system in which the voice interaction devices in a residence can perform data interaction no matter which room a person is in, thereby enlarging the spatial range of human-machine interaction and improving convenience of use.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a voice control interaction system in which the voice interaction devices in a residence can perform data interaction no matter which room a person is in, enlarging the spatial range of human-machine interaction and improving convenience of use.
To solve the above technical problem, the invention provides a voice control interaction system comprising a first server and a plurality of voice interaction devices distributed in different rooms, the voice interaction devices communicating with one another. Each voice interaction device detects and senses human activity information, records that information, collects voice information, preprocesses the collected voice information, sends the preprocessed voice information and the human activity information to the first server, and plays back information sent by the first server. The first server analyzes the received voice information, matches corresponding information, and sends the matched information to the appropriate voice interaction device according to the received human activity information.
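The end-to-end flow just described — devices report presence and preprocessed voice, the server matches an answer and routes it to the room where a person is sensed — can be sketched as follows. This is an illustrative sketch only; the `DeviceReport` and `FirstServer` names and the table of canned answers are assumptions, not part of the invention.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceReport:
    """Hypothetical per-device report; the patent specifies no wire format."""
    room: str                 # room where the device is installed
    activity: bool            # human presence detected in this room
    voice_text: Optional[str] # preprocessed (denoised) voice query, if any

class FirstServer:
    """Minimal sketch of the first server: match a query to stored
    information, then route the answer to the device in the room where
    human activity was sensed."""

    ANSWERS = {"weather": "Sunny, 22 degrees"}  # stand-in for the data storage module

    def handle(self, reports):
        query = next(r.voice_text for r in reports if r.voice_text)
        target = next(r.room for r in reports if r.activity)  # room with presence
        return target, self.ANSWERS.get(query, "no match")

server = FirstServer()
room, answer = server.handle([
    DeviceReport("living room", activity=False, voice_text="weather"),
    DeviceReport("bedroom", activity=True, voice_text=None),
])
# The answer is routed to the bedroom, where the user now is.
```

The routing decision depends only on the human activity information, not on which device captured the voice — which is exactly what lets feedback follow the user between rooms.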
The further technical scheme is as follows: the voice interaction devices comprise a master device and a plurality of slave devices; the slave devices exchange data with the master device, and the master device exchanges data with the first server.
The further technical scheme is as follows: the voice interaction device comprises a human body sensing module, a voice acquisition module, a central processing module, a first network connection module, and a voice decoding and playing module; the central processing module is connected with the human body sensing module, the voice acquisition module, the first network connection module, and the voice decoding and playing module. The human body sensing module is used for detecting and sensing human activity information; the voice acquisition module is used for acquiring voice information uttered by a person; the central processing module is used for recording the human activity information and preprocessing the acquired voice information; the first network connection module is used for communicating with the other voice interaction devices to send them the human activity information, and for communicating with the first server to send it the preprocessed voice information and human activity information and to receive the information it sends back; and the voice decoding and playing module is used for decompressing and playing the information sent by the first server.
The further technical scheme is as follows: the human body sensing module comprises an infrared detector, a human body thermal sensor, and/or a video camera.
The further technical scheme is as follows: the voice acquisition module comprises an array of one or more microphones.
The further technical scheme is as follows: the central processing module comprises a central processing unit/microprocessor, a nonvolatile memory and a random access memory.
The further technical scheme is as follows: the first server comprises a second network connection module, a data storage module and a data processing module. The second network connection module is used for communicating with the voice interaction equipment to receive the voice information and the human body activity information sent by the voice interaction equipment, and sending the matched information to the corresponding voice interaction equipment according to the received human body activity information; the data storage module is used for storing information and updating the information in real time; and the data processing module is used for analyzing and processing the received voice information and communicating with the data storage module to acquire information matched with the voice information after analysis and processing.
The further technical scheme is as follows: the first server further comprises a first data generation module and a first data recording module. The first data generation module is used for generating a corresponding timestamp from the received human activity information and generating a voice analysis flag bit from the received preprocessed voice information; the first data recording module is used for recording the timestamp and the voice analysis flag bit, and for acquiring and recording, from the data processing module according to the voice analysis flag bit, the information matching the received voice information.
The further technical scheme is as follows: the voice control interaction system further comprises a second server that communicates with the voice interaction devices and the first server. The second server is used for generating and recording a timestamp for the human activity information, receiving the voice information preprocessed by the voice interaction devices, generating and recording a voice analysis flag bit, and acquiring and recording from the first server, according to the flag bit, the information matching the voice information from the voice interaction devices.
The further technical scheme is as follows: the second server comprises a third network connection module, a second data generation module, and a second data recording module. The third network connection module is used for communicating with the voice interaction devices and the first server to receive the human activity information, the preprocessed voice information, and the information sent by the first server that matches the preprocessed voice information; the second data generation module is used for generating a corresponding timestamp from the human activity information and generating a voice analysis flag bit from the received preprocessed voice information; the second data recording module is used for recording the timestamp, the voice analysis flag bit, and the information acquired from the first server that matches the received voice information.
Compared with the prior art, the voice interaction devices in this voice control interaction system can detect and sense human activity information and can communicate with one another, so that multiple voice interaction devices work cooperatively. A device in one room collects a user's command; when the user moves to another room, the device there continues collecting the command, receives the voice information collected by the device in the previous room, and splices and preprocesses the information. The first server receives the preprocessed voice information, analyzes it to match the corresponding information, and, according to the human activity information, sends that information to the voice interaction device in the room where the user is, to be played back. The system greatly improves the convenience of voice-based human-machine interaction and the smoothness of the user experience.
Drawings
Fig. 1 shows a block diagram of a first embodiment of the voice-controlled interactive system of the present invention.
Fig. 2 shows a specific application scenario of the first embodiment of the voice-controlled interactive system of the present invention.
Fig. 3 shows a block diagram of a second embodiment of the voice-controlled interactive system of the present invention.
Fig. 4 shows a block diagram of a third embodiment of the voice-controlled interactive system of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood by those skilled in the art, the present invention is further described with reference to the accompanying drawings and examples.
Referring to fig. 1, fig. 1 shows a block diagram of a first embodiment of a voice-controlled interactive system 10 of the present invention. In the embodiment shown, the system 10 comprises a first server 12 and a plurality of voice interaction devices 11 distributed in different rooms; the voice interaction devices 11 communicate with the first server 12 and can also communicate with one another. The voice interaction device 11 is configured to detect and sense human activity information, which includes events such as a person entering a room, issuing a command, and leaving the room, to record it, and to send it to the other voice interaction devices 11. It also collects voice information, preprocesses it, that is, denoises the collected voice, sends the preprocessed voice information and the human activity information to the first server 12, and plays back information sent by the first server 12. The first server 12 is configured to analyze the received voice information, match corresponding information, and send the matched information to the appropriate voice interaction device 11 according to the received human activity information, that is, to the device 11 in the room where a person's presence is detected and sensed.
In some embodiments, for example in this embodiment, the voice interaction device 11 includes a human body sensing module 111, a voice acquisition module 112, a central processing module 113, a first network connection module 114, and a voice decoding and playing module 115; the central processing module 113 is connected to the human body sensing module 111, the voice acquisition module 112, the first network connection module 114, and the voice decoding and playing module 115.
The human body sensing module 111 is used for detecting and sensing human activity information; it comprises an infrared detector, a human body thermal sensor, and/or a video camera. The voice acquisition module 112 is used for acquiring voice information uttered by a person, such as weather queries, traffic queries, music, news, and commands to switch home appliances on and off or query their state. It comprises an array of one or more microphones; if a microphone is analog, a matching analog-to-digital converter (ADC) is required. The central processing module 113 is used for recording human activity information and preprocessing the acquired voice information. It comprises a central processing unit or microprocessor, a nonvolatile memory, and a volatile random access memory. The processor is a high-performance, low-power ARM-architecture processor with rich interfaces for connecting to and communicating with external modules. The nonvolatile memory stores the embedded operating system and the application programs or algorithms run by the whole system, as well as system configuration information and some temporary user data. The volatile random access memory, because of its fast access speed, stores data and intermediate results buffered during system operation.
The first network connection module 114 is configured to communicate with the other voice interaction devices 11 to send them the human activity information, and to communicate with the first server 12 to send it the preprocessed voice information and human activity information and to receive the information it sends back. The voice decoding and playing module 115 is configured to decompress and play the information sent by the first server 12. If the central processing module 113 has audio decoding and playback capability, it can drive an external speaker directly; otherwise it must be connected to a chip with audio decoding and playback functions, which is in turn connected to the speaker through a digital-to-analog converter and an amplifier circuit.
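The playback wiring choice above — decode on the central processing module when it has audio capability, otherwise route through an external decoder chip, DAC, and amplifier — amounts to a simple decision, sketched below. The function and component names are illustrative assumptions, not terms from the patent.

```python
def build_playback_chain(cpu_has_codec: bool) -> list:
    """Return the ordered playback signal chain for the device.

    If the central processing module can decode audio itself, it drives
    the speaker directly; otherwise an external audio decoder chip,
    digital-to-analog converter, and amplifier circuit are inserted.
    """
    chain = ["central processing module"]
    if not cpu_has_codec:
        chain += ["audio decoder chip", "DAC", "amplifier"]
    chain.append("speaker")
    return chain

direct = build_playback_chain(True)    # CPU decodes and plays directly
external = build_playback_chain(False) # external decode path
```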
In some embodiments, for example in this embodiment, after a voice interaction device 11 is installed in a room, it accesses the network through a wireless router. After the device is powered on, the name and password of the wireless router are written into the device 11 through the system's own human-machine interface. On receiving the router's access information, the device 11 encrypts it and automatically stores it in nonvolatile memory, then tries to access the wireless router with it: if access succeeds, the device reports success and starts working; if not, it reports an error. From the second device 11 onward, each device automatically accesses the wireless router through the Wi-Fi Mesh protocol. In addition, each time a device 11 is added, it tries to discover whether any device 11 is already configured and, if so, automatically joins the already configured device network.
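The provisioning flow above — first device configured manually with the router credentials, which it encrypts and stores in nonvolatile memory; later devices discover an already configured peer and join its network — might be sketched as follows. The `Device` class, the base64 stand-in for encryption, and the shared `configured` list (standing in for mesh discovery) are all illustrative assumptions.

```python
import base64

class Device:
    """Sketch of the provisioning flow; names are illustrative."""

    configured = []  # stand-in for devices discoverable over the Wi-Fi mesh

    def __init__(self):
        self.nvram = {}  # stand-in for the nonvolatile memory

    def provision(self, ssid=None, password=None):
        if Device.configured:
            # A peer is already configured: join its network and copy its
            # stored access information (mesh join, greatly simplified).
            peer = Device.configured[0]
            self.nvram = dict(peer.nvram)
        else:
            # First device: "encrypt" (here just base64, as a placeholder)
            # and store the router credentials in nonvolatile memory.
            token = base64.b64encode(f"{ssid}:{password}".encode())
            self.nvram["router"] = token
        Device.configured.append(self)
        return "working" if self.nvram else "error"

a, b = Device(), Device()
status_a = a.provision("home-ap", "secret")  # first device: manual configuration
status_b = b.provision()                     # second device: auto-joins via the peer
```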
In the embodiment shown in the figure, the first server 12 includes a second network connection module 121, a data storage module 122 and a data processing module 123. The second network connection module 121 is configured to communicate with the voice interaction devices 11 to receive the voice information and human activity information they send, and to send the matched information to the appropriate voice interaction device 11 according to the received human activity information. The data storage module 122 is configured to store information, such as weather conditions, traffic conditions, music, news, and home appliance states, and to update it online in real time. The data processing module 123 is configured to analyze the received voice information and to communicate with the data storage module 122 to obtain the information matching the analyzed voice.
In some embodiments, for example in this embodiment, the plurality of voice interaction devices 11 comprises one master device and several slave devices: the device 11 with the strongest networking capability, or the one that joined the network earliest, is set as the master, and all other devices 11 are slaves. The slaves exchange data with the master, and the master exchanges data with the first server 12. In this embodiment, a slave sends its human activity information and collected voice information to the master; the master splices and preprocesses the voice information received from the slaves, sends the preprocessed voice information to the first server 12, receives the matched information the server sends back, and forwards it, according to the received human activity information, to the slave that senses a person's presence, which then plays it. All devices 11 use a precision clock synchronization protocol conforming to IEEE 1588, which, with the frequency provided by the hardware crystal oscillator, can reach sub-microsecond synchronization accuracy; the master's clock provides the clock source, and all slaves in the group keep synchronized to it.
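The master selection rule above — the device that joined the network earliest, or is most capable, becomes the master — can be sketched minimally. This sketch uses join time as the sole criterion; the `Node` class and names are illustrative assumptions.

```python
class Node:
    """A voice interaction device in the group; names are illustrative."""
    def __init__(self, name, join_time):
        self.name = name
        self.join_time = join_time  # seconds since group formation

def elect_master(nodes):
    # The earliest-joined device becomes the master; all others are slaves
    # and will synchronize their clocks to the master's clock source.
    return min(nodes, key=lambda n: n.join_time)

nodes = [Node("N2", 5.0), Node("N1", 1.0), Node("N3", 9.0)]
master = elect_master(nodes)
slaves = [n for n in nodes if n is not master]
```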
Referring to fig. 2, fig. 2 shows a specific application scenario of the first embodiment of the voice-controlled interactive system 10 of the present invention. In this scenario, voice interaction device N1 is the master and devices N2 to Nn are slaves; the slaves exchange data with the master, the master exchanges data with the first server 12, and the devices 11 are distributed in different rooms. Understandably, the voice interaction devices 11 in the system 10 can communicate with one another and cooperate. For example, if the user speaks part of a voice command in the room with device N2, then enters the room with device Nn and finishes the command there, both N2 and Nn send their portions to the master, device N1, which splices and preprocesses the two parts before sending the result to the first server 12. Similarly, if the user speaks a command in the room with device N2, hears part of the voice feedback before leaving, and then enters another room, device N1 controls the device 11 in that room to switch over seamlessly and play the remaining, as yet unplayed, feedback.
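The splicing step above depends on the synchronized clocks: the master orders the partial commands captured in different rooms by timestamp before joining them into one command. A minimal sketch, with illustrative names and timestamps:

```python
def splice(segments):
    """Master-side sketch: order partial voice segments from different
    rooms by their synchronized capture timestamps (here, plain floats
    standing in for IEEE 1588 time) and join them into one command."""
    return " ".join(text for _, text in sorted(segments))

# Segment from device Nn arrived first in the list but was captured later;
# the timestamp ordering restores the spoken order.
parts = [(12.8, "in the living room"), (12.1, "turn on the lights")]
command = splice(parts)
# → "turn on the lights in the living room"
```

Without cross-device clock synchronization, arrival order at the master could differ from spoken order, which is why the patent ties splicing to the shared clock source.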
Referring to fig. 3, fig. 3 shows a block diagram of a second embodiment of the voice-controlled interactive system 10 of the present invention. This embodiment differs from the first in that the first server 12 further includes a first data generating module 124 and a first data recording module 125. The first data generating module 124 generates a timestamp for each item of received human activity information, that is, for a person entering a room, issuing a command, or leaving the room, and generates a voice analysis flag bit for each item of received preprocessed voice information; the flag prompts the first data recording module 125 to extract, from the data processing module 123, the information matching that preprocessed voice. The first data recording module 125 records the timestamps and the voice analysis flags and, according to each flag, acquires and records the matching information from the data processing module 123. With these two modules the system can collect the behavior triggered by voice interaction at a specific time in a specific room, accurately recording the user's behavior in space and time. The system 10 can thus provide accurate decision data, from the spatial and temporal perspective, to smart home, smart entertainment, smart security and similar systems, improving user experience, saving resources, and improving the economy of each such system.
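The generation and recording of timestamps and voice analysis flags described above might look like the following sketch. The `DataRecorder` class and its method names are illustrative assumptions, and the matched result stands in for what the data processing module 123 would return.

```python
import time

class DataRecorder:
    """Sketch of the first data generation/recording modules: stamp each
    activity event, flag each preprocessed utterance, and later attach
    the matched result under that flag."""

    def __init__(self):
        self.records = []

    def on_activity(self, room, event):
        # event is one of: "enter", "command", "leave"
        self.records.append({"room": room, "event": event, "ts": time.time()})

    def on_voice(self, utterance):
        # The voice analysis flag bit, used later to attach the match.
        flag = f"voice-{len(self.records)}"
        self.records.append({"flag": flag, "utterance": utterance, "result": None})
        return flag

    def on_match(self, flag, result):
        # Result fetched from the data processing module via the flag.
        for r in self.records:
            if r.get("flag") == flag:
                r["result"] = result

rec = DataRecorder()
rec.on_activity("bedroom", "enter")
flag = rec.on_voice("what's the weather")
rec.on_match(flag, "Sunny, 22 degrees")
```

The record stream pairs where-and-when (activity timestamps) with what-was-asked-and-answered (flagged utterances), which is the decision data the embodiment exposes to other smart systems.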
Referring to fig. 4, fig. 4 is a block diagram of a third embodiment of the voice-controlled interactive system 10 of the present invention. This embodiment differs from the first in that the system 10 further includes a second server 13, which communicates with the voice interaction devices 11 and the first server 12. The second server 13 is configured to receive the human activity information and generate and record a corresponding timestamp, to receive the voice information preprocessed by the voice interaction devices 11 and generate and record a voice analysis flag bit, and to acquire from the first server 12, according to the flag, the information matching that voice information, and record it.
In some embodiments, for example in this embodiment, the second server 13 includes a third network connection module 131, a second data generating module 132, and a second data recording module 133. The third network connection module 131 communicates with the voice interaction devices 11 and the first server 12 to receive the human activity information and preprocessed voice information sent by the devices 11 and the matching information sent by the first server 12. The second data generating module 132 generates a timestamp for each item of human activity information, that is, for a person entering a room, issuing a command, or leaving the room, and generates a voice analysis flag bit for each item of received preprocessed voice information; the flag prompts the second server 13 to extract the matching information from the first server 12. The second data recording module 133 records the timestamps, the flags, and the matching information acquired from the first server 12. Like the second embodiment, this embodiment accurately records the user's behavior in space and time, so that the system 10 can provide accurate decision data to other systems from the spatial and temporal perspective, improving user experience, saving resources, and improving the economy of each intelligent system.
The specific implementations differ, however: this embodiment adds a server so that voice interaction and behavior collection and analysis run on different servers, whereas in the second embodiment they run as different services on the same server.
In summary, the voice interaction devices in the voice control interaction system of the present invention can detect and sense human activity information and can communicate with one another, so that multiple voice interaction devices work cooperatively. A device in one room collects a user's command; when the user moves to another room, the device there continues collecting the command, receives the voice information collected by the device in the previous room, and splices and preprocesses the information. The first server receives the preprocessed voice information, analyzes it to match the corresponding information, and, according to the human activity information, sends that information to the voice interaction device in the room where the user is, to be played back. The system greatly improves the convenience of voice-based human-machine interaction and the smoothness of the user experience.
The foregoing describes preferred embodiments of the invention and is not to be construed as limiting it in any way. Those skilled in the art can make various equivalent changes and modifications based on the above embodiments, and all such changes and modifications within the scope of the claims fall within the protection scope of the present invention.
Claims (9)
1. A voice control interactive system, comprising: a plurality of voice interaction devices distributed in different rooms and a first server, the voice interaction devices communicating with one another;
the voice interaction devices are used for detecting and sensing human body activity information, recording the human body activity information, and acquiring voice information; the voice interaction devices comprise a master device and a plurality of slave devices that exchange data with the master device, the slave devices exchanging data through the master device; the master device is further used for receiving the voice information acquired by the slave devices, splicing and preprocessing the acquired and received voice information, sending the preprocessed voice information and the human body activity information to the first server, and playing the information sent by the first server;
the first server is used for analyzing and processing the received voice information, matching corresponding information and sending the corresponding information to corresponding voice interaction equipment according to the received human body activity information.
2. The voice-controlled interactive system of claim 1, wherein the voice interaction device comprises: a human body sensing module, a voice acquisition module, a central processing module, a first network connection module, and a voice decoding and playing module; the central processing module is connected with the human body sensing module, the voice acquisition module, the first network connection module, and the voice decoding and playing module; wherein,
the human body sensing module is used for detecting and sensing human body activity information;
the voice acquisition module is used for acquiring voice information sent by a human body;
the central processing module is used for recording human body activity information and preprocessing the acquired voice information;
the first network connection module is used for communicating with other voice interaction devices to send the human body activity information to other voice interaction devices, and communicating with the first server to send the preprocessed voice information and the human body activity information to the first server and receive the information sent by the first server;
and the voice decoding and playing module is used for decompressing and playing the information sent by the first server.
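The five modules of claim 2 could be wired as sketched below, with the central processing module mediating all data flow. Every class and method name is a hypothetical software stand-in for the hardware module it is named after; the preprocessing and sensor payloads are placeholders.

```python
# Illustrative wiring of claim 2's modules: the central processing module is
# connected to the sensing, acquisition, network, and playback modules.
# All names and payloads here are hypothetical, not from the patent.

class HumanSensingModule:
    def detect(self):
        return {"activity": True, "room": "kitchen"}

class VoiceAcquisitionModule:
    def capture(self) -> bytes:
        return b"turn on the lights"

class NetworkModule:
    def __init__(self):
        self.sent = []
    def send_to_server(self, payload):
        self.sent.append(payload)

class VoicePlaybackModule:
    def play(self, data: bytes) -> str:
        return f"playing {len(data)} bytes"

class CentralProcessingModule:
    """Connected to all four peripheral modules, as in claim 2."""
    def __init__(self, sensing, acquisition, network, playback):
        self.sensing = sensing
        self.acquisition = acquisition
        self.network = network
        self.playback = playback
        self.activity_log = []

    def run_once(self):
        # record activity, preprocess captured voice, ship both upstream
        self.activity_log.append(self.sensing.detect())
        voice = self.acquisition.capture()
        preprocessed = voice.lower()   # placeholder preprocessing
        self.network.send_to_server(
            {"voice": preprocessed, "activity": self.activity_log[-1]})

    def deliver(self, data: bytes) -> str:
        # hand information received from the first server to playback
        return self.playback.play(data)
```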
3. The voice-controlled interactive system of claim 2, wherein: the human body sensing module comprises at least one of an infrared detector, a human body thermal sensor, and a video camera.
4. The voice-controlled interactive system of claim 2, wherein: the voice acquisition module comprises a microphone array formed by one or more microphones.
5. The voice-controlled interactive system of claim 2, wherein: the central processing module comprises a central processing unit or microprocessor, a nonvolatile memory, and a random access memory.
6. The voice-controlled interactive system of claim 1, wherein: the first server comprises a second network connection module, a data storage module, and a data processing module; wherein:
the second network connection module is used for communicating with the voice interaction devices to receive the voice information and the human activity information they send, and for sending the matched information to the corresponding voice interaction device according to the received human activity information;
the data storage module is used for storing information and updating the information in real time;
and the data processing module is used for analyzing and processing the received voice information, and for communicating with the data storage module to acquire the information matched with the analyzed voice information.
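The analyze-then-lookup pipeline of claim 6's data processing and data storage modules might look like the sketch below. The keyword-spotting "analysis" is a deliberately crude stand-in for real speech analysis, and all names and stored entries are hypothetical.

```python
# Illustrative sketch of claim 6's server side: the data processing module
# analyzes the voice information and queries the data storage module for a
# matching entry. Keyword spotting stands in for real speech analysis.

class DataStorageModule:
    """Stores the information that replies are matched from."""
    def __init__(self):
        self.store = {"weather": "sunny today", "time": "12:00"}
    def lookup(self, keyword: str) -> str:
        return self.store.get(keyword, "no match")

class DataProcessingModule:
    """Analyzes incoming voice and asks the storage module for a match."""
    def __init__(self, storage: DataStorageModule):
        self.storage = storage
    def analyze(self, voice: str) -> str:
        # stand-in analysis: return the entry for the first known keyword
        for keyword in self.storage.store:
            if keyword in voice:
                return self.storage.lookup(keyword)
        return "no match"
```

Separating analysis from storage, as the claim does, lets the stored information be "updated in real time" (claim 6's data storage module) without touching the analysis path.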
7. The voice-controlled interactive system of claim 6, wherein: the first server further comprises a first data generation module and a first data recording module; wherein:
the first data generation module is used for generating a corresponding timestamp according to the received human activity information, and generating a voice analysis flag bit according to the received preprocessed voice information;
the first data recording module is used for recording the timestamp generated from the human activity information and the voice analysis flag bit generated from the received preprocessed voice information, and for acquiring from the data processing module, according to the flag bit, and recording the information matched with the received voice information.
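Claim 7's timestamp and flag-bit bookkeeping can be sketched as follows. Here a UUID token stands in for the patent's "voice analysis flag bit" (the patent does not specify its format), and all class and method names are hypothetical.

```python
# Illustrative sketch of claim 7: generate a timestamp per activity record
# and a flag bit per utterance, then key the recorded match on the flag bit.
# A UUID stands in for the unspecified "voice analysis flag bit" format.

import time
import uuid

class FirstDataGenerationModule:
    def make_timestamp(self, activity) -> float:
        return time.time()
    def make_flag_bit(self, preprocessed_voice: bytes) -> str:
        # unique token per utterance, so its match can be found later
        return uuid.uuid4().hex

class FirstDataRecordingModule:
    def __init__(self):
        self.records = {}
    def record(self, flag_bit: str, timestamp: float):
        self.records[flag_bit] = {"timestamp": timestamp, "matched": None}
    def attach_match(self, flag_bit: str, matched_info: str):
        # "acquire and record, according to the flag bit, the matched info"
        self.records[flag_bit]["matched"] = matched_info
```

Keying records on the flag bit is what lets the asynchronous match result, arriving later from the data processing module, be attached to the right utterance.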
8. The voice-controlled interactive system of any one of claims 1-6, wherein: the system further comprises a second server in communication with the voice interaction devices and the first server, the second server being used for generating and recording a timestamp of the human activity information, receiving the voice information preprocessed by the voice interaction devices, generating and recording a voice analysis flag bit, and acquiring and recording, from the first server according to the flag bit, the information matched with the voice information from the voice interaction devices.
9. The voice-controlled interactive system of claim 8, wherein: the second server comprises a third network connection module, a second data generation module, and a second data recording module; wherein:
the third network connection module is used for communicating with the voice interaction device and the first server to receive the human body activity information, the preprocessed voice information and the information which is sent by the first server and matched with the preprocessed voice information;
the second data generation module is used for generating a corresponding timestamp according to the human activity information, and generating a voice analysis flag bit according to the received preprocessed voice information;
the second data recording module is used for recording a timestamp generated according to the human activity information, a voice analysis flag bit generated according to the received preprocessed voice information and information which is acquired from the first server and matched with the received voice information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710155933.1A CN106875946B (en) | 2017-03-14 | 2017-03-14 | Voice control interactive system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106875946A CN106875946A (en) | 2017-06-20 |
CN106875946B true CN106875946B (en) | 2020-10-27 |
Family
ID=59172458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710155933.1A Active CN106875946B (en) | 2017-03-14 | 2017-03-14 | Voice control interactive system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106875946B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109285540A (en) * | 2017-07-21 | 2019-01-29 | 致伸科技股份有限公司 | The operating system of digital speech assistant |
CN107800763B (en) * | 2017-09-08 | 2021-12-03 | 冯源 | Control method and system for display content of table lamp |
CN109754798B (en) * | 2018-12-20 | 2021-10-15 | 歌尔股份有限公司 | Multi-loudspeaker-box synchronous control method and system and loudspeaker box |
CN112133300A (en) * | 2019-06-25 | 2020-12-25 | 腾讯科技(深圳)有限公司 | Multi-device interaction method, related device and system |
CN112767931A (en) * | 2020-12-10 | 2021-05-07 | 广东美的白色家电技术创新中心有限公司 | Voice interaction method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070250316A1 (en) * | 2006-04-20 | 2007-10-25 | Vianix Delaware, Llc | System and method for automatic merging of multiple time-stamped transcriptions |
CN106157950A (en) * | 2016-09-29 | 2016-11-23 | 合肥华凌股份有限公司 | Speech control system and awakening method, Rouser and household electrical appliances, coprocessor |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103795841B (en) * | 2012-10-30 | 2015-11-25 | 联想(北京)有限公司 | A kind of information cuing method and device |
CN103856228B (en) * | 2012-12-06 | 2016-08-03 | 汉王科技股份有限公司 | A kind of wireless human-computer interactive method and system |
US9842489B2 (en) * | 2013-02-14 | 2017-12-12 | Google Llc | Waking other devices for additional data |
CN104484151A (en) * | 2014-12-30 | 2015-04-01 | 江苏惠通集团有限责任公司 | Voice control system, equipment and method |
KR20160108874A (en) * | 2015-03-09 | 2016-09-21 | 주식회사셀바스에이아이 | Method and apparatus for generating conversation record automatically |
US9898902B2 (en) * | 2015-04-16 | 2018-02-20 | Panasonic Intellectual Property Management Co., Ltd. | Computer implemented method for notifying user by using a speaker |
US9710460B2 (en) * | 2015-06-10 | 2017-07-18 | International Business Machines Corporation | Open microphone perpetual conversation analysis |
CN106488286A (en) * | 2015-08-28 | 2017-03-08 | 上海欢众信息科技有限公司 | High in the clouds Information Collection System |
CN105516289A (en) * | 2015-12-02 | 2016-04-20 | 广东小天才科技有限公司 | Method and system for assisting voice interaction based on position and action |
CN205864405U (en) * | 2016-07-01 | 2017-01-04 | 佛山市顺德区美的电热电器制造有限公司 | Wearable and there is its control system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106875946B (en) | Voice control interactive system | |
CN107135443B (en) | Signal processing method and electronic equipment | |
US9774998B1 (en) | Automatic content transfer | |
EP3300074B1 (en) | Information processing apparatus | |
CN104049721B (en) | Information processing method and electronic equipment | |
KR20190024762A (en) | Music Recommendation Method, Apparatus, Device and Storage Media | |
WO2019037732A1 (en) | Television set with microphone array, and television system | |
US20150086033A1 (en) | Reduced Latency Electronic Content System | |
CN108574515B (en) | Data sharing method, device and system based on intelligent sound box equipment | |
CN110827818A (en) | Control method, device, equipment and storage medium of intelligent voice equipment | |
JP2019204074A (en) | Speech dialogue method, apparatus and system | |
CN109901698B (en) | Intelligent interaction method, wearable device, terminal and system | |
CN108766438A (en) | Man-machine interaction method, device, storage medium and intelligent terminal | |
CN110047497B (en) | Background audio signal filtering method and device and storage medium | |
US20190267005A1 (en) | Portable audio device with voice capabilities | |
CN108702572A (en) | Control method, system and the medium of audio output | |
CN110956963A (en) | Interaction method realized based on wearable device and wearable device | |
CN105005379A (en) | Integrated audio playing device and audio playing method thereof | |
CN110611861B (en) | Directional sound production control method and device, sound production equipment, medium and electronic equipment | |
CN112702633A (en) | Multimedia intelligent playing method and device, playing equipment and storage medium | |
CN112151013A (en) | Intelligent equipment interaction method | |
KR101995443B1 (en) | Method for verifying speaker and system for recognizing speech | |
CN109597996A (en) | A kind of semanteme analytic method, device, equipment and medium | |
US20170193552A1 (en) | Method and system for grouping devices in a same space for cross-device marketing | |
CN112735403B (en) | Intelligent home control system based on intelligent sound equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||