CN111524516A - Control method based on voice interaction, server and display device - Google Patents
- Publication number
- CN111524516A (application CN202010366783.0A)
- Authority
- CN
- China
- Prior art keywords
- voice
- control instruction
- server
- recognition result
- sending
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a control method based on voice interaction, a server, and a display device. The server receives a voice recognition result sent by a terminal; determines a control instruction according to the voice recognition result, wherein the control instruction is used for controlling the display device to display a target page matched with the voice recognition result; sends the control instruction to the display device; receives voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page; and sends the voice information to the terminal so that the terminal broadcasts the voice information by voice. This method realizes multi-terminal interconnected control based on voice interaction, replaces the original manual operation with voice operation on the terminal, improves the convenience of operation for users, and improves the display efficiency and user experience of the display device.
Description
Technical Field
The present invention relates to the field of voice interaction technologies, and in particular, to a control method, a server, and a display device based on voice interaction.
Background
In recent years, with the vigorous development of technologies such as computing, artificial intelligence, and big data, a variety of products and technologies have emerged that reduce manual operation and improve user experience. Among them, voice interaction technology has developed rapidly and has become a standard functional module of mobile-phone software, and as China's population ages, more and more products are controlled on the basis of voice interaction.
Most existing voice interaction technologies are used on the mobile phone, either to input voice to complete a query or to switch household appliances within a smart-home ecosystem; they are rarely applied to display devices.
Disclosure of Invention
The embodiments of the invention provide a control method based on voice interaction, a server, and a display device, which realize multi-terminal interconnected control based on voice interaction.
In a first aspect, an embodiment of the present invention provides a server, where the server is configured to perform:
receiving a voice recognition result sent by a terminal;
determining a control instruction according to the received voice recognition result, wherein the control instruction is used for controlling a display device to display a target page matched with the voice recognition result;
sending the control instruction to the display device;
receiving voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page;
and sending the voice information to the terminal so that the terminal broadcasts the voice information by voice.
Optionally, in the server provided in the embodiment of the present invention, the determining a control instruction according to the received speech recognition result includes:
determining voice content according to the received voice recognition result;
and determining the control instruction according to the determined voice content through a pre-established mapping relation between the voice content and the control instruction.
Optionally, in the server provided in the embodiment of the present invention, the sending the control instruction to the display device includes:
encapsulating the control instruction into a data packet;
and sending the data packet to the display device through the established Websocket connection.
Optionally, in the server provided in the embodiment of the present invention, the sending the voice information to the terminal includes:
encapsulating the voice information into a data packet;
and sending the data packet to the terminal through the established Websocket connection.
In a second aspect, an embodiment of the present invention provides a display device, including a display screen, a memory, and a processor, wherein:
the memory, which is connected with the display screen and the processor, is configured to store computer instructions and save data associated with the display screen;
the processor, coupled to the display screen and the memory, configured to execute the computer instructions to cause the display device to:
receiving a control instruction sent by a server, wherein the control instruction is determined by the server according to a received voice recognition result;
responding to the control instruction, and displaying a target page matched with the voice recognition result through the display screen;
generating voice information needing voice broadcasting according to the target page;
and sending the voice information to the server.
Optionally, in the display device provided in the embodiment of the present invention, the sending the voice information to the server specifically includes:
and sending the voice information to the server through a post request.
In a third aspect, an embodiment of the present invention provides a control method based on voice interaction, including:
receiving a voice recognition result sent by a terminal;
determining a control instruction according to the received voice recognition result, wherein the control instruction is used for controlling a display device to display a target page matched with the voice recognition result;
sending the control instruction to the display device;
receiving voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page;
and sending the voice information to the terminal so that the terminal broadcasts the voice information by voice.
Optionally, in the control method provided in the embodiment of the present invention, the determining a control instruction according to the received speech recognition result includes:
determining voice content according to the received voice recognition result;
and determining the control instruction according to the determined voice content through a pre-established mapping relation between the voice content and the control instruction.
In a fourth aspect, an embodiment of the present invention provides a control method based on voice interaction, including:
receiving a control instruction sent by a server, wherein the control instruction is determined by the server according to a received voice recognition result;
responding to the control instruction, and displaying a target page matched with the voice recognition result through the display screen;
generating voice information needing voice broadcasting according to the target page;
and sending the voice information to the server.
In a fifth aspect, the present invention provides a computer-readable storage medium, where computer instructions are stored, and when executed by a processor, the computer instructions implement any one of the above methods provided by the present invention.
The invention has the following beneficial effects:
According to the control method based on voice interaction, the server, and the display device provided by the embodiments of the invention, the server receives a voice recognition result sent by a terminal; determines a control instruction according to the voice recognition result, wherein the control instruction is used for controlling the display device to display a target page matched with the voice recognition result; sends the control instruction to the display device; receives voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page; and sends the voice information to the terminal so that the terminal broadcasts the voice information by voice. This method realizes multi-terminal interconnected control based on voice interaction, replaces the original manual operation with voice operation on the terminal, improves the convenience of operation for users, and improves the display efficiency and user experience of the display device.
Drawings
Fig. 1 is a flowchart of a control method based on voice interaction according to an embodiment of the present invention;
fig. 2 is a flowchart of another control method based on voice interaction according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a display device according to an embodiment of the present invention;
fig. 4 is a flowchart of another control method based on voice interaction according to an embodiment of the present invention;
fig. 5 is an interaction flowchart of a terminal, a server, and a display device according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, the present invention is further described with reference to the accompanying drawings and examples. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their repetitive description will be omitted. The words expressing the position and direction described in the present invention are illustrated in the accompanying drawings, but may be changed as required and still be within the scope of the present invention. The drawings of the present invention are for illustrative purposes only and do not represent true scale.
It should be noted that in the following description, specific details are set forth in order to provide a thorough understanding of the present invention. The invention can be implemented in a number of ways different from those described herein and similar generalizations can be made by those skilled in the art without departing from the spirit of the invention. Therefore, the present invention is not limited to the specific embodiments disclosed below. The description which follows is a preferred embodiment of the present application, but is made for the purpose of illustrating the general principles of the application and not for the purpose of limiting the scope of the application. The protection scope of the present application shall be subject to the definitions of the appended claims.
A control method, a server, and a display device based on voice interaction according to an embodiment of the present invention are specifically described below with reference to the accompanying drawings.
In an embodiment of the present invention, as shown in fig. 1, a server is configured to perform:
S101, receiving a voice recognition result sent by a terminal;
S102, determining a control instruction according to the received voice recognition result, wherein the control instruction is used for controlling a display device to display a target page matched with the voice recognition result;
S103, sending the control instruction to the display device;
S104, receiving voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page;
S105, sending the voice information to the terminal so that the terminal broadcasts the voice information by voice.
According to the server provided by the embodiment of the invention, the server receives a voice recognition result sent by a terminal; determines a control instruction according to the voice recognition result, wherein the control instruction is used for controlling the display device to display a target page matched with the voice recognition result; sends the control instruction to the display device; receives voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page; and sends the voice information to the terminal so that the terminal broadcasts the voice information by voice. This method realizes multi-terminal interconnected control based on voice interaction, replaces the original manual operation with voice operation on the terminal, improves the convenience of operation for users, and improves the display efficiency and user experience of the display device.
In specific implementation, in the present invention, the terminal may run an application having a voice recognition function. The terminal needs to support data transmission, voice acquisition, and voice playback; for example, it may include components such as a microphone and a speaker, and it has a communication function and can access the Internet. The terminal may be a mobile phone, a tablet computer, a smart wearable device, a desktop computer, a notebook computer, or the like. After receiving the user's voice, the terminal performs voice recognition and sends the voice recognition result to the server through a network request; when it receives the voice information sent by the server, the terminal broadcasts the voice information by voice.
In specific implementation, the application having the voice recognition function completes the recognition of the voice input and the broadcast of the voice information returned by the server by means of a Software Development Kit (SDK) with voice recognition and a packaged Text-To-Speech (TTS) synthesis module.
Specifically, after recognizing the input voice, the SDK with voice recognition encapsulates the voice recognition result into a post request and calls a background interface to send the post request to the server. After the terminal receives, through the Websocket connection, the voice information returned by the server, the packaged TTS synthesis module performs speech synthesis, and the audio is then played.
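The terminal's receive path described above might be handled by a callback like the one below: a hedged sketch in which `tts_synthesize` and `audio_play` stand in for the packaged TTS synthesis module and the audio player, and the JSON packet field name is an assumption, since the patent does not fix a wire format.

```python
import json

def on_server_voice_information(packet: str, tts_synthesize, audio_play) -> None:
    """Handle voice information pushed by the server over the Websocket
    connection: synthesize speech with the TTS module, then play it."""
    voice_information = json.loads(packet)["data"]  # field name assumed
    audio_play(tts_synthesize(voice_information))
```

Such a function would be registered as the on-message handler of the terminal's Websocket client.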
Optionally, in the server provided in the embodiment of the present invention, as shown in fig. 2, the step S102 of determining the control instruction according to the received voice recognition result includes:
S1021, determining the voice content according to the received voice recognition result;
S1022, determining the control instruction according to the determined voice content through the pre-established mapping relation between the voice content and the control instruction.
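A minimal sketch of steps S1021-S1022, assuming the pre-established mapping relation is a simple lookup table; the entries, instruction format, and function name are illustrative assumptions, not from the patent.

```python
# Hypothetical mapping from voice content to control instructions.
VOICE_TO_INSTRUCTION = {
    "i want to display the xx cloud picture": {
        "action": "display_page",
        "page": "xx_cloud_picture",
    },
}

def determine_control_instruction(recognition_result: str):
    """S1021: normalize the recognition result into voice content;
    S1022: look the content up in the pre-established mapping."""
    voice_content = recognition_result.strip().lower()
    return VOICE_TO_INSTRUCTION.get(voice_content)
```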
Optionally, in the server provided in the embodiment of the present invention, as shown in fig. 2, step S103 sends the control instruction to the display device, where the step includes:
S1031, encapsulating the control instruction into a data packet;
S1032, sending the data packet to the display device through the established Websocket connection.
Optionally, in the server provided in the embodiment of the present invention, as shown in fig. 2, step S105 sends the voice information to the terminal, where the step includes:
S1051, encapsulating the voice information into a data packet;
S1052, sending the data packet to the terminal through the established Websocket connection.
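The packaging-and-sending pattern shared by S1031-S1032 and S1051-S1052 might look like the following sketch. The JSON packet format is an assumption (the patent does not fix one), and `websocket` stands for any established connection object exposing a `send()` coroutine, such as one from the Python `websockets` library.

```python
import json

def encapsulate(kind: str, payload) -> str:
    """S1031/S1051: wrap a control instruction or voice information
    in a JSON data packet (format assumed for illustration)."""
    return json.dumps({"type": kind, "data": payload})

async def send_packet(websocket, kind: str, payload) -> None:
    """S1032/S1052: send the data packet over the established
    Websocket connection."""
    await websocket.send(encapsulate(kind, payload))
```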
Based on the same inventive concept, an embodiment of the present invention further provides a display device, as shown in fig. 3, including a display screen 01, a memory 02, and a processor 03, where:
a memory 02 connected to the display screen 01 and the processor 03 and configured to store computer instructions and save data associated with the display screen 01;
a processor 03, connected to the display screen 01 and the memory 02, configured to execute computer instructions to cause the display device to perform the steps as shown in fig. 4:
S401, receiving a control instruction sent by the server, wherein the control instruction is determined by the server according to a received voice recognition result;
S402, responding to the control instruction, and displaying a target page matched with the voice recognition result through the display screen;
S403, generating voice information needing voice broadcasting according to the target page;
S404, sending the voice information to the server.
The display device provided by the embodiment of the invention receives a control instruction sent by the server, wherein the control instruction is determined by the server according to a received voice recognition result; responds to the control instruction and displays, through the display screen, a target page matched with the voice recognition result; generates, according to the target page, the voice information that needs to be broadcast; and sends the voice information to the server, so that the server sends the voice information to the terminal. This method realizes multi-terminal interconnected control based on voice interaction, replaces the original manual operation with voice operation on the terminal, improves the convenience of operation for users, and improves the display efficiency and user experience of the display device.
In particular implementations, the processor may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor may also include a main processor and a coprocessor: the main processor, also called a Central Processing Unit (CPU), processes data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
When implemented, the memory may include one or more computer-readable storage media, which may be non-transitory. The memory may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
Optionally, in the display device provided in the embodiment of the present invention, sending the voice information to the server specifically includes:
and sending the voice information to the server through a post request.
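As a rough illustration of this optional step, the sketch below builds (but does not send) such a post request from the display device to the server. The endpoint URL and JSON field name are hypothetical; only the request shape follows the text.

```python
import json
import urllib.request

VOICE_ENDPOINT = "http://server.example/api/voice-information"  # hypothetical URL

def build_voice_post(voice_information: str) -> urllib.request.Request:
    """Encapsulate the generated voice information into a post request
    (built here, not actually sent)."""
    body = json.dumps({"voice_information": voice_information}).encode("utf-8")
    return urllib.request.Request(
        VOICE_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```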
Specifically, the terminal, the server, and the display device provided by the embodiment of the invention are described below by taking as an example a case in which the target page displayed on the display screen is a cloud picture page. The interaction process of the terminal, the server, and the display device, shown in fig. 5, includes the following steps:
Step 1: the user speaks the voice "I want to display the XX cloud picture" to the terminal;
Step 2: the terminal recognizes the voice and sends the voice recognition result to the server.
Step 3: the server determines a control instruction according to the received voice recognition result.
Specifically, a mapping relationship between the voice content and the control instruction is established in advance according to all cloud pictures which can be displayed by the display device.
Specifically, the server determines the voice content according to the received voice recognition result; and determining a control instruction according to the determined voice content through a mapping relation between the pre-established voice content and the control instruction, wherein the control instruction is used for controlling the display equipment to display a target page matched with the voice recognition result.
Step 4: the server sends the control instruction to the display device;
Step 5: the display device responds to the control instruction, displays the XX cloud picture page through the display screen, and generates, according to the content of the cloud picture page, the voice information that needs to be broadcast.
Step 6: the display device sends the voice information to the server;
Step 7: the server sends the voice information to the terminal;
Step 8: the terminal broadcasts the voice information by voice.
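The eight steps above can be compressed into an in-process sketch, with plain function calls standing in for the post requests and Websocket pushes of the real deployment; the names, the "recognition", and the voice-information wording are all illustrative assumptions.

```python
def run_interaction(user_utterance: str) -> tuple:
    """Trace steps 1-8 of fig. 5 within a single process."""
    # Steps 1-2: the terminal recognizes the user's speech and sends the
    # recognition result to the server (recognition reduced to lowercasing).
    recognition_result = user_utterance.lower()
    # Step 3: the server maps the recognition result to a control instruction.
    instruction = {"action": "display_page", "page": recognition_result}
    # Steps 4-5: the display device receives the instruction, shows the
    # target page, and generates voice information from its content.
    displayed_page = instruction["page"]
    voice_information = f"Now displaying the {displayed_page} page."
    # Steps 6-8: the voice information travels back through the server and
    # is broadcast by the terminal (represented by the returned string).
    return displayed_page, voice_information
```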
In the embodiment of the invention, the server is used as a link to enable the terminal to perform voice interaction with the display equipment, so that the method for realizing multi-terminal interconnection control based on voice interaction can be realized, and the original manual operation can be replaced by terminal voice operation, thereby improving the operation convenience of a user and improving the display efficiency and user experience of the display equipment.
Based on the same inventive concept, an embodiment of the present invention further provides a control method based on voice interaction, as shown in fig. 1, including:
S101, receiving a voice recognition result sent by a terminal;
S102, determining a control instruction according to the received voice recognition result, wherein the control instruction is used for controlling a display device to display a target page matched with the voice recognition result;
S103, sending the control instruction to the display device;
S104, receiving voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page;
S105, sending the voice information to the terminal so that the terminal broadcasts the voice information by voice.
Optionally, in the control method provided in the embodiment of the present invention, determining the control instruction according to the received speech recognition result includes:
determining voice content according to the received voice recognition result;
and determining the control instruction according to the determined voice content through a pre-established mapping relation between the voice content and the control instruction.
The control method based on voice interaction provided by the above embodiments and the embodiments of the server belong to the same concept, and specific implementation processes thereof are detailed in the embodiments of the server and are not described herein again.
Based on the same inventive concept, an embodiment of the present invention further provides a control method based on voice interaction, as shown in fig. 4, including:
S401, receiving a control instruction sent by a server, wherein the control instruction is determined by the server according to a received voice recognition result;
S402, responding to the control instruction, and displaying a target page matched with the voice recognition result through a display screen;
S403, generating voice information needing voice broadcasting according to the target page;
S404, sending the voice information to the server.
The embodiment of the control method based on voice interaction provided above and the embodiment of the display device belong to the same concept, and the specific implementation process thereof is detailed in the embodiment of the display device and is not described herein again.
Based on the same inventive concept, the embodiment of the present invention further provides a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are executed by a processor, the computer instructions implement any one of the methods provided by the embodiment of the present invention.
When embodied, the computer readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Claims (10)
1. A server, characterized in that the server is configured to perform:
receiving a voice recognition result sent by a terminal;
determining a control instruction according to the received voice recognition result, wherein the control instruction is used for controlling a display device to display a target page matched with the voice recognition result;
sending the control instruction to the display device;
receiving voice information which needs to be broadcast by voice and is sent by the display device, wherein the voice information is generated according to the content of the target page;
and sending the voice information to the terminal so that the terminal broadcasts the voice information by voice.
2. The server according to claim 1, wherein the determining a control instruction according to the received voice recognition result comprises:
determining voice content according to the received voice recognition result;
and determining the control instruction according to the determined voice content through a mapping relation between the pre-established voice content and the control instruction.
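The pre-established mapping relation of claim 2 can be as simple as a dictionary lookup keyed on normalized voice content. The entries and the instruction format below are invented examples, not content from the patent.

```python
# Hypothetical pre-established mapping from voice content to control
# instructions (claim 2); entries and instruction fields are invented.
VOICE_TO_INSTRUCTION = {
    "show the weather": {"action": "display_page", "target": "weather"},
    "open settings":    {"action": "display_page", "target": "settings"},
}

def determine_control_instruction(recognition_result: str):
    """Determine the voice content from the recognition result, then look
    up the matching control instruction (None if no mapping exists)."""
    voice_content = recognition_result.strip().lower()
    return VOICE_TO_INSTRUCTION.get(voice_content)
```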
3. The server of claim 1, wherein the sending the control instruction to the display device comprises:
encapsulating the control instruction into a data packet;
and sending the data packet to the display device through the established WebSocket connection.
4. The server according to claim 1, wherein said sending the voice information to the terminal comprises:
encapsulating the voice information into a data packet;
and sending the data packet to the terminal through the established WebSocket connection.
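Claims 3 and 4 both encapsulate a payload into a data packet before sending it over the established WebSocket connection. A minimal JSON-based sketch follows; the packet layout and field names are invented, since the patent does not specify a wire format, and the connection is modeled as a plain send callable.

```python
import json

def encapsulate_packet(packet_type: str, payload) -> str:
    """Wrap a payload (a control instruction or voice information) in a
    simple JSON data packet; the {"type", "payload"} layout is invented."""
    return json.dumps({"type": packet_type, "payload": payload})

def send_packet(ws_send, packet_type, payload):
    """Send the encapsulated packet over an already-established WebSocket
    connection, modeled here as a plain `ws_send` callable."""
    ws_send(encapsulate_packet(packet_type, payload))
```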
5. A display device comprising a display screen, a memory, and a processor, wherein:
the memory, connected to the display screen and the processor, is configured to store computer instructions and to save data associated with the display screen;
the processor, coupled to the display screen and the memory, configured to execute the computer instructions to cause the display device to:
receiving a control instruction sent by a server, wherein the control instruction is determined by the server according to a received voice recognition result;
responding to the control instruction, and displaying a target page matched with the voice recognition result through the display screen;
generating, according to the target page, voice information that requires voice broadcasting;
and sending the voice information to the server.
6. The display device of claim 5, wherein the sending the voice information to the server specifically comprises:
sending the voice information to the server through a POST request.
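The POST request of claim 6 could be built with the Python standard library as follows. The endpoint URL and the JSON body shape are invented for illustration; the patent specifies only that a POST request is used.

```python
import json
import urllib.request

def build_voice_info_request(server_url: str, voice_info: str):
    """Build a POST request carrying the generated voice information
    (claim 6); the {"voice_info": ...} body shape is an assumption."""
    body = json.dumps({"voice_info": voice_info}).encode("utf-8")
    return urllib.request.Request(
        server_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

`urllib.request.urlopen(request)` would then perform the actual send; it is omitted here so the sketch has no network dependency.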
7. A control method based on voice interaction is characterized by comprising the following steps:
receiving a voice recognition result sent by a terminal;
determining a control instruction according to the received voice recognition result, wherein the control instruction is used for controlling a display device to display a target page matched with the voice recognition result;
sending the control instruction to the display device;
receiving, from the display device, voice information that requires voice broadcasting, wherein the voice information is generated according to the content of the target page;
and sending the voice information to the terminal, so that the terminal broadcasts the voice information by voice.
8. The control method of claim 7, wherein the determining a control instruction according to the received speech recognition result comprises:
determining voice content according to the received voice recognition result;
and determining the control instruction according to the determined voice content through a mapping relation between the pre-established voice content and the control instruction.
9. A control method based on voice interaction is characterized by comprising the following steps:
receiving a control instruction sent by a server, wherein the control instruction is determined by the server according to a received voice recognition result;
responding to the control instruction, and displaying a target page matched with the voice recognition result through a display screen;
generating, according to the target page, voice information that requires voice broadcasting;
and sending the voice information to the server.
10. A computer-readable storage medium, having stored thereon computer instructions which, when executed by a processor, implement the method of any one of claims 7 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010366783.0A CN111524516A (en) | 2020-04-30 | 2020-04-30 | Control method based on voice interaction, server and display device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111524516A true CN111524516A (en) | 2020-08-11 |
Family
ID=71912105
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010366783.0A Pending CN111524516A (en) | 2020-04-30 | 2020-04-30 | Control method based on voice interaction, server and display device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111524516A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111968638A (en) * | 2020-08-14 | 2020-11-20 | 上海茂声智能科技有限公司 | Method, system, equipment and storage medium for voice control display terminal |
CN112102828A (en) * | 2020-09-04 | 2020-12-18 | 杭州中软安人网络通信股份有限公司 | Voice control method and system for automatically broadcasting content on large screen |
CN112256230A (en) * | 2020-10-16 | 2021-01-22 | 广东美的厨房电器制造有限公司 | Menu interaction method and system and storage medium |
CN112786048A (en) * | 2021-03-05 | 2021-05-11 | 百度在线网络技术(北京)有限公司 | Voice interaction method and device, electronic equipment and medium |
CN113778367A (en) * | 2020-10-14 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Voice interaction method, device, equipment and computer readable medium |
CN114244879A (en) * | 2021-12-15 | 2022-03-25 | 北京声智科技有限公司 | Industrial control system, industrial control method and electronic equipment |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5930752A (en) * | 1995-09-14 | 1999-07-27 | Fujitsu Ltd. | Audio interactive system |
JP2004015443A (en) * | 2002-06-06 | 2004-01-15 | Nec Corp | Display-voice cooperation system, server, and method |
US20060116881A1 (en) * | 2004-12-01 | 2006-06-01 | Nec Corporation | Portable-type communication terminal device, contents output method, distribution server and method thereof, and contents supply system and supply method thereof |
CN202307121U (en) * | 2011-11-04 | 2012-07-04 | 中国人民解放军装甲兵技术学院 | Communication data information broadcasting system based on voice synthesis |
CN203055434U (en) * | 2012-07-30 | 2013-07-10 | 刘强 | Family speech interactive terminal based on cloud technique |
KR20130089501A (en) * | 2012-02-02 | 2013-08-12 | 김용진 | Method and apparatus for providing voice value added service |
CN103546790A (en) * | 2013-09-18 | 2014-01-29 | 深圳市掌世界网络科技有限公司 | Language interaction method and language interaction system on basis of mobile terminal and interactive television |
TW201508734A (en) * | 2013-08-28 | 2015-03-01 | Dynalab Singapore Ltd | Method for converting contents of multi-pages into voice play and automatically switching content pages |
CN106648291A (en) * | 2016-09-28 | 2017-05-10 | 珠海市魅族科技有限公司 | Method and device for displaying information and broadcasting information |
CN108536655A (en) * | 2017-12-21 | 2018-09-14 | 广州市讯飞樽鸿信息技术有限公司 | Audio production method and system are read aloud in a kind of displaying based on hand-held intelligent terminal |
CN108763500A (en) * | 2018-05-30 | 2018-11-06 | 深圳壹账通智能科技有限公司 | Voice-based Web browser method, device, equipment and storage medium |
CN109120993A (en) * | 2018-09-30 | 2019-01-01 | Tcl通力电子(惠州)有限公司 | Audio recognition method, intelligent terminal, speech recognition system and readable storage medium storing program for executing |
CN109389967A (en) * | 2018-09-04 | 2019-02-26 | 深圳壹账通智能科技有限公司 | Voice broadcast method, device, computer equipment and storage medium |
CN109448709A (en) * | 2018-10-16 | 2019-03-08 | 华为技术有限公司 | A kind of terminal throws the control method and terminal of screen |
CN109979460A (en) * | 2019-03-11 | 2019-07-05 | 上海白泽网络科技有限公司 | Visualize voice messaging exchange method and device |
CN110379430A (en) * | 2019-07-26 | 2019-10-25 | 腾讯科技(深圳)有限公司 | Voice-based cartoon display method, device, computer equipment and storage medium |
CN110992955A (en) * | 2019-12-25 | 2020-04-10 | 苏州思必驰信息科技有限公司 | Voice operation method, device, equipment and storage medium of intelligent equipment |
CN111048090A (en) * | 2019-12-27 | 2020-04-21 | 苏州思必驰信息科技有限公司 | Animation interaction method and device based on voice |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111524516A (en) | Control method based on voice interaction, server and display device | |
CN109388367B (en) | Sound effect adjusting method and device, electronic equipment and storage medium | |
CN104754536A (en) | Method and system for realizing communication between different languages | |
CN107731231B (en) | Method for supporting multi-cloud-end voice service and storage device | |
CN109101216B (en) | Sound effect adjusting method and device, electronic equipment and storage medium | |
CN103558916A (en) | Man-machine interaction system, method and device | |
CN103905644A (en) | Generating method and equipment of mobile terminal call interface | |
EP4164232A1 (en) | Information processing method, system and apparatus, and electronic device and storage medium | |
JP7311707B2 (en) | Human-machine interaction processing method | |
KR102358012B1 (en) | Speech control method and apparatus, electronic device, and readable storage medium | |
CN105120063A (en) | Volume prompting method of input voice and electronic device | |
CN103324459A (en) | Method and system for implementing USB (universal serial bus) headset devices | |
CN109240641B (en) | Sound effect adjusting method and device, electronic equipment and storage medium | |
US9369587B2 (en) | System and method for software turret phone capabilities | |
CN106776039A (en) | A kind of data processing method and device | |
CN114244821B (en) | Data processing method, device, equipment, electronic equipment and storage medium | |
CN104010154A (en) | Information processing method and electronic equipment | |
CN111124229A (en) | Method, system and browser for realizing webpage animation control through voice interaction | |
CN106782578B (en) | Distributed decoding controller, distributed decoding method and audio terminal | |
CN110035308A (en) | Data processing method, equipment and storage medium | |
CN116319689A (en) | IVVR video interaction realization method, system and storage medium based on HTML5 | |
US12022149B2 (en) | Method for processing sound information, and non-transitory computer storage medium and electronic device | |
CN104683550A (en) | Information processing method and electronic equipment | |
CN117037826A (en) | Audio identification method, device, electronic equipment and storage medium | |
KR102509106B1 (en) | Method for providing speech video and computing device for executing the method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200811 |