CN106847256A - Voice conversion chat method - Google Patents
Voice conversion chat method
- Publication number
- CN106847256A CN106847256A CN201611223813.2A CN201611223813A CN106847256A CN 106847256 A CN106847256 A CN 106847256A CN 201611223813 A CN201611223813 A CN 201611223813A CN 106847256 A CN106847256 A CN 106847256A
- Authority
- CN
- China
- Prior art keywords
- text information
- voice
- module
- voice messaging
- chat
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Acoustics & Sound (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The present invention relates to a voice conversion chat method implemented by a sending client, a server, and a receiving client. The sending client receives the information requested to be sent; the information includes text information and voice information. The server translates the information to obtain corresponding text information, matches the obtained text information against customized voices stored in a database, and translates the text information into customized voice information. The receiving client receives the translated voice information and plays it back. Input text is thus converted into voice information and sent to the receiving client to be listened to, which is convenient for communication between visually impaired people and the elderly. At the same time, when a voice chat message is sent, the transmitted voice information is converted into a customized voice before transmission, avoiding communication failures caused by regional differences in language and effectively improving the efficiency and accuracy of voice chat.
Description
Technical field
The invention belongs to the field of mobile communication technology and relates to a voice chat method, in particular to a voice conversion chat method.
Background art
Chatting with other people through a mobile terminal has become a major part of daily work and life. During a chat, communication is mainly based on sending text information and voice information, which is convenient to read or listen to. At present, however, information sent as text is also received as text, which is inconvenient for people with poor eyesight and laborious for the elderly, who easily misread it, and the sent text cannot be converted into voice for visually impaired or elderly people to listen to. Moreover, sent text often cannot convey the context of speech, so people cannot intuitively understand the context behind the words from their literal meaning alone. At the same time, the differences between southern and northern dialects are large, and everyone's level of Mandarin varies, so in a voice call the local accent carried by the speaker often prevents the listener from promptly understanding the meaning of what was said, creating an obstacle to voice communication.
Summary of the invention
In order to overcome the above shortcomings of the prior art, an object of the present invention is to provide a voice conversion chat method that can convert text information into voice information and can customize the voice information, thereby facilitating communication.
To achieve the above object, the present invention adopts the following technical scheme. A voice conversion chat method is implemented by a sending client, a server, and a receiving client.

The sending client receives the information requested to be sent; the information includes text information and voice information.

The server translates the information to obtain corresponding text information, matches the obtained text information against the customized voices stored in a database, and translates the text information into customized voice information.

The receiving client receives the translated voice information and plays it back.

The text information contains the punctuation marks as sent. When the server receives text information carrying punctuation marks, it performs context analysis on it with speech evaluation and synthesis, analyzes the context conveyed by the text information, and matches the text information against a local resource voice bank; when the matching degree is >= 90%, the voice message containing the corresponding context is invoked directly.
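The local-bank matching described above can be sketched in plain Java. This is an illustrative sketch only: the bank contents, the similarity measure, and every class and method name below are assumptions, since the patent specifies only the punctuation-carrying text, the context analysis, and the >= 90% matching threshold.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the ">= 90% match" lookup against a local voice bank.
public class ContextMatcher {
    // Maps a text pattern (including its punctuation) to a stored voice message id.
    private final Map<String, String> localVoiceBank = new HashMap<>();

    public void register(String text, String voiceMessageId) {
        localVoiceBank.put(text, voiceMessageId);
    }

    // A toy similarity measure: fraction of characters equal at the same position,
    // normalized by the longer string. A real system would use something richer.
    static double matchDegree(String a, String b) {
        int len = Math.min(a.length(), b.length());
        int same = 0;
        for (int i = 0; i < len; i++) {
            if (a.charAt(i) == b.charAt(i)) same++;
        }
        return (double) same / Math.max(a.length(), b.length());
    }

    // Returns the stored voice message id when some entry matches >= 90%, else null.
    public String lookup(String incomingText) {
        String best = null;
        double bestDegree = 0.0;
        for (Map.Entry<String, String> e : localVoiceBank.entrySet()) {
            double d = matchDegree(incomingText, e.getKey());
            if (d > bestDegree) {
                bestDegree = d;
                best = e.getValue();
            }
        }
        return bestDegree >= 0.90 ? best : null; // the patent's 90% threshold
    }

    public static void main(String[] args) {
        ContextMatcher m = new ContextMatcher();
        m.register("Are you coming tonight?", "voice_0001");
        System.out.println(m.lookup("Are you coming tonight?")); // exact match
        System.out.println(m.lookup("Completely different text.")); // below threshold
    }
}
```

On a hit, the pre-recorded voice message for that context is played directly instead of synthesizing fresh speech.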
The sending client and the receiving client each include:

an input module, for calling a recording device to convert the user's voice into voice data;

a first sending module, for uploading the voice data to the server;

a first receiving module, for receiving the chat information sent by the server;

a display module, for displaying the received chat information;

a storage module, for storing the displayed chat information and the customized voices and generating a chat record;

a translation module, for parsing voice information into text information, matching the text information against the customized voices in the storage module, and converting successfully matched text information into voice information; or for matching text information against the customized voices in the storage module and converting successfully matched text information into voice information;

a playback module, for calling a playback device to play back the translated voice information.
The input module is a recording module provided with an on-screen touch record button; when the button is tapped, the user's voice is recorded. The first sending module is provided with an on-screen touch send button; when tapped, the voice data is uploaded. The playback module presents chat information as an on-screen touch play button; when tapped, the voice is played back. The display module shows the chat information on the display screen.
The method also includes a context translation module, for performing context analysis with speech evaluation and synthesis on the punctuation marks appearing in the text, analyzing the context conveyed by the text information, and matching the text information against the local resource voice bank; when the matching degree is >= 90%, the voice message containing the corresponding context is invoked directly.
The beneficial effects of the invention are as follows. The voice conversion chat method provided by the invention converts input text into voice information and sends it to the receiving client to be listened to, which is convenient for communication between visually impaired people and the elderly. At the same time, when a voice chat message is sent, the transmitted voice information is converted into a customized voice before transmission, avoiding situations where the listener cannot promptly understand the intended meaning because of regional differences in language; this prevents communication failures and effectively improves the efficiency and accuracy of voice chat.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the voice chat method of the present invention;

Fig. 2 is a schematic diagram of the connections between the mobile terminal and the server in the present invention.
Specific embodiments

The present invention is described in detail below with reference to the accompanying drawings and embodiments.
Embodiment 1
A voice conversion chat method as shown in Fig. 1 is implemented by a sending client, a server, and a receiving client.

The sending client receives the information requested to be sent; the information includes text information and voice information.

The server translates the information to obtain corresponding text information, matches the obtained text information against the customized voices stored in a database, and translates the text information into customized voice information.

The receiving client receives the translated voice information and plays it back.
Specifically, when the sender enters text on the mobile client and taps send, the input text information is matched and parsed against the customized voices on the server; after matching is complete, the text information is converted into voice information and transmitted. The publication refers here to a Java/Android code example for the conversion, which is not reproduced in this text.
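Since the publication's own Java/Android example is not reproduced in its text, the following is a heavily simplified, illustrative stand-in for the send-side flow (typed text synthesized into a customized voice for transmission). The class names and the byte-array audio placeholder are assumptions; a real implementation would invoke an actual TTS engine with the selected voice profile.

```java
import java.nio.charset.StandardCharsets;

// Sketch of the text-send path: text typed on the client is matched to a
// customized voice on the server, synthesized, and pushed on as audio.
public class TextToVoiceSend {
    // Stand-in for server-side synthesis in a customized voice profile;
    // here we merely tag the bytes so the flow is observable.
    static byte[] synthesize(String text, String voiceProfile) {
        return (voiceProfile + ":" + text).getBytes(StandardCharsets.UTF_8);
    }

    // The flow the embodiment describes: receive typed text, synthesize it in
    // the customized voice, and return the payload for the receiving client.
    static byte[] sendAsVoice(String typedText, String customVoice) {
        return synthesize(typedText, customVoice);
    }

    public static void main(String[] args) {
        byte[] payload = sendAsVoice("hello", "voiceA");
        System.out.println(new String(payload, StandardCharsets.UTF_8));
    }
}
```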
If the information sent is voice information, then after the voice information is recorded on the mobile client, the server translates the recorded voice information and converts it into a text message. The text information is then matched against the customized voice information on the server; after a successful match, the text information is converted into voice information and played back as voice on the receiving client. The customized voice may be, for example, the voice of Lin Zhiling, Liu Yan, Song Xiaobao, or Yue Yunpeng. This avoids misunderstandings that arise when speech from different regions cannot be clearly understood by the listener. The publication refers here to a further annotated Java/Android code example for the translation, which is not reproduced in this text.
If the loaded local resource pack is available, it can be called directly (the corresponding code example is likewise not reproduced in this text).
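The voice path just described (record, transcribe on the server, match, re-synthesize in a customized voice), together with the local-resource-pack shortcut, can be sketched as plain Java. The speech-recognition and synthesis engines are stubbed with string operations, and every name below is an illustrative assumption rather than the publication's omitted example.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of the voice path: incoming voice is transcribed, the transcript is
// looked up in the loaded local resource pack first, and only on a miss is the
// text re-synthesized (stubbed here) in the chosen customized voice.
public class VoiceRelay {
    private final Map<String, byte[]> localPack = new HashMap<>();

    void loadLocal(String text, byte[] audio) {
        localPack.put(text, audio);
    }

    // Stand-in for speech recognition (the description later names the iFlytek
    // MSC SDK for this step); here the "audio" is just UTF-8 text bytes.
    static String transcribe(byte[] voiceData) {
        return new String(voiceData, StandardCharsets.UTF_8);
    }

    // Stand-in for synthesis in a customized voice.
    static byte[] synthesize(String text, String customVoice) {
        return (customVoice + ":" + text).getBytes(StandardCharsets.UTF_8);
    }

    // Full relay: prefer the local resource pack; otherwise synthesize.
    byte[] relay(byte[] incomingVoice, String customVoice) {
        String text = transcribe(incomingVoice);
        byte[] local = localPack.get(text);
        return local != null ? local : synthesize(text, customVoice);
    }

    public static void main(String[] args) {
        VoiceRelay r = new VoiceRelay();
        r.loadLocal("good morning", "cached-audio".getBytes(StandardCharsets.UTF_8));
        byte[] hit = r.relay("good morning".getBytes(StandardCharsets.UTF_8), "voiceB");
        byte[] miss = r.relay("see you at eight".getBytes(StandardCharsets.UTF_8), "voiceB");
        System.out.println(new String(hit, StandardCharsets.UTF_8));  // from the local pack
        System.out.println(new String(miss, StandardCharsets.UTF_8)); // synthesized fallback
    }
}
```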
During a chat, text messages often fail to convey the speaker's context; punctuation marks are mostly used for expression, but this cannot fully convey the atmosphere and context of speech and may cause misunderstandings in the conversation. Therefore, the text information contains the punctuation marks as sent. When the server receives text information carrying punctuation marks, it performs context analysis on it with speech evaluation and synthesis, analyzes the context conveyed by the text information, and matches the text information against the local resource voice bank; when the matching degree is >= 90%, the voice message containing the corresponding context is invoked directly.
The punctuation marks at the end of the client's text information drive the speech evaluation and synthesis of the context analysis: after the client sends text information to the app chat system, the client analyzes the text information and performs text matching in the client's local resource voice bank; a match of >= 90% directly invokes the voice message containing the corresponding context. If the corresponding context resource cannot be found locally, the client voice system forwards the request to the server (because iFlytek's context evaluation and synthesis function, a customization service similar to its LingXi assistant, is not used here, a database matching method is adopted for the time being); the server performs database matching and returns the corresponding formatted voice file.
The publication refers here to an annotated PHP server-side code example, which is not reproduced in this text.
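The server-side step above (database matching that returns the corresponding formatted voice file) is described as PHP in the publication but not reproduced. The following is a minimal sketch of the same lookup, written in Java for consistency with the other examples here; the in-memory map standing in for the database, the table contents, and the file-naming scheme are all assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of the server-side database match: the context text is the key and
// the value is the path of a pre-formatted voice file to return to the client.
public class VoiceFileDb {
    private final Map<String, String> table = new HashMap<>();

    void insert(String contextText, String voiceFilePath) {
        table.put(contextText, voiceFilePath);
    }

    // Returns the formatted voice file for the context, if the database has one;
    // an empty Optional means no match was found and no file is returned.
    Optional<String> match(String contextText) {
        return Optional.ofNullable(table.get(contextText));
    }

    public static void main(String[] args) {
        VoiceFileDb db = new VoiceFileDb();
        db.insert("really?!", "/voices/ctx_surprise_01.wav");
        System.out.println(db.match("really?!").orElse("no-match"));
        System.out.println(db.match("unseen context").orElse("no-match"));
    }
}
```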
Embodiment 2
As shown in Fig. 2, the sending client and the receiving client each include: an input module, a first sending module, a first receiving module, a display module, a storage module, a translation module, a playback module, and a context translation module.

The input module is a recording module provided with an on-screen touch record button; when tapped, the user's voice is recorded. The first sending module is provided with an on-screen touch send button; when tapped, the voice data is uploaded. The playback module presents chat information as an on-screen touch play button; when tapped, the voice is played back.
As shown in Fig. 2, taking a voice chat among multiple users in the WeChat application as an example, the method specifically includes the following steps:

1) The multiple users establish a voice chat environment through their respective mobile terminals;

2) Each user in the voice chat environment taps the on-screen touch record button of his or her mobile terminal to record; after the input module calls the recording device to convert the user's voice into voice data, the user taps the on-screen touch send button and the first sending module uploads the data to the server;

3) The server receives the voice data through a second receiving module and translates the voice data using the translation module to obtain the corresponding text content;

4) The server matches the translated text information against the customized voice information stored in the storage module; after a successful match, the matched customized voice information is used to generate the voice chat content, and the chat information is sent to the mobile terminal by a second sending module;

5) The mobile terminal receives the voice chat information through the first receiving module and plays it back through the playback module for people to listen to.
Embodiment 3
Taking a text chat in the WeChat application as an example, the method specifically includes the following steps:

1) The multiple users establish a voice chat environment through their respective mobile terminals;

2) Each user in the voice chat environment taps the screen of his or her mobile terminal to input text, then taps the on-screen touch send button and the first sending module uploads the text to the server;

3) The server receives the text data through the receiving module and translates the text data using the translation module to obtain the corresponding text content;

4) The server matches the translated text information against the customized voice information stored in the storage module; after a successful match, the matched customized voice information is used to generate the voice chat content, and the chat information is sent to the mobile terminal by the second sending module;

5) The mobile terminal receives the voice chat information through the first receiving module and plays it back through the playback module for people to listen to.

During text input, any punctuation marks are uploaded to the server together with the text; the server performs context analysis with speech evaluation and synthesis on the punctuation marks appearing in the text, analyzes the context conveyed by the text information, and matches the text information against the local resource voice bank; when the matching degree is >= 90%, the voice message containing the corresponding context is invoked directly.
The translation module and the context translation module call the iFlytek (科大讯飞) MSC SDK.
The above examples are merely illustrations of the present invention and do not limit the scope of protection of the present invention; all designs identical or similar to the present invention fall within the scope of protection of the present invention.
Claims (5)
1. A voice conversion chat method, characterized in that the method is implemented by a sending client, a server, and a receiving client;

the sending client receives the information requested to be sent, the information including text information and voice information;

the server translates the information to obtain corresponding text information, matches the obtained text information against customized voices stored in a database, and translates the text information into customized voice information;

the receiving client receives the translated voice information and plays back the received voice information.
2. The voice conversion chat method according to claim 1, characterized in that: the text information contains the punctuation marks as sent; when receiving text information carrying punctuation marks, the server performs context analysis on it with speech evaluation and synthesis, analyzes the context conveyed by the text information, and matches the text information against a local resource voice bank; when the matching degree is >= 90%, the voice message containing the corresponding context is invoked directly.
3. The voice conversion chat method according to claim 1 or 2, characterized in that the sending client and the receiving client each include:

an input module, for calling a recording device to convert the user's voice into voice data;

a first sending module, for uploading the voice data to the server;

a first receiving module, for receiving the chat information sent by the server;

a display module, for displaying the received chat information;

a storage module, for storing the displayed chat information and the customized voices and generating a chat record;

a translation module, for parsing voice information into text information, matching the text information against the customized voices in the storage module, and converting successfully matched text information into voice information; or for matching text information against the customized voices in the storage module and converting successfully matched text information into voice information;

a playback module, for calling a playback device to play back the translated voice information.
4. The voice conversion chat method according to claim 3, characterized in that the input module is a recording module provided with an on-screen touch record button, the user's voice being recorded when the button is tapped; the first sending module is provided with an on-screen touch send button, the voice data being uploaded when tapped; and the playback module presents chat information as an on-screen touch play button, the voice being played back when tapped.
5. The voice conversion chat method according to claim 2, characterized in that the method also includes a context translation module for performing context analysis with speech evaluation and synthesis on the punctuation marks appearing in the text, analyzing the context conveyed by the text information, and matching the text information against the local resource voice bank, the voice message containing the corresponding context being invoked directly when the matching degree is >= 90%.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611223813.2A CN106847256A (en) | 2016-12-27 | 2016-12-27 | A kind of voice converts chat method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611223813.2A CN106847256A (en) | 2016-12-27 | 2016-12-27 | A kind of voice converts chat method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106847256A true CN106847256A (en) | 2017-06-13 |
Family
ID=59136716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611223813.2A Pending CN106847256A (en) | 2016-12-27 | 2016-12-27 | A kind of voice converts chat method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106847256A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107634898A (en) * | 2017-08-18 | 2018-01-26 | 上海云从企业发展有限公司 | True man's voice information communication is realized by the chat tool on electronic communication equipment |
CN107644646A (en) * | 2017-09-27 | 2018-01-30 | 北京搜狗科技发展有限公司 | Method of speech processing, device and the device for speech processes |
CN108062955A (en) * | 2017-12-12 | 2018-05-22 | 深圳证券信息有限公司 | A kind of intelligence report-generating method, system and equipment |
CN108494573A (en) * | 2018-03-29 | 2018-09-04 | 丁超 | Group chat method, apparatus and information terminal |
CN109246214A (en) * | 2018-09-10 | 2019-01-18 | 北京奇艺世纪科技有限公司 | A kind of prompt tone acquisition methods, device, terminal and server |
CN110111770A (en) * | 2019-05-10 | 2019-08-09 | 濮阳市顶峰网络科技有限公司 | A kind of multilingual social interpretation method of network, system, equipment and medium |
CN113163053A (en) * | 2020-01-22 | 2021-07-23 | 阿尔派株式会社 | Electronic device and play control method |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1379391A (en) * | 2001-04-06 | 2002-11-13 | 国际商业机器公司 | Method of producing individual characteristic speech sound from text |
US20050256716A1 (en) * | 2004-05-13 | 2005-11-17 | At&T Corp. | System and method for generating customized text-to-speech voices |
CN1929655A (en) * | 2006-09-28 | 2007-03-14 | 中山大学 | Mobile phone capable of realizing text and voice conversion |
US20070124142A1 (en) * | 2005-11-25 | 2007-05-31 | Mukherjee Santosh K | Voice enabled knowledge system |
CN102117614A (en) * | 2010-01-05 | 2011-07-06 | 索尼爱立信移动通讯有限公司 | Personalized text-to-speech synthesis and personalized speech feature extraction |
CN103327181A (en) * | 2013-06-08 | 2013-09-25 | 广东欧珀移动通信有限公司 | Voice chatting method capable of improving efficiency of voice information learning for users |
CN103761963A (en) * | 2014-02-18 | 2014-04-30 | 大陆汽车投资(上海)有限公司 | Method for processing text containing emotion information |
CN105096934A (en) * | 2015-06-30 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | Method for constructing speech feature library as well as speech synthesis method, device and equipment |
CN105989832A (en) * | 2015-02-10 | 2016-10-05 | 阿尔卡特朗讯 | Method of generating personalized voice in computer equipment and apparatus thereof |
- 2016-12-27: application CN201611223813.2A filed; patent CN106847256A (en), status Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1379391A (en) * | 2001-04-06 | 2002-11-13 | 国际商业机器公司 | Method of producing individual characteristic speech sound from text |
US20050256716A1 (en) * | 2004-05-13 | 2005-11-17 | At&T Corp. | System and method for generating customized text-to-speech voices |
US20070124142A1 (en) * | 2005-11-25 | 2007-05-31 | Mukherjee Santosh K | Voice enabled knowledge system |
CN1929655A (en) * | 2006-09-28 | 2007-03-14 | 中山大学 | Mobile phone capable of realizing text and voice conversion |
CN102117614A (en) * | 2010-01-05 | 2011-07-06 | 索尼爱立信移动通讯有限公司 | Personalized text-to-speech synthesis and personalized speech feature extraction |
CN103327181A (en) * | 2013-06-08 | 2013-09-25 | 广东欧珀移动通信有限公司 | Voice chatting method capable of improving efficiency of voice information learning for users |
CN103761963A (en) * | 2014-02-18 | 2014-04-30 | 大陆汽车投资(上海)有限公司 | Method for processing text containing emotion information |
CN105989832A (en) * | 2015-02-10 | 2016-10-05 | 阿尔卡特朗讯 | Method of generating personalized voice in computer equipment and apparatus thereof |
CN105096934A (en) * | 2015-06-30 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | Method for constructing speech feature library as well as speech synthesis method, device and equipment |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107634898A (en) * | 2017-08-18 | 2018-01-26 | 上海云从企业发展有限公司 | True man's voice information communication is realized by the chat tool on electronic communication equipment |
CN107644646A (en) * | 2017-09-27 | 2018-01-30 | 北京搜狗科技发展有限公司 | Method of speech processing, device and the device for speech processes |
CN107644646B (en) * | 2017-09-27 | 2021-02-02 | 北京搜狗科技发展有限公司 | Voice processing method and device for voice processing |
CN108062955A (en) * | 2017-12-12 | 2018-05-22 | 深圳证券信息有限公司 | A kind of intelligence report-generating method, system and equipment |
CN108494573A (en) * | 2018-03-29 | 2018-09-04 | 丁超 | Group chat method, apparatus and information terminal |
CN109246214A (en) * | 2018-09-10 | 2019-01-18 | 北京奇艺世纪科技有限公司 | A kind of prompt tone acquisition methods, device, terminal and server |
CN110111770A (en) * | 2019-05-10 | 2019-08-09 | 濮阳市顶峰网络科技有限公司 | A kind of multilingual social interpretation method of network, system, equipment and medium |
CN113163053A (en) * | 2020-01-22 | 2021-07-23 | 阿尔派株式会社 | Electronic device and play control method |
CN113163053B (en) * | 2020-01-22 | 2024-05-28 | 阿尔派株式会社 | Electronic device and play control method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106847256A (en) | A kind of voice converts chat method | |
CN103327181B (en) | Voice chatting method capable of improving efficiency of voice information learning for users | |
CN102272789B (en) | Enhanced voicemail usage through automatic voicemail preview | |
US8328089B2 (en) | Hands free contact database information entry at a communication device | |
US10425365B2 (en) | System and method for relaying messages | |
CN102782751B (en) | Digital media voice tags in social networks | |
US8433574B2 (en) | Hosted voice recognition system for wireless devices | |
US9177551B2 (en) | System and method of providing speech processing in user interface | |
US9715873B2 (en) | Method for adding realism to synthetic speech | |
US8301454B2 (en) | Methods, apparatuses, and systems for providing timely user cues pertaining to speech recognition | |
US20090055186A1 (en) | Method to voice id tag content to ease reading for visually impaired | |
CN105141510B (en) | A kind of message prompt method and device | |
US20080126491A1 (en) | Method for Transmitting Messages from a Sender to a Recipient, a Messaging System and Message Converting Means | |
US20030157968A1 (en) | Personalized agent for portable devices and cellular phone | |
AU2012212517A1 (en) | Posting to social networks by voice | |
CN106713111B (en) | Processing method for adding friends, terminal and server | |
US20130253932A1 (en) | Conversation supporting device, conversation supporting method and conversation supporting program | |
CN109389967A (en) | Voice broadcast method, device, computer equipment and storage medium | |
CN111563182A (en) | Voice conference record storage processing method and device | |
CN106558311A (en) | Voice content reminding method and device | |
US20120179551A1 (en) | Personalised Items in Mobile Devices based on User Behaviour | |
CN110265024A (en) | Requirement documents generation method and relevant device | |
US9786268B1 (en) | Media files in voice-based social media | |
US8370161B2 (en) | Responding to a call to action contained in an audio signal | |
CN112133306B (en) | Response method and device based on express delivery user and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170613 |