CN104536570A - Information processing method and device of intelligent watch - Google Patents

Information processing method and device of intelligent watch

Info

Publication number
CN104536570A
CN104536570A
Authority
CN
China
Prior art keywords
information
picture
keyword
characteristic value
contact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410834540.XA
Other languages
Chinese (zh)
Inventor
王强
郑发
陈天伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201410834540.XA priority Critical patent/CN104536570A/en
Publication of CN104536570A publication Critical patent/CN104536570A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention is applicable to the technical field of wearable intelligent equipment and provides an information processing method and device of an intelligent watch. The method includes: when it is detected that received information contains a specified keyword, obtaining a picture corresponding to the keyword according to a pre-stored first mapping list; and displaying the obtained picture. When information is received, whether it contains a keyword is determined; if so, the keyword is extracted from the information, the picture corresponding to the keyword is obtained according to the pre-stored first mapping list, and the obtained picture is displayed. In this way, young children who have difficulty reading text can understand the information that a text message is intended to convey, and the information-conveying effect of the intelligent watch is improved.

Description

Information processing method and device of an intelligent watch
Technical field
The invention belongs to the technical field of wearable intelligent equipment, and in particular relates to an information processing method and device of an intelligent watch.
Background
At present, there are many intelligent watches for children, used for a child to talk with his or her parents. Sometimes, when a child calls the parents with the intelligent watch, the parents cannot conveniently answer and can only send a text message back to the child. However, young children have difficulty reading text messages, so the information-conveying effect of the intelligent watch is poor.
Summary of the invention
In view of this, embodiments of the present invention provide an information processing method and device of an intelligent watch, to solve the problem that the information-conveying effect of existing intelligent watches is poor.
In a first aspect, an embodiment of the present invention provides an information processing method of an intelligent watch, comprising:
when it is detected that received information contains a specified keyword, obtaining a picture corresponding to the keyword according to a pre-stored first mapping list; and
displaying the obtained picture.
In a second aspect, an embodiment of the present invention provides an information processing device of an intelligent watch, comprising:
a picture acquiring unit, configured to obtain, when it is detected that received information contains a specified keyword, a picture corresponding to the keyword according to a pre-stored first mapping list; and
a picture display unit, configured to display the obtained picture.
Compared with the prior art, the embodiments of the present invention have the following beneficial effect: when information is received, whether the received information contains a keyword is determined; if so, the keyword is extracted from the information, the picture corresponding to the keyword is obtained according to the pre-stored first mapping list, and the obtained picture is displayed. In this way, a young child who has difficulty reading text can understand the information that a text message is intended to convey, and the information-conveying effect of the intelligent watch is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is an implementation flowchart of the information processing method of an intelligent watch provided by an embodiment of the present invention;
Fig. 2 is a specific implementation flowchart of step S101 of the information processing method of an intelligent watch provided by an embodiment of the present invention;
Fig. 3 is an implementation flowchart of the information processing method of an intelligent watch provided by another embodiment of the present invention;
Fig. 4 is an implementation flowchart of the information processing method of an intelligent watch provided by still another embodiment of the present invention;
Fig. 5 is a structural block diagram of the information processing device of an intelligent watch provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit the present invention.
Fig. 1 shows the implementation flow of the information processing method of the intelligent watch provided by an embodiment of the present invention, which is described in detail as follows.
In step S101, when it is detected that received information contains a specified keyword, a picture corresponding to the keyword is obtained according to a pre-stored first mapping list.
In the embodiments of the present invention, the information includes, but is not limited to, an SMS (Short Message Service) message or an instant messaging message. When information is received, whether the received information contains a keyword is determined according to a pre-stored keyword list. The keyword list may be provided by the system by default, and the user may also add keywords to the keyword list. For example, the keyword list contains the keywords "have a meal" and "sleep". If it is determined, according to the pre-stored keyword list, that the received information contains a keyword, the keyword is extracted from the information.
In step S102, the obtained picture is displayed.
For example, if the keyword extracted from the information is "have a meal", the picture corresponding to the keyword "have a meal" is obtained and displayed; if the keyword extracted from the information is "sleep", the picture corresponding to the keyword "sleep" is obtained and displayed. In the embodiments of the present invention, by representing information with a vivid picture, a young child who has difficulty reading text can understand the meaning the information is intended to convey, and the communication also becomes more interesting. The lookup flow is sketched below.
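As an illustration only, not the claimed implementation itself, the following Python sketch shows the flow of steps S101 and S102 under assumed data: the keyword list, the first mapping list entries and the display_picture call are hypothetical placeholders.

```python
from typing import Optional

# Illustrative sketch of steps S101-S102. The keyword list, mapping entries
# and display call are assumptions, not values defined in this application.
KEYWORD_LIST = ["have a meal", "sleep"]            # pre-stored keyword list

FIRST_MAPPING_LIST = {                             # keyword -> picture location
    "have a meal": "pictures/have_a_meal.png",
    "sleep": "pictures/sleep.png",
}

def extract_keyword(message: str) -> Optional[str]:
    """Return the first specified keyword contained in the message, if any."""
    for keyword in KEYWORD_LIST:
        if keyword in message:
            return keyword
    return None

def display_picture(path: str) -> None:
    print(f"displaying picture: {path}")           # placeholder for the watch's display call

def on_message_received(message: str) -> None:
    keyword = extract_keyword(message)             # step S101: detect a specified keyword
    if keyword is None:
        return                                     # no specified keyword, nothing to display
    picture = FIRST_MAPPING_LIST[keyword]          # look up the picture via the first mapping list
    display_picture(picture)                       # step S102: display the obtained picture

on_message_received("Remember to have a meal at noon")
```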
Fig. 2 shows the specific implementation flow of step S101 of the information processing method of the intelligent watch provided by an embodiment of the present invention. Referring to Fig. 2:
In step S201, the storage location of the picture corresponding to the keyword is determined according to the pre-stored first mapping list.
In step S202, the picture corresponding to the keyword is obtained according to the storage location.
In one embodiment of the present invention, to save the storage resources of the intelligent watch, the pictures corresponding to the keywords are stored in a cloud server, and the first mapping list is stored in the memory of the intelligent watch. The first mapping list records the correspondence between each keyword and the storage location of its picture in the server. When a keyword is extracted from the information, the storage location, in the server, of the picture corresponding to the keyword is determined, and the picture is then obtained from the server according to the determined storage location.
In another embodiment of the present invention, the pictures corresponding to the keywords are stored in the memory of the intelligent watch. When a keyword is extracted from the information, the storage location, in the memory, of the picture corresponding to the keyword is determined, and the picture is then obtained from the memory according to the determined storage location.
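A minimal Python sketch of steps S201 and S202 covering both embodiments above; the cloud URL and local path are hypothetical, and a real watch would use its own storage and networking interfaces.

```python
import urllib.request

# The first mapping list records, for each keyword, the storage location of its
# picture; the locations below are assumptions used only for illustration.
FIRST_MAPPING_LIST = {
    "have a meal": "https://cloud.example.com/pictures/have_a_meal.png",  # stored on a cloud server
    "sleep": "/storage/pictures/sleep.png",                               # stored in the watch's memory
}

def fetch_picture(keyword: str) -> bytes:
    location = FIRST_MAPPING_LIST[keyword]         # step S201: determine the storage location
    if location.startswith("http"):                # cloud-server embodiment
        with urllib.request.urlopen(location) as response:
            return response.read()                 # step S202: obtain the picture from the server
    with open(location, "rb") as picture_file:     # local-memory embodiment
        return picture_file.read()                 # step S202: obtain the picture from memory
```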
Fig. 3 shows the implementation flow of the information processing method of the intelligent watch provided by another embodiment of the present invention. Referring to Fig. 3:
In step S301, when it is detected that received information contains a specified keyword, a picture corresponding to the keyword is obtained according to a pre-stored first mapping list.
In step S302, the obtained picture is displayed.
In step S303, the received information is converted into voice information.
In step S304, the contact information of the sender of the information is obtained.
In step S305, the sound characteristic values corresponding to the contact information are obtained according to a pre-stored second mapping list.
In step S306, the voice information is processed according to the obtained sound characteristic values, and the processed voice information is played.
In the embodiments of the present invention, when information is received, the information is converted into voice information. The second mapping list is used to obtain the sound characteristic values corresponding to the contact information, the voice information is processed with the obtained sound characteristic values, and the processed voice information is played. The received information is thus played in the tone of the child's father or mother, so that the child feels warmer when receiving the information, and the friendliness of the intelligent watch is improved. A sketch of this flow follows.
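The following Python sketch, under stated assumptions, mirrors steps S303 to S306: the text-to-speech step and the pitch/timbre processing are crude stand-ins (a synthetic tone and a resampling-plus-gain adjustment), and the second mapping list values are invented for illustration only.

```python
import numpy as np

# Illustrative second mapping list: contact information -> sound characteristic
# values. The contacts and values are assumptions, not data from the application.
SECOND_MAPPING_LIST = {
    "mom": {"pitch_ratio": 1.15, "timbre_gain": 1.1},
    "dad": {"pitch_ratio": 0.85, "timbre_gain": 0.9},
}

def text_to_speech(message: str, rate: int = 16000) -> np.ndarray:
    """Placeholder TTS: returns a short tone whose length scales with the message."""
    duration = 0.1 * max(len(message), 1)
    t = np.linspace(0.0, duration, int(duration * rate), endpoint=False)
    return 0.1 * np.sin(2.0 * np.pi * 220.0 * t)

def apply_sound_features(samples: np.ndarray, pitch_ratio: float, timbre_gain: float) -> np.ndarray:
    """Crude processing: resampling stands in for a pitch shift, a gain for a timbre adjustment."""
    positions = np.arange(0.0, len(samples), pitch_ratio)
    resampled = np.interp(positions, np.arange(len(samples)), samples)
    return timbre_gain * resampled

def play_audio(samples: np.ndarray) -> None:
    print(f"playing {len(samples)} processed audio samples")   # placeholder for the watch's audio output

def speak_message(message: str, sender_contact: str) -> None:
    voice = text_to_speech(message)                      # step S303: convert the text to voice information
    features = SECOND_MAPPING_LIST.get(sender_contact)   # steps S304-S305: contact -> sound characteristic values
    if features is not None:
        voice = apply_sound_features(voice, features["pitch_ratio"], features["timbre_gain"])  # step S306
    play_audio(voice)                                    # step S306: play the processed voice information

speak_message("Time for dinner", "mom")
```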
Preferably, the sound characteristic values include a timbre characteristic value and a pitch characteristic value.
Here, the sound characteristic values include a timbre characteristic value and a pitch characteristic value. Timbre refers to the spectral structure of the audio, including its overtones or harmonics; pitch refers to the frequency of the audio.
Fig. 4 shows the implementation flow of the information processing method of the intelligent watch provided by still another embodiment of the present invention. Referring to Fig. 4:
In step S401, audio and contact information input by a user are received.
In step S402, the sound characteristic values corresponding to the contact information are extracted from the received audio.
In step S403, the second mapping list is created according to the contact information and the sound characteristic values corresponding to the contact information.
In step S404, when it is detected that received information contains a specified keyword, a picture corresponding to the keyword is obtained according to a pre-stored first mapping list.
In step S405, the obtained picture is displayed.
In step S406, the received information is converted into voice information.
In step S407, the contact information of the sender of the information is obtained.
In step S408, the sound characteristic values corresponding to the contact information are obtained according to the pre-stored second mapping list.
In step S409, the voice information is processed according to the obtained sound characteristic values, and the processed voice information is played.
In the embodiments of the present invention, the parents record a segment of audio with the intelligent watch in advance, so that the intelligent watch can obtain the parents' sound characteristic values. The sound characteristic values are then bound to the designated contact information in the contact list to create the second mapping list. Here, the contact list may be a mobile directory number (MDN, Mobile Directory Number) contact list or an instant messaging tool contact list, which is not limited here. A sketch of this creation step follows.
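As a further illustration under the same assumptions, the sketch below approximates steps S401 to S403: a dominant frequency is taken as a rough pitch characteristic value and a spectral centroid as a rough timbre characteristic value, and both are bound to the designated contact. A production system would use more robust voiceprint features.

```python
import numpy as np

def extract_sound_features(audio: np.ndarray, rate: int = 16000) -> dict:
    """Step S402: derive rough pitch and timbre characteristic values from recorded audio."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)
    pitch_hz = float(freqs[np.argmax(spectrum[1:]) + 1])              # dominant frequency as the pitch value
    centroid_hz = float(np.sum(freqs * spectrum) / np.sum(spectrum))  # spectral centroid as a timbre proxy
    return {"pitch_hz": pitch_hz, "timbre_centroid_hz": centroid_hz}

def create_second_mapping_list(recordings: dict) -> dict:
    """Step S403: bind each contact to the sound characteristic values extracted from its audio."""
    return {contact: extract_sound_features(audio) for contact, audio in recordings.items()}

# Step S401 stand-in: a synthetic one-second recording for the hypothetical contact "mom".
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
recordings = {"mom": np.sin(2.0 * np.pi * 210.0 * t)}
second_mapping_list = create_second_mapping_list(recordings)
print(second_mapping_list)
```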
It should be understood that, in the embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution. The execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiments of the present invention, when information is received, whether the information contains a keyword is determined; if so, the keyword is extracted from the information, the picture corresponding to the keyword is obtained according to the pre-stored first mapping list, and the obtained picture is displayed. In this way, a young child who has difficulty reading text can understand the information that a text message is intended to convey, and the information-conveying effect of the intelligent watch is improved.
Fig. 5 shows the structural block diagram of the information processing device of the intelligent watch provided by an embodiment of the present invention. The device may be used to carry out the information processing methods of the intelligent watch described in Fig. 1 to Fig. 4. For ease of description, only the parts related to the embodiments of the present invention are shown.
Referring to Fig. 5, the device comprises:
a picture acquiring unit 51, configured to obtain, when it is detected that received information contains a specified keyword, a picture corresponding to the keyword according to a pre-stored first mapping list; and
a picture display unit 52, configured to display the obtained picture.
Preferably, the picture acquiring unit 51 comprises:
a storage location determining subunit 511, configured to determine, when it is detected that received information contains a specified keyword, the storage location of the picture corresponding to the keyword according to the pre-stored first mapping list; and
a picture obtaining subunit 512, configured to obtain the picture corresponding to the keyword according to the storage location.
Preferably, the device further comprises a voice playing unit 53, and the voice playing unit 53 comprises:
a speech conversion subunit 531, configured to convert the received information into voice information;
a contact information obtaining subunit 532, configured to obtain the contact information of the sender of the information;
a sound characteristic value obtaining subunit 533, configured to obtain the sound characteristic values corresponding to the contact information according to a pre-stored second mapping list; and
a speech playing subunit 534, configured to process the voice information according to the obtained sound characteristic values and play the processed voice information.
Preferably, the sound characteristic values include a timbre characteristic value and a pitch characteristic value.
Preferably, the device further comprises a second mapping list creating unit 54, and the second mapping list creating unit 54 comprises:
an audio and contact information receiving subunit 541, configured to receive audio and contact information input by a user;
a sound characteristic value extracting subunit 542, configured to extract the sound characteristic values corresponding to the contact information from the received audio; and
a second mapping list creating subunit 543, configured to create the second mapping list according to the contact information and the sound characteristic values corresponding to the contact information.
In the embodiments of the present invention, when information is received, whether the information contains a keyword is determined; if so, the keyword is extracted from the information, the picture corresponding to the keyword is obtained according to the pre-stored first mapping list, and the obtained picture is displayed. In this way, a young child who has difficulty reading text can understand the information that a text message is intended to convey, and the information-conveying effect of the intelligent watch is improved.
A person of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be regarded as going beyond the scope of the present invention.
A person skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the intelligent watch and the units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed intelligent watch and method may be implemented in other ways. For example, the intelligent watch embodiments described above are merely illustrative. For example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be readily conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An information processing method of an intelligent watch, characterized by comprising:
when it is detected that received information contains a specified keyword, obtaining a picture corresponding to the keyword according to a pre-stored first mapping list; and
displaying the obtained picture.
2. The method according to claim 1, characterized in that the obtaining, when it is detected that received information contains a specified keyword, a picture corresponding to the keyword according to a pre-stored first mapping list specifically comprises:
when it is detected that received information contains a specified keyword, determining the storage location of the picture corresponding to the keyword according to the pre-stored first mapping list; and
obtaining the picture corresponding to the keyword according to the storage location.
3. The method according to claim 1, characterized in that, after the displaying the obtained picture, the method further comprises:
converting the received information into voice information;
obtaining the contact information of the sender of the information;
obtaining the sound characteristic values corresponding to the contact information according to a pre-stored second mapping list; and
processing the voice information according to the obtained sound characteristic values, and playing the processed voice information.
4. The method according to claim 3, characterized in that the sound characteristic values comprise a timbre characteristic value and a pitch characteristic value.
5. The method according to claim 3, characterized in that, before the converting the information into voice information, the method further comprises:
receiving audio and contact information input by a user;
extracting the sound characteristic values corresponding to the contact information from the received audio; and
creating the second mapping list according to the contact information and the sound characteristic values corresponding to the contact information.
6. An information processing device of an intelligent watch, characterized by comprising:
a picture acquiring unit, configured to obtain, when it is detected that received information contains a specified keyword, a picture corresponding to the keyword according to a pre-stored first mapping list; and
a picture display unit, configured to display the obtained picture.
7. The device according to claim 6, characterized in that the picture acquiring unit comprises:
a storage location determining subunit, configured to determine, when it is detected that received information contains a specified keyword, the storage location of the picture corresponding to the keyword according to the pre-stored first mapping list; and
a picture obtaining subunit, configured to obtain the picture corresponding to the keyword according to the storage location.
8. The device according to claim 6, characterized in that the device further comprises a voice playing unit, and the voice playing unit comprises:
a speech conversion subunit, configured to convert the received information into voice information;
a contact information obtaining subunit, configured to obtain the contact information of the sender of the information;
a sound characteristic value obtaining subunit, configured to obtain the sound characteristic values corresponding to the contact information according to a pre-stored second mapping list; and
a speech playing subunit, configured to process the voice information according to the obtained sound characteristic values and play the processed voice information.
9. The device according to claim 8, characterized in that the sound characteristic values comprise a timbre characteristic value and a pitch characteristic value.
10. The device according to claim 8, characterized in that the device further comprises a second mapping list creating unit, and the second mapping list creating unit comprises:
an audio and contact information receiving subunit, configured to receive audio and contact information input by a user;
a sound characteristic value extracting subunit, configured to extract the sound characteristic values corresponding to the contact information from the received audio; and
a second mapping list creating subunit, configured to create the second mapping list according to the contact information and the sound characteristic values corresponding to the contact information.
CN201410834540.XA 2014-12-29 2014-12-29 Information processing method and device of intelligent watch Pending CN104536570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410834540.XA CN104536570A (en) 2014-12-29 2014-12-29 Information processing method and device of intelligent watch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410834540.XA CN104536570A (en) 2014-12-29 2014-12-29 Information processing method and device of intelligent watch

Publications (1)

Publication Number Publication Date
CN104536570A true CN104536570A (en) 2015-04-22

Family

ID=52852110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410834540.XA Pending CN104536570A (en) 2014-12-29 2014-12-29 Information processing method and device of intelligent watch

Country Status (1)

Country Link
CN (1) CN104536570A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106161711A (en) * 2015-07-20 2016-11-23 合肥淘云科技有限公司 A kind of child's wrist-watch address list voice messaging generates method and system
CN106204855A (en) * 2016-08-09 2016-12-07 福建省汽车工业集团云度新能源汽车股份有限公司 A kind of Application on Voiceprint Recognition automobile accessing method
CN110635991A (en) * 2019-09-16 2019-12-31 腾讯科技(深圳)有限公司 Message processing method, message display method, storage medium, and computer device

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070050453A1 (en) * 2005-08-23 2007-03-01 Carthern Taylor C Taylor made email
CN101359473A (en) * 2007-07-30 2009-02-04 国际商业机器公司 Auto speech conversion method and apparatus
CN101400033A (en) * 2007-09-24 2009-04-01 北京三星通信技术研究有限公司 Method for patterning text content of short message and apparatus thereof
CN101453427A (en) * 2007-11-29 2009-06-10 悟空数位娱乐实业有限公司 Animation playing method for real-time communication
CN101605158A (en) * 2008-06-13 2009-12-16 鸿富锦精密工业(深圳)有限公司 Mobile phone dedicated for deaf-mutes
CN101820475A (en) * 2010-05-25 2010-09-01 拓维信息系统股份有限公司 Cell phone multimedia message generating method based on intelligent semantic understanding
CN101883339A (en) * 2010-06-22 2010-11-10 宇龙计算机通信科技(深圳)有限公司 SMS communication method, terminal and mobile terminal
CN102083020A (en) * 2009-11-27 2011-06-01 中国移动通信集团贵州有限公司 Method for configuring emoticons for short message and emoticons information system
CN102262624A (en) * 2011-08-08 2011-11-30 中国科学院自动化研究所 System and method for realizing cross-language communication based on multi-mode assistance
CN102426568A (en) * 2011-10-04 2012-04-25 上海量明科技发展有限公司 Instant messaging text information picture editing method, client and system
CN102779508A (en) * 2012-03-31 2012-11-14 安徽科大讯飞信息科技股份有限公司 Speech corpus generating device and method, speech synthesizing system and method
CN103023684A (en) * 2011-09-26 2013-04-03 腾讯科技(深圳)有限公司 Method, device and system for network information management
CN103095557A (en) * 2012-12-18 2013-05-08 上海量明科技发展有限公司 Instant messaging information voice output method and system
CN103117057A (en) * 2012-12-27 2013-05-22 安徽科大讯飞信息科技股份有限公司 Application method of special human voice synthesis technique in mobile phone cartoon dubbing
CN103716340A (en) * 2012-09-28 2014-04-09 联想(北京)有限公司 Terminal devices and information processing method
CN103747141A (en) * 2014-01-08 2014-04-23 衡阳加一电子科技有限公司 Method and system for controlling applications loaded in mobile terminal
CN103823354A (en) * 2013-11-25 2014-05-28 喻应芝 Watch
CN103838866A (en) * 2014-03-20 2014-06-04 广东小天才科技有限公司 Text transformation method and device
CN104021201A (en) * 2014-06-16 2014-09-03 辛玲 Data conversion method and device
CN104158719A (en) * 2013-05-14 2014-11-19 腾讯科技(深圳)有限公司 Information processing method and system, IM application device, and terminal

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070050453A1 (en) * 2005-08-23 2007-03-01 Carthern Taylor C Taylor made email
CN101359473A (en) * 2007-07-30 2009-02-04 国际商业机器公司 Auto speech conversion method and apparatus
CN101400033A (en) * 2007-09-24 2009-04-01 北京三星通信技术研究有限公司 Method for patterning text content of short message and apparatus thereof
CN101453427A (en) * 2007-11-29 2009-06-10 悟空数位娱乐实业有限公司 Animation playing method for real-time communication
CN101605158A (en) * 2008-06-13 2009-12-16 鸿富锦精密工业(深圳)有限公司 Mobile phone dedicated for deaf-mutes
CN102083020A (en) * 2009-11-27 2011-06-01 中国移动通信集团贵州有限公司 Method for configuring emoticons for short message and emoticons information system
CN101820475A (en) * 2010-05-25 2010-09-01 拓维信息系统股份有限公司 Cell phone multimedia message generating method based on intelligent semantic understanding
CN101883339A (en) * 2010-06-22 2010-11-10 宇龙计算机通信科技(深圳)有限公司 SMS communication method, terminal and mobile terminal
CN102262624A (en) * 2011-08-08 2011-11-30 中国科学院自动化研究所 System and method for realizing cross-language communication based on multi-mode assistance
CN103023684A (en) * 2011-09-26 2013-04-03 腾讯科技(深圳)有限公司 Method, device and system for network information management
CN102426568A (en) * 2011-10-04 2012-04-25 上海量明科技发展有限公司 Instant messaging text information picture editing method, client and system
CN102779508A (en) * 2012-03-31 2012-11-14 安徽科大讯飞信息科技股份有限公司 Speech corpus generating device and method, speech synthesizing system and method
CN103716340A (en) * 2012-09-28 2014-04-09 联想(北京)有限公司 Terminal devices and information processing method
CN103095557A (en) * 2012-12-18 2013-05-08 上海量明科技发展有限公司 Instant messaging information voice output method and system
CN103117057A (en) * 2012-12-27 2013-05-22 安徽科大讯飞信息科技股份有限公司 Application method of special human voice synthesis technique in mobile phone cartoon dubbing
CN104158719A (en) * 2013-05-14 2014-11-19 腾讯科技(深圳)有限公司 Information processing method and system, IM application device, and terminal
CN103823354A (en) * 2013-11-25 2014-05-28 喻应芝 Watch
CN103747141A (en) * 2014-01-08 2014-04-23 衡阳加一电子科技有限公司 Method and system for controlling applications loaded in mobile terminal
CN103838866A (en) * 2014-03-20 2014-06-04 广东小天才科技有限公司 Text transformation method and device
CN104021201A (en) * 2014-06-16 2014-09-03 辛玲 Data conversion method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王南阳: 《单片优质语音录放集成电路应用手册》 (Application Manual of Single-Chip High-Quality Voice Recording and Playback Integrated Circuits), 31 January 2006 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106161711A (en) * 2015-07-20 2016-11-23 合肥淘云科技有限公司 A kind of child's wrist-watch address list voice messaging generates method and system
CN106204855A (en) * 2016-08-09 2016-12-07 福建省汽车工业集团云度新能源汽车股份有限公司 A kind of Application on Voiceprint Recognition automobile accessing method
CN110635991A (en) * 2019-09-16 2019-12-31 腾讯科技(深圳)有限公司 Message processing method, message display method, storage medium, and computer device

Similar Documents

Publication Publication Date Title
CN104994401A (en) Barrage processing method, device and system
CN107256707B (en) Voice recognition method, system and terminal equipment
US8612226B1 (en) Determining advertisements based on verbal inputs to applications on a computing device
CN106601254B (en) Information input method and device and computing equipment
EP2763135A1 (en) Voice recognition apparatus and method for providing response information
US9848333B2 (en) Method supporting wireless access to storage device, and mobile routing hotspot device
CN105025319A (en) Video pushing method and device
CN104239458A (en) Method and device for representing search results
CN103763337A (en) Mobile terminal, server and corresponding methods
CN105469789A (en) Voice information processing method and voice information processing terminal
US20200168211A1 (en) Information Processing Method, Server, Terminal, and Information Processing System
US10652185B2 (en) Information sending method and information sending apparatus
CN111490927A (en) Method, device and equipment for displaying message
CN104503994A (en) Information recommendation method and device based on input method
CN105262878A (en) Processing method of automatic call recording and mobile terminal
CN105516472A (en) Information processing method and electronic apparatus
CN105100449A (en) Picture sharing method and mobile terminal
CN103763303A (en) Method and device for drama series playing
CN104536570A (en) Information processing method and device of intelligent watch
CN104583924A (en) Method and terminal for processing media file
CN106506325A (en) Picture sharing method and device
CN103488784A (en) Method for recommending multimedia files and electronic device
CN104125334A (en) Information processing method and electronic equipment
CN105007565A (en) Loss-prevention method and device for wearable intelligent device
CN104200826A (en) Audio signal playing method for mobile terminal and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150422

RJ01 Rejection of invention patent application after publication