CN104053131A - Text communication information processing method and related equipment - Google Patents

Text communication information processing method and related equipment

Info

Publication number
CN104053131A
CN104053131A
Authority
CN
China
Prior art keywords
emoticon
communication information
text communication
content
image recognition
Prior art date
Legal status
Pending
Application number
CN201310078302.6A
Other languages
Chinese (zh)
Inventor
南万青
周洪凯
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201310078302.6A
Publication of CN104053131A
Legal status: Pending

Landscapes

  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a text communication information processing method and related equipment that enable a user who receives text communication information to understand the content of the text communication information more accurately. The method comprises the following steps: text communication information edited by a user is obtained; if the text communication information contains an emoticon, voice information corresponding to the emoticon is obtained according to the emoticon in the text communication information, the voice information comprising tone, intonation, timbre and volume; a corresponding voice is determined according to the content of the text communication information and the voice information; and the content of the text communication information is converted into the voice accordingly.

Description

Text communication information processing method and related device
Technical field
The present invention relates to the field of communication technology, and in particular to a text communication information processing method and related device.
Background technology
With the development of mobile communication technology, communication terminals (such as mobile phones, notebook computers, tablet computers and handheld Internet devices) have become an indispensable part of people's lives and bring convenience to daily life. Taking the mobile phone as an example, a mobile phone user (hereinafter referred to as the first user) can usually communicate with another user he or she wishes to contact (hereinafter referred to as the second user) directly by dialing the second user's telephone number. If the second user is not in a position to answer a call, for example while working, in a meeting or resting, the first user is accustomed to sending a text message (SMS) to the second user to express the content of the communication: the first user edits the text message on his or her own mobile phone and sends it through the mobile gateway to the second user's mobile phone, where the second user can open and read it.
However, this form of transmitting information by text message carries a relatively small amount of information, and the second user (the user receiving the text message) cannot directly and accurately perceive the real feelings of the first user (the user sending the text message), and may misread the meaning of the message depending on his or her own mood when reading it.
Summary of the invention
The embodiments of the present invention provide a text communication information processing method and related device, which enable a user who receives text communication information to understand the content of the text communication information more accurately.
In view of this, a first aspect of the present invention provides a text communication information processing method, which may comprise:
obtaining text communication information edited by a user;
if the text communication information contains an emoticon, obtaining, according to the emoticon in the text communication information, voice information corresponding to the emoticon, the voice information comprising tone, intonation, timbre and volume;
determining a corresponding voice according to the content of the text communication information and the voice information; and
converting the content of the text communication information into the voice accordingly.
In a first possible implementation of the first aspect, before the voice information corresponding to the emoticon is obtained, the method further comprises:
recording, at a preset time interval, facial images of the user while the text communication information is being edited;
performing image recognition on the facial images;
obtaining, according to a first preset rule, a first emoticon corresponding to the result of the image recognition; and
marking, according to a second preset rule, the first emoticon at the corresponding content position of the text communication information.
With reference to the first possible implementation of the first aspect, in a second possible implementation, obtaining, according to the first preset rule, the first emoticon corresponding to the result of the image recognition comprises:
obtaining, according to the result of the image recognition, a facial expression category for the result of the image recognition; and
determining, from the facial expression category, the first emoticon corresponding to the result of the image recognition.
With reference to the first possible implementation of the first aspect, in a third possible implementation,
before the image recognition is performed on the facial images, the method further comprises:
obtaining the time at which the facial image was recorded; and
marking, according to the second preset rule, the first emoticon at the corresponding content position of the text communication information is:
marking, according to the time at which the facial image was recorded, the first emoticon at the content position of the text communication information corresponding to that time.
With reference to the first possible implementation of the first aspect, in a fourth possible implementation, marking, according to the second preset rule, the first emoticon at the corresponding content position of the text communication information comprises:
marking, according to the sentence structure of the content of the text communication information, the first emoticon at the end of a sentence of that content.
With reference to any one of the first to fourth possible implementations of the first aspect, in a fifth possible implementation, before the voice information corresponding to the emoticon is obtained according to the emoticon in the text communication information, the method comprises:
if a second emoticon directly input by the user is marked at the content position where the first emoticon is marked in the text communication information, replacing the first emoticon with the second emoticon.
A second aspect of the present invention provides a text communication information processing device, which may comprise:
a first acquisition module, configured to obtain text communication information edited by a user;
the first acquisition module being further configured to, if the text communication information contains an emoticon, obtain, according to the emoticon in the text communication information, voice information corresponding to the emoticon, the voice information comprising tone, intonation, timbre and volume;
a determination module, configured to determine a corresponding voice according to the content of the text communication information and the voice information; and
a conversion module, configured to convert the content of the text communication information into the voice accordingly.
In a first possible implementation of the second aspect, the device further comprises:
a recording module, configured to record, at a preset time interval, facial images of the user while the text communication information is being edited;
an image processing module, configured to perform image recognition on the facial images and obtain, according to a first preset rule, a first emoticon corresponding to the result of the image recognition; and
a marking module, configured to mark, according to a second preset rule, the first emoticon at the corresponding content position of the text communication information.
With reference to the first possible implementation of the second aspect, in a second possible implementation,
the image processing module is specifically configured to perform image recognition on the facial images, obtain, according to the result of the image recognition, a facial expression category for the result of the image recognition, and determine, from the facial expression category, the first emoticon corresponding to the result of the image recognition.
With reference to the first possible implementation of the second aspect, in a third possible implementation, the device further comprises:
a second acquisition module, configured to obtain the time at which the facial image was recorded; and
the marking module is specifically configured to mark, according to the time at which the facial image was recorded, the first emoticon at the content position of the text communication information corresponding to that time.
With reference to the first possible implementation of the second aspect, in a fourth possible implementation,
the marking module is specifically configured to mark, according to the sentence structure of the content of the text communication information, the first emoticon at the end of a sentence of that content.
With reference to any one of the first to fourth possible implementations of the second aspect, in a fifth possible implementation, the device further comprises:
a replacement module, configured to, if a second emoticon directly input by the user is marked at the content position where the first emoticon is marked in the text communication information, replace the first emoticon with the second emoticon.
It can be seen from the above technical solutions that the text communication information processing method and related device provided by the embodiments of the present invention convert the content of text communication information into a corresponding voice according to the emoticon in the text communication information, so that the real feelings of the editing user can be perceived more directly and accurately, misreading of the meaning of the communication information is avoided, and the user experience is improved.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a text communication information processing method provided by an embodiment of the present invention;
Fig. 2 is another schematic flowchart of the text communication information processing method provided by an embodiment of the present invention;
Fig. 3 is another schematic flowchart of the text communication information processing method provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a text communication information processing device provided by an embodiment of the present invention;
Fig. 5 is another schematic structural diagram of the text communication information processing device provided by an embodiment of the present invention;
Fig. 6 is another schematic structural diagram of the text communication information processing device provided by an embodiment of the present invention;
Fig. 7 is another schematic structural diagram of the text communication information processing device provided by an embodiment of the present invention.
Detailed description of embodiments
The embodiments of the present invention provide a text communication information processing method and related device, which enable a user who receives text communication information to understand the content of the text communication information more accurately.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Detailed descriptions are given below.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a text communication information processing method provided by an embodiment of the present invention. The method comprises:
S101: obtain text communication information edited by a user.
It can be understood that, in the embodiments of the present invention, the text communication information may be a text message (SMS), a microblog post, an e-mail or other text content of instant communication, which is not specifically limited here, and the method may be executed by a mobile communication terminal (such as a mobile phone), a notebook computer, a tablet computer or the like.
S102: if the text communication information contains an emoticon, obtain, according to the emoticon marked in the text communication information, voice information corresponding to the emoticon, the voice information comprising tone, intonation, timbre and volume.
S103: determine a corresponding voice according to the content of the text communication information and the voice information.
That is, the voice corresponding to the text communication information is determined according to the content of the text communication information and the voice information.
S104: convert the content of the text communication information into the voice accordingly.
It can be seen from the above that, in the text communication information processing method provided by this embodiment of the present invention, voice information corresponding to the emoticon contained in the text communication information is obtained, and the corresponding voice conversion is determined according to the content of the text communication information and the voice information, so that the real feelings of the editing user can be perceived more directly and accurately, misreading of the meaning of the communication information is avoided, and the user experience is improved.
Next, how the emoticon marked in the text communication information is obtained is analyzed in detail, taking a specific application scenario of the text communication information processing method provided by the embodiments of the present invention as an example:
Referring to Fig. 2, Fig. 2 is another schematic flowchart of the text communication information processing method provided by an embodiment of the present invention. The method comprises:
S201: obtain text communication information edited by a user.
In this embodiment, the text communication information may be a text message exchanged between mobile phone users. For ease of understanding and description, the user who edits the text message is referred to as the editing user, and the user who receives the text message is referred to as the receiving user.
S202: record, at a preset time interval, facial images of the user while the text communication information is being edited.
In some embodiments, if the text communication information is a text message exchanged between mobile phone users, facial images of the editing user may be recorded by a camera at a preset time interval while the text message is being edited.
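Purely as an illustration of this capture step, the following Python sketch records one facial image per preset interval while editing is in progress; the capture_frame stand-in, the is_editing callback and the two-second default interval are assumptions rather than details fixed by this embodiment.

```python
import time

def capture_frame():
    # Hypothetical stand-in for the terminal's camera API; a real device would
    # return the current camera frame here.
    return b""

def record_face_images(is_editing, interval_s=2.0):
    """Record one facial image per preset interval while the user edits (S202).

    is_editing is a callable returning True while the editing session is open.
    Each frame is stored with its capture time so that the marking step can
    later align the derived emoticon with the text written around that moment.
    """
    images = []
    while is_editing():
        images.append({"time": time.time(), "image": capture_frame()})
        time.sleep(interval_s)
    return images
```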
S203: perform image recognition on the facial images.
S204: obtain, according to a first preset rule, a first emoticon corresponding to the result of the image recognition.
Image recognition is performed on the recorded facial images, and the first emoticon corresponding to the result of the image recognition is obtained. In some embodiments, this may specifically be: obtaining, according to the result of the image recognition, a facial expression category for the result of the image recognition, and determining, from the facial expression category, the first emoticon corresponding to the result of the image recognition. For example, the facial expression categories may include happy, excited, dejected, angry and so on. One optional implementation of determining the first emoticon from the facial expression category is as follows: the mobile phone terminal performs image recognition on the facial image, obtains the facial expression category according to the image recognition result, and then looks up a preset correspondence table between image recognition results and first emoticons to determine, within the facial expression category, the first emoticon corresponding to the result of the image recognition. Alternatively, in this embodiment of the present invention, facial expression information may be obtained according to the image recognition result and the facial expression category determined from it, so as to determine, within the facial expression category, the first emoticon corresponding to the facial expression information.
It will be readily appreciated that the correspondence table between image recognition results and first emoticons may be pre-stored on the mobile phone terminal. In addition, in this embodiment of the present invention, the correspondence between image recognition results and first emoticons may also be recorded by means of an Extensible Markup Language (XML) configuration file, which is not specifically limited here.
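As a rough sketch of this lookup (and not the patent's own table), the following Python fragment maps a facial expression category produced by image recognition to a first emoticon through a preset correspondence table; the category names echo the examples above, and the emoticon strings are placeholders.

```python
# Illustrative correspondence table (first preset rule): facial expression
# category -> first emoticon. The emoticon strings are placeholders; an
# implementation could equally load this table from an XML configuration file.
EXPRESSION_TO_EMOTICON = {
    "happy":    ":-)",
    "excited":  ":-D",
    "dejected": ":-(",
    "angry":    ">:-(",
}

def first_emoticon(expression_category):
    """Steps S203/S204: after image recognition yields an expression category,
    look up the corresponding first emoticon; None means no match was found."""
    return EXPRESSION_TO_EMOTICON.get(expression_category)
```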
It can be understood that, in this implementation, the first preset rule for obtaining the first emoticon is to determine the first emoticon corresponding to the image recognition result.
S205: mark, according to a second preset rule, the first emoticon at the corresponding content position of the text communication information.
In one embodiment, before the first emoticon corresponding to the result of the image recognition is obtained according to the first preset rule (step S204), the method may further comprise: obtaining the time at which the facial image was recorded. Step S205 may then specifically be: marking, according to the time at which the facial image was recorded, the first emoticon at the content position of the text communication information corresponding to that time.
It can be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the time at which the facial image was recorded.
In another embodiment, step S205 may specifically be: marking, according to the sentence structure of the content of the text communication information, the first emoticon at the end of a sentence of that content.
It can be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the sentence structure of the content of the text communication information.
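A minimal sketch of the sentence-boundary variant of this second preset rule is given below; it assumes the first emoticon belongs to the sentence most recently completed, and the sentence-splitting pattern is an assumption rather than something prescribed by this embodiment.

```python
import re

def mark_at_sentence_end(text, first_emoticon):
    """Second preset rule, sentence-boundary variant (S205): append the first
    emoticon after the last completed sentence of the edited text. Splitting on
    common Chinese and Western sentence terminators is a simplification."""
    sentences = [s for s in re.split(r"(?<=[。！？.!?])", text) if s]
    if not sentences:
        return text
    sentences[-1] += first_emoticon
    return "".join(sentences)
```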
S206: obtain, according to the emoticon in the text communication information, voice information corresponding to the emoticon.
In this embodiment, the voice information may comprise tone, intonation, timbre, volume and so on.
S207: determine a corresponding voice according to the content of the text communication information and the voice information.
It can be understood that one optional implementation of determining the corresponding voice according to the content of the text communication information and the voice information is: generating, according to the tone, intonation, timbre, volume and so on, the voice corresponding to the content of the text communication information and the voice information, so that the receiving user can perceive the editing user's expression in terms of timbre, tone and intonation, which makes the voice more distinctive.
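The following sketch illustrates one way such voice information could be attached to an emoticon and handed to a speech synthesiser; the parameter values are placeholders, and tts_engine stands for a hypothetical synthesis interface, not an API defined by this embodiment.

```python
# Illustrative voice-information table (S206): the tone, intonation, timbre and
# volume values attached to each emoticon are placeholders.
EMOTICON_VOICE_INFO = {
    ":-)": {"tone": "cheerful", "intonation": "rising",  "timbre": "bright", "volume": 0.8},
    ":-(": {"tone": "subdued",  "intonation": "falling", "timbre": "soft",   "volume": 0.4},
}

def convert_to_voice(text, emoticon, tts_engine):
    """S207/S208: select the voice information for the marked emoticon and pass
    it, together with the text content, to the synthesiser. tts_engine is a
    hypothetical object exposing speak(text, **voice_info)."""
    voice_info = EMOTICON_VOICE_INFO.get(emoticon, {})
    return tts_engine.speak(text, **voice_info)
```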
S208: convert the content of the text communication information into the voice accordingly.
It should be noted that the text communication information processing method may be applied at the editing user side, at the receiving user side, or at a third party such as an intermediary in the transmission process (for example, the cloud), which is not specifically limited here.
It can be seen from the above that, in the text communication information processing method provided by this embodiment of the present invention, facial images of the user while editing the text communication information are recorded and subjected to image recognition, the emoticon corresponding to the result of the image recognition is obtained and marked in the text communication information; thereafter, voice information corresponding to the emoticon contained in the text communication information is obtained, and the corresponding voice conversion is determined according to the content of the text communication information and the voice information, so that the receiving user can better perceive the editing user's expression in terms of timbre, tone and intonation, perceive the user's real feelings more directly and accurately, avoid misreading the meaning of the communication information, and enjoy an improved user experience.
Next, how the emoticon marked in the text communication information is obtained is analyzed in detail, taking another specific application scenario of the text communication information processing method provided by the embodiments of the present invention as an example:
Referring to Fig. 3, Fig. 3 is another schematic flowchart of the text communication information processing method provided by an embodiment of the present invention. The method comprises:
S301: obtain text communication information edited by a user.
In this embodiment, the text communication information may be a text message exchanged between mobile phone users. For ease of understanding and description, the user who edits the text message is referred to as the editing user, and the user who receives the text message is referred to as the receiving user.
S302: record, at a preset time interval, facial images of the user while the text communication information is being edited.
In some embodiments, if the text communication information is a text message exchanged between mobile phone users, facial images of the editing user may be recorded by a camera at a preset time interval while the text message is being edited.
S303: perform image recognition on the facial images.
S304: obtain, according to a first preset rule, a first emoticon corresponding to the result of the image recognition.
One optional implementation of performing image recognition on the recorded facial images and obtaining the first emoticon corresponding to the result of the image recognition is: obtaining, according to the result of the image recognition, a facial expression category for the result of the image recognition, and determining, from the facial expression category, the first emoticon corresponding to the result of the image recognition. For example, the facial expression categories may include happy, excited, dejected, angry and so on. In some embodiments, this may specifically be: the mobile phone terminal performs image recognition on the facial image, obtains the facial expression category according to the image recognition result, and then looks up a preset correspondence table between image recognition results and first emoticons to determine, within the facial expression category, the first emoticon corresponding to the result of the image recognition. In some embodiments, facial expression information may also be obtained according to the image recognition result and the facial expression category determined from it, so as to determine, within the facial expression category, the first emoticon corresponding to the facial expression information, which is not specifically limited here.
It will be readily appreciated that the correspondence table between image recognition results and first emoticons may be preset in the mobile phone terminal. In addition, in this embodiment of the present invention, the correspondence between image recognition results and first emoticons may also be recorded by means of an Extensible Markup Language configuration file, which is not specifically limited here.
It can be understood that, in this implementation, the first preset rule for obtaining the first emoticon is to determine the first emoticon corresponding to the image recognition result.
S305: mark, according to a second preset rule, the first emoticon at the corresponding content position of the text communication information.
In one embodiment, before the first emoticon corresponding to the result of the image recognition is obtained according to the first preset rule (step S304), the method may further comprise: obtaining the time at which the facial image was recorded. Step S305 may then specifically be: marking, according to the time at which the facial image was recorded, the first emoticon at the content position of the text communication information corresponding to that time. It can further be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the time at which the facial image was recorded.
In another embodiment, step S305 may specifically be: marking, according to the sentence structure of the content of the text communication information, the first emoticon at the end of a sentence of that content. It can further be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the sentence structure of the content of the text communication information.
Preferably, in this application scenario, before the voice information corresponding to the emoticon is obtained according to the emoticon in the text communication information, the method further comprises:
S306: if a second emoticon directly input by the user is marked at the content position where the first emoticon is marked in the text communication information, replace the first emoticon with the second emoticon.
That is, it needs to be determined whether a second emoticon directly and manually marked by the user while editing the text message is present at the content position where the first emoticon is marked. If such a second emoticon directly input by the user is marked at that position in the text communication information, the first emoticon is replaced with the second emoticon; as can be expected, if no second emoticon directly input by the user is marked at that position, the first emoticon is retained. In other words, the second emoticon directly and manually marked by the user while editing the text message takes priority in the text communication information.
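A minimal sketch of this replacement rule, assuming the marks collected so far are kept as a list of position records (a data layout chosen here for illustration only):

```python
def resolve_marks(marks):
    """S306: at each marked content position, a second emoticon directly input
    by the user takes priority; otherwise the automatically derived first
    emoticon is retained. `marks` is a hypothetical list of dicts such as
    {"position": 12, "first": ":-)", "second": None}."""
    return {
        m["position"]: m["second"] if m["second"] is not None else m["first"]
        for m in marks
    }
```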
S307: obtain, according to the emoticon in the text communication information, voice information corresponding to the emoticon.
In this embodiment, the voice information may comprise tone, intonation, timbre and volume.
S308: determine a corresponding voice according to the content of the text communication information and the voice information.
S309: convert the content of the text communication information into the voice accordingly.
In one implementation, if the first emoticon has been marked at the corresponding content position of the text communication information according to steps S302 to S305, and only the first emoticon is marked in the text communication information, steps S307 to S309 may be: obtaining, according to the first emoticon in the text communication information, the voice information corresponding to the first emoticon, determining the corresponding voice according to the content of the text communication information and the voice information, and converting the content of the text communication information into the voice accordingly.
In another implementation, if the first emoticon has been marked at the corresponding content position of the text communication information according to steps S302 to S305, and a second emoticon directly input by the user is also marked in the text communication information, steps S307 to S309 may be: obtaining, according to the first emoticon in the text communication information, the voice information corresponding to the first emoticon, obtaining, according to the second emoticon in the text communication information, the voice information corresponding to the second emoticon, determining the corresponding voice according to the content of the text communication information and the obtained voice information, and finally converting the content of the text communication information into the voice accordingly.
It will also readily be appreciated that this embodiment includes a further implementation in which the text communication information is not marked with a first emoticon converted from the facial images of the user recorded while editing, but is only marked with a second emoticon directly input by the user. In that case, steps S307 to S309 may be: obtaining, according to the second emoticon in the text communication information, the voice information corresponding to the second emoticon, determining the corresponding voice according to the content of the text communication information and the obtained voice information, and converting the content of the text communication information into the voice accordingly.
It can be understood that, in this embodiment of the present invention, one optional implementation of determining the corresponding voice according to the content of the text communication information and the voice information is: generating, according to the tone, intonation, timbre, volume and so on, the voice corresponding to the content of the text communication information and the voice information, so that the receiving user can perceive the editing user's expression in terms of timbre, tone and intonation, which makes the voice more distinctive.
It should be noted that the text communication information processing method may be applied at the editing user side, at the receiving user side, or at a third party such as an intermediary in the transmission process (for example, the cloud), which is not specifically limited here.
It can be seen from the above that, in the text communication information processing method provided by this embodiment of the present invention, the content of the text communication information is converted into a corresponding voice according to the emoticons contained in it, where the emoticons marked in the text communication information may include emoticons obtained by recognizing the facial images of the user recorded while the text communication information was being edited, as well as emoticons directly input by the user, so that the receiving user can better perceive the editing user's expression in terms of timbre, tone and intonation, perceive the user's real feelings more directly and accurately, avoid misreading the meaning of the communication information, and enjoy an improved user experience.
To facilitate better implementation of the technical solutions of the embodiments of the present invention, the embodiments of the present invention also provide related devices for implementing the above text communication information processing method. The terms used have the same meanings as in the above method, and reference may be made to the descriptions in the method embodiments for specific implementation details.
An embodiment of the present invention provides a text communication information processing device 400, which is suitable for the text communication information processing method in the above embodiments. Referring to Fig. 4, Fig. 4 is a schematic structural diagram of the text communication information processing device 400 provided by an embodiment of the present invention. The text communication information processing device 400 comprises:
a first acquisition module 401, configured to obtain text communication information edited by a user;
the first acquisition module 401 being further configured to, if the text communication information contains an emoticon, obtain, according to the emoticon in the text communication information, voice information corresponding to the emoticon, the voice information comprising tone, intonation, timbre and volume;
a determination module 402, configured to determine a corresponding voice according to the content of the text communication information and the voice information; and
a conversion module 403, configured to convert the content of the text communication information into the voice accordingly.
It can be understood that, in the embodiments of the present invention, the text communication information may be a text message (SMS), a microblog post, an e-mail or other text content of instant communication, which is not specifically limited here, and the communication information processing device may be a mobile communication terminal (such as a mobile phone), a notebook computer, a tablet computer or the like.
It can be seen from the above that the text communication information processing device 400 provided by this embodiment of the present invention converts the content of the text communication information into a corresponding voice according to the emoticon in the text communication information, so that the user's real feelings can be perceived more directly and accurately, misreading of the meaning of the communication information is avoided, and the user experience is improved.
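Purely to illustrate how the modules 401 to 403 described above might cooperate, the following sketch wires hypothetical module objects together; the method names on those objects are assumptions, not an interface defined by this embodiment.

```python
class TextCommunicationInfoDevice:
    """Minimal composition sketch of device 400 built from stand-ins for the
    first acquisition module 401, determination module 402 and conversion
    module 403."""

    def __init__(self, acquisition, determination, conversion):
        self.acquisition = acquisition
        self.determination = determination
        self.conversion = conversion

    def process(self):
        text_info = self.acquisition.get_text_communication_info()
        # Voice information is only obtained when the text contains an emoticon.
        voice_info = self.acquisition.get_voice_info(text_info)
        if voice_info is None:
            return None
        voice = self.determination.determine_voice(text_info, voice_info)
        return self.conversion.convert(text_info, voice)
```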
In some embodiments, the text communication information processing device 400 may further comprise a recording module 404, an image processing module 405 and a marking module 406; reference may be made to Fig. 5, which is another schematic structural diagram of the text communication information processing device 400 provided by an embodiment of the present invention. The recording module 404 is configured to record, at a preset time interval, facial images of the user while the text communication information is being edited; the image processing module 405 is configured to perform image recognition on the facial images and obtain, according to a first preset rule, a first emoticon corresponding to the result of the image recognition; and the marking module 406 is configured to mark, according to a second preset rule, the first emoticon at the corresponding content position of the text communication information.
Further, the image processing module 405 may be specifically configured to perform image recognition on the facial images, obtain, according to the result of the image recognition, a facial expression category for the result of the image recognition, and determine, from the facial expression category, the first emoticon corresponding to the result of the image recognition.
In some embodiments, one optional implementation of obtaining the first emoticon corresponding to the result of the image recognition is: the mobile phone terminal performs image recognition on the facial image, obtains the facial expression category according to the image recognition result, and then looks up a preset correspondence table between image recognition results and first emoticons to determine, within the facial expression category, the first emoticon corresponding to the result of the image recognition. It can further be understood that the correspondence table between image recognition results and first emoticons may be preset in the mobile phone terminal; in this implementation, the first preset rule for obtaining the first emoticon is to determine the first emoticon corresponding to the image recognition result.
With regard to marking, according to the second preset rule, the first emoticon at the corresponding content position of the text communication information, in one embodiment the text communication information processing device 400 may further comprise a second acquisition module, configured to obtain, before image recognition is performed on the facial images, the time at which the facial image was recorded. In this implementation, the marking module 406 may be specifically configured to mark, according to the time at which the facial image was recorded, the first emoticon at the content position of the text communication information corresponding to that time. It can be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the time at which the facial image was recorded.
In another embodiment, the marking module 406 may be specifically configured to mark, according to the sentence structure of the content of the text communication information, the first emoticon at the end of a sentence of that content. It can be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the sentence structure of the content of the text communication information.
Preferably, in some embodiments, before the first acquisition module 401 obtains, according to the emoticon in the text communication information, the voice information corresponding to the emoticon, the emoticon marked in the text communication information may also be processed. In this implementation, the text communication information processing device 400 may further comprise a replacement module 407; reference may be made to Fig. 6, which is another schematic structural diagram of the text communication information processing device 400 provided by an embodiment of the present invention. The replacement module 407 is configured to, if a second emoticon directly input by the user is marked at the content position where the first emoticon is marked in the text communication information, replace the first emoticon with the second emoticon.
In some embodiments, the determination module 402 may be specifically configured to generate, according to the tone, intonation, timbre and volume, the voice corresponding to the content of the text communication information and the voice information, so that the receiving user can perceive the editing user's expression in terms of timbre, tone and intonation, which makes the voice more distinctive.
It should be noted that, in one implementation, only the first emoticon may be marked in the text communication information; in that case, the voice information corresponding to the first emoticon is obtained according to the first emoticon in the text communication information, the corresponding voice is determined according to the content of the text communication information and the voice information, and the content of the text communication information is converted into the voice accordingly.
In some embodiments, the first emoticon and the second emoticon may both be marked in the text communication information; in that case, the first emoticon and the second emoticon may be processed separately: the voice information corresponding to the first emoticon is obtained according to the first emoticon in the text communication information, the voice information corresponding to the second emoticon is obtained according to the second emoticon in the text communication information, the corresponding voice is determined according to the content of the text communication information and the obtained voice information, and finally the content of the text communication information is converted into the voice accordingly.
It will be readily appreciated that, in some embodiments, only the second emoticon may be marked in the text communication information; in that case, the voice information corresponding to the second emoticon is obtained according to the second emoticon in the text communication information, the corresponding voice is determined according to the content of the text communication information and the obtained voice information, and the content of the text communication information is converted into the voice accordingly.
It should be noted that the text communication information processing device 400 may be applied at the editing user side, at the receiving user side, or at a third party such as an intermediary in the transmission process (for example, the cloud), which is not specifically limited here.
It can be understood that, in the text communication information processing device 400 provided by this embodiment of the present invention, the functions of the functional modules may be implemented according to the methods in the above method embodiments; for the specific implementation process, reference may be made to the related descriptions of the above method embodiments, which are not repeated here.
It can be seen from the above that the text communication information processing device 400 provided by this embodiment of the present invention converts the content of the text communication information into a corresponding voice according to the emoticons contained in it, where the emoticons marked in the text communication information may include emoticons obtained by recognizing the facial images of the user recorded while the text communication information was being edited, as well as emoticons directly input by the user, so that the receiving user can better perceive the editing user's expression in terms of timbre, tone and intonation, perceive the user's real feelings more directly and accurately, avoid misreading the meaning of the communication information, and enjoy an improved user experience.
An embodiment of the present invention also provides a text communication information processing device 700, which is suitable for the text communication information processing method in the above embodiments. Referring to Fig. 7, Fig. 7 is a schematic structural diagram of the text communication information processing device 700 provided by an embodiment of the present invention. The device 700 comprises an input unit 701, an output device 703 and a processor 702, wherein the processor 702 performs the following steps: obtaining text communication information edited by a user; if the text communication information contains an emoticon, obtaining, according to the emoticon in the text communication information, voice information corresponding to the emoticon, the voice information comprising tone, intonation, timbre and volume; determining a corresponding voice according to the content of the text communication information and the voice information; and converting the content of the text communication information into the voice accordingly.
In some embodiments, the processor 702 may also perform the following steps: recording, at a preset time interval, facial images of the user while the text communication information is being edited; performing image recognition on the facial images and obtaining, according to a first preset rule, a first emoticon corresponding to the result of the image recognition; and marking, according to a second preset rule, the first emoticon at the corresponding content position of the text communication information.
The step in which the processor 702 performs image recognition on the facial images and obtains, according to the first preset rule, the first emoticon corresponding to the result of the image recognition may specifically be: performing image recognition on the facial images, obtaining, according to the result of the image recognition, a facial expression category for the result of the image recognition, and determining, from the facial expression category, the first emoticon corresponding to the result of the image recognition. It can further be understood that, in this implementation, the first preset rule for obtaining the first emoticon is to determine the first emoticon corresponding to the image recognition result.
In some embodiments, the processor 702 may also perform the following step: obtaining the time at which the facial image was recorded. Marking, according to the second preset rule, the first emoticon at the corresponding content position of the text communication information may then specifically be: marking, according to the time at which the facial image was recorded, the first emoticon at the content position of the text communication information corresponding to that time. It can be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the time at which the facial image was recorded.
In some embodiments, marking, according to the second preset rule, the first emoticon at the corresponding content position of the text communication information may specifically be: marking, according to the sentence structure of the content of the text communication information, the first emoticon at the end of a sentence of that content. It can further be understood that, in this implementation, the second preset rule for marking the first emoticon is to mark the first emoticon according to the sentence structure of the content of the text communication information.
In some embodiments, the processor 702 may also perform the following step: if a second emoticon directly input by the user is marked at the content position where the first emoticon is marked in the text communication information, replacing the first emoticon with the second emoticon.
It should be noted that the text communication information processing device 700 may be applied at the editing user side, at the receiving user side, or at a third party such as an intermediary in the transmission process (for example, the cloud), which is not specifically limited here.
It can be understood that, in the text communication information processing device 700 provided by this embodiment of the present invention, the processor 702 may operate according to the methods in the above method embodiments; for the specific implementation process, reference may be made to the related descriptions of the above method embodiments, which are not repeated here.
It can be seen from the above that the text communication information processing device 700 provided by this embodiment of the present invention converts the content of the text communication information into a corresponding voice according to the emoticons contained in it, where the emoticons marked in the text communication information may include emoticons obtained by recognizing the facial images of the user recorded while the text communication information was being edited, as well as emoticons directly input by the user, so that the receiving user can better perceive the editing user's expression in terms of timbre, tone and intonation, perceive the user's real feelings more directly and accurately, avoid misreading the meaning of the communication information, and enjoy an improved user experience.
A person skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the devices described above and the unit modules in those devices, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic; for example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The text communication information processing method and related devices provided by the present invention have been described in detail above. A person of ordinary skill in the art may make changes to the specific implementations and application scope according to the ideas of the embodiments of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (12)

1. A text communication information processing method, characterized by comprising:
obtaining text communication information edited by a user;
if the text communication information contains an emoticon, obtaining, according to the emoticon in the text communication information, voice information corresponding to the emoticon, the voice information comprising tone, intonation, timbre and volume;
determining a corresponding voice according to the content of the text communication information and the voice information; and
converting the content of the text communication information into the voice accordingly.
2. The method according to claim 1, characterized in that, before the voice information corresponding to the emoticon is obtained, the method further comprises:
recording, at a preset time interval, facial images of the user while the text communication information is being edited;
performing image recognition on the facial images;
obtaining, according to a first preset rule, a first emoticon corresponding to the result of the image recognition; and
marking, according to a second preset rule, the first emoticon at the corresponding content position of the text communication information.
3. The method according to claim 2, characterized in that obtaining, according to the first preset rule, the first emoticon corresponding to the result of the image recognition comprises:
obtaining, according to the result of the image recognition, a facial expression category for the result of the image recognition; and
determining, from the facial expression category, the first emoticon corresponding to the result of the image recognition.
4. The method according to claim 2, characterized in that
before the image recognition is performed on the facial images, the method further comprises:
obtaining the time at which the facial image was recorded; and
marking, according to the second preset rule, the first emoticon at the corresponding content position of the text communication information is:
marking, according to the time at which the facial image was recorded, the first emoticon at the content position of the text communication information corresponding to that time.
5. The method according to claim 2, characterized in that marking, according to the second preset rule, the first emoticon at the corresponding content position of the text communication information comprises:
marking, according to the sentence structure of the content of the text communication information, the first emoticon at the end of a sentence of that content.
6. The method according to any one of claims 2 to 5, characterized in that, before the voice information corresponding to the emoticon is obtained according to the emoticon in the text communication information, the method comprises:
if a second emoticon directly input by the user is marked at the content position where the first emoticon is marked in the text communication information, replacing the first emoticon with the second emoticon.
7. A text communication information processing device, characterized in that it comprises:
A first acquisition module, configured to obtain the text communication information edited by a user;
The first acquisition module being further configured to: if the text communication information includes an emoticon, obtain, according to the emoticon in the text communication information, voice information corresponding to the emoticon, wherein the voice information comprises tone, intonation, timbre and volume;
A determination module, configured to determine a corresponding voice according to the content of the text communication information and the voice information;
A conversion module, configured to convert the content of the text communication information into the voice accordingly.
8. The device according to claim 7, characterized in that the device further comprises:
A recording module, configured to record a face image of the user at preset time intervals while the user is editing the text communication information;
An image processing module, configured to perform image recognition on the face image and to obtain, according to a first preset rule, a first emoticon corresponding to a result of the image recognition;
A marking module, configured to mark, according to a second preset rule, the first emoticon at a corresponding content position of the text communication information.
9. The device according to claim 8, characterized in that
The image processing module is specifically configured to perform image recognition on the face image, obtain, according to the result of the image recognition, a facial expression category for the result of the image recognition, and determine, from the facial expression category, the first emoticon corresponding to the result of the image recognition.
10. The device according to claim 8, characterized in that the device further comprises:
A second acquisition module, configured to obtain the moment at which the face image is recorded;
The marking module being specifically configured to mark, according to the moment at which the face image is recorded, the first emoticon at the content of the text communication information that corresponds to that moment.
11. The device according to claim 8, characterized in that
The marking module is specifically configured to mark, according to the sentence partition structure of the content of the text communication information, the first emoticon at the end of a sentence of the content of the text communication information.
12. The device according to any one of claims 8 to 11, characterized in that the device further comprises:
A replacement module, configured to: if a second emoticon directly input by the user is marked at the content position where the first emoticon is marked in the text communication information, replace the first emoticon with the second emoticon.
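Purely to show how the modules named in the device claims could fit together, a hypothetical composition in Python; the module objects and their interfaces are assumptions rather than the patented design.

```python
class TextCommunicationDevice:
    """Illustrative composition of the modules named in claims 7 to 12;
    the module objects and their interfaces are assumptions, not the
    patented design."""

    def __init__(self, acquisition, recorder, image_processor,
                 marker, replacer, determiner, converter):
        self.acquisition = acquisition          # first acquisition module
        self.recorder = recorder                # recording module
        self.image_processor = image_processor  # image processing module
        self.marker = marker                    # marking module
        self.replacer = replacer                # replacement module
        self.determiner = determiner            # determination module
        self.converter = converter              # conversion module

    def process(self):
        # Assumed interfaces: each module exposes one obvious method.
        message = self.acquisition.get_message()
        voice_info = self.acquisition.get_voice_info(message)
        voice = self.determiner.determine(message, voice_info)
        return self.converter.convert(message, voice)
```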
CN201310078302.6A 2013-03-12 2013-03-12 Text communication information processing method and related equipment Pending CN104053131A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310078302.6A CN104053131A (en) 2013-03-12 2013-03-12 Text communication information processing method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310078302.6A CN104053131A (en) 2013-03-12 2013-03-12 Text communication information processing method and related equipment

Publications (1)

Publication Number Publication Date
CN104053131A true CN104053131A (en) 2014-09-17

Family

ID=51505405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310078302.6A Pending CN104053131A (en) 2013-03-12 2013-03-12 Text communication information processing method and related equipment

Country Status (1)

Country Link
CN (1) CN104053131A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106024014A (en) * 2016-05-24 2016-10-12 努比亚技术有限公司 Voice conversion method and device and mobile terminal
CN106503744A (en) * 2016-10-26 2017-03-15 长沙军鸽软件有限公司 Input expression in chat process carries out the method and device of automatic error-correcting
CN106708635A (en) * 2016-12-20 2017-05-24 福建中金在线信息科技有限公司 Mixed display method and system for emoticons and texts
CN106951105A (en) * 2017-03-03 2017-07-14 深圳市联谛信息无障碍有限责任公司 A kind of method that use Barrier-free Service reads emoticon
CN108701137A (en) * 2016-04-20 2018-10-23 谷歌有限责任公司 Icon suggestion in keyboard
CN110189742A (en) * 2019-05-30 2019-08-30 芋头科技(杭州)有限公司 Determine emotion audio, affect display, the method for text-to-speech and relevant apparatus
CN110413841A (en) * 2019-06-13 2019-11-05 深圳追一科技有限公司 Polymorphic exchange method, device, system, electronic equipment and storage medium
CN111078340A (en) * 2019-12-02 2020-04-28 联想(北京)有限公司 Information processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002080107A1 (en) * 2001-03-29 2002-10-10 Koninklijke Philips Electronics N.V. Text to visual speech system and method incorporating facial emotions
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method
CN1655231A (en) * 2004-02-10 2005-08-17 乐金电子(中国)研究开发中心有限公司 Expression figure explanation treatment method for text and voice transfer system
CN102037692A (en) * 2006-12-05 2011-04-27 北电网络有限公司 Method and system for communicating between devices
CN102890776A (en) * 2011-07-21 2013-01-23 爱国者电子科技(天津)有限公司 Method for searching emoticons through facial expression

Similar Documents

Publication Publication Date Title
CN104053131A (en) Text communication information processing method and related equipment
CN102272789B (en) Enhanced voicemail usage through automatic voicemail preview
CN109669924A (en) Sharing method, device, electronic equipment and the storage medium of online document
CN102184254A (en) Remark of mobile contact person
CN103443852A (en) Audio-interactive message exchange
CN101421714A (en) User experience for multimedia mobile note taking
US20080189375A1 (en) Method, apparatus and computer program product for constructing topic structure in instance message meeting
CN103577042A (en) Method and device for providing a message function
JP2011504304A (en) Speech to text transcription for personal communication devices
CN103916513A (en) Method and device for recording communication message at communication terminal
CN101421713A (en) Synchronizing multimedia mobile notes
CN105677512B (en) Data processing method and device and electronic equipment
CN103702297A (en) Short message enhancement method, device and system
CN104158945A (en) Conversation information obtaining method, device and system
US10917761B2 (en) Method and apparatus for automatically identifying and annotating auditory signals from one or more parties
CN101605307A (en) Test short message service (SMS) voice play system and method
CN104125140B (en) A kind of message method and device
CN104575579A (en) Voice management method and voice management system
CN106708794A (en) Attendance statement processing method and device
CN102025801A (en) Method and device for converting text information
CN103399737B (en) Multi-media processing method based on speech data and device
CN103092638A (en) Method and device for propagating mobile application software
CN103297582A (en) Method for processing voice communication content and electronic devices
CN105681523A (en) Method and apparatus for sending birthday blessing short message automatically
CN104182406A (en) Electronic business card creating method, electronic business card retrieval method and relevant system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20140917)