KR100868638B1 - System and method for balloon providing during video communication - Google Patents
- Publication number
- KR100868638B1 (application KR1020070078898A)
- Authority
- KR
- South Korea
- Prior art keywords
- speech
- terminal
- voice
- mouth
- caller
- Prior art date
Abstract
Description
The present invention relates to a system and method for providing speech balloons during a video call, and more particularly, to a system and method that display a caller's voice in a speech balloon during a video call.
With the rapid development of mobile communication technology and infrastructure, mobile communication terminals now provide various supplementary services such as Internet search, wireless data communication, electronic organizers, and video calls in addition to ordinary voice calls.
Among them, the video call function lets the parties talk while transmitting and receiving images captured by their cameras; conventionally, it is implemented to simply transmit the caller's voice during the video call.
Accordingly, in conventional systems the emotions, moods, or feelings contained in a caller's words cannot be conveyed at the same time as the words themselves, and speech may be transmitted unclearly because of the RF environment or ambient noise. As a result, the caller's feelings are not delivered to the other party, and it is difficult to convey the meaning of the words accurately.
In addition, when two or more callers are photographed with the single camera provided in a terminal and the image is delivered to the other party, the other party may not be able to tell exactly who is speaking when both speak at the same time.
The present invention has been made to solve the above-described problems. An object of the present invention is to provide a video call speech balloon providing system and method that recognize a caller's voice during a video call, convert it into words and sentences, and transmit the converted words and sentences inside a speech balloon to the counterpart terminal, so that the other party can check the caller's voice through the displayed words and sentences.
According to an aspect of the present invention, a video call speech balloon providing system preferably includes: a speech balloon DB that stores the speech balloon shape designated for each terminal, to be displayed together with the voice-recognized words and sentences during a video call; a user voice and face image storage unit that receives and stores, for each terminal, a voice used to separate the terminal user's voice in the two-or-more-person mode and a face image used to separate the terminal user's face image; a video call session processing unit that, upon receiving a session setup request from a terminal conducting a video call with a counterpart terminal, establishes a session with the terminal according to the request and, based on the counterpart terminal's identification information provided by the terminal, requests session setup from the counterpart terminal to establish a session with it; a speech recognition processing unit that recognizes the voice received from the terminal through an audio logical channel and converts it into words and sentences; a face recognition and mouth shape analysis unit that extracts the face image from the video received through a video logical channel, determines the position of the mouth by analyzing the pattern of the extracted face image, and infers the words and sentences spoken by the caller through mouth shape analysis; and a speech balloon display processing unit that displays the speech balloon shape retrieved from the speech balloon DB and the words and sentences recognized by the speech recognition processing unit at the mouth position identified by the face recognition and mouth shape analysis unit, and transmits the result to the counterpart terminal.
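The component architecture enumerated above can be pictured in code. The following is a minimal illustrative sketch, not the patent's implementation: all class, method, and field names (`BalloonDB`, `SpeechRecognizer`, `FaceMouthAnalyzer`, `compose_balloon`) are invented, and the recognizer and face analyzer are stubs standing in for real ASR and face-detection engines.

```python
from dataclasses import dataclass, field

@dataclass
class BalloonDB:
    """Stores the balloon shape designated for each terminal."""
    shapes: dict = field(default_factory=dict)  # terminal_id -> shape name

    def lookup(self, terminal_id: str) -> str:
        return self.shapes.get(terminal_id, "default")

@dataclass
class SpeechRecognizer:
    """Converts a caller's voice into words/sentences (stubbed)."""
    def recognize(self, audio: bytes) -> str:
        return audio.decode("utf-8")  # stand-in for a real ASR engine

@dataclass
class FaceMouthAnalyzer:
    """Finds the mouth position in the received video frame (stubbed)."""
    def mouth_position(self, frame) -> tuple:
        return frame["mouth"]  # a real system would run face detection here

def compose_balloon(db, asr, mouth, terminal_id, audio, frame):
    """Plan the overlay: the terminal's balloon, the text, and where to draw it."""
    return {
        "shape": db.lookup(terminal_id),
        "text": asr.recognize(audio),
        "at": mouth.mouth_position(frame),
    }

out = compose_balloon(BalloonDB({"caller-1": "cloud"}), SpeechRecognizer(),
                      FaceMouthAnalyzer(), "caller-1",
                      b"hello", {"mouth": (120, 200)})
print(out)  # {'shape': 'cloud', 'text': 'hello', 'at': (120, 200)}
```

The composed result would then be rendered onto the outgoing video frame before transmission to the counterpart terminal.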
Further, the speech balloon DB preferably comprises: a manual speech balloon DB that stores, in one-to-one correspondence, each speech balloon shape used in the manual speech balloon mode and the key button value assigned to it; and an automatic speech balloon DB that stores each speech balloon shape used in the automatic speech balloon mode matched with the words and sentences designated for that shape.
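The two databases can be pictured as simple lookup tables: the manual DB maps a key button value to a balloon shape one-to-one, while the automatic DB maps designated words to shapes. The key values, keywords, and shape names below are invented for illustration.

```python
# Hypothetical contents; the patent does not specify concrete mappings.
MANUAL_DB = {"1": "round", "2": "cloud", "3": "spiky"}          # key button -> shape
AUTO_DB = {"happy": "heart", "angry": "spiky", "?": "thought"}  # keyword -> shape

def manual_lookup(key_button: str) -> str:
    """Manual mode: the shape is designated directly by the pressed button."""
    return MANUAL_DB.get(key_button, "round")

def auto_lookup(sentence: str) -> str:
    """Automatic mode: pick the shape matched to a keyword found in the sentence."""
    for keyword, shape in AUTO_DB.items():
        if keyword in sentence:
            return shape
    return "round"  # default when no designated word matches

print(manual_lookup("2"))                  # cloud
print(auto_lookup("I am so happy today"))  # heart
```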
The speech recognition processing unit preferably includes: a one-person mode speech recognition processing unit that recognizes the voice received through the audio logical channel in the one-person mode and converts it into words and sentences; and a two-or-more-person mode speech recognition processing unit that, in the two-or-more-person mode, separates the two or more voices received through the audio logical channel by waveform analysis, converts each separated voice into words and sentences through speech recognition, and compares each separated voice with the terminal user voices stored in the user voice and face image storage unit to identify the terminal user's voice among the separated voices.
The speech balloon display processing unit preferably includes: a one-person mode speech balloon display processing unit that displays the speech balloon shape retrieved from the speech balloon DB and the words and sentences recognized by the one-person mode speech recognition processing unit at the mouth position identified by the face recognition and mouth shape analysis unit; and a two-or-more-person mode speech balloon display processing unit that, when only one person speaks at a time in the two-or-more-person mode, identifies the speaking caller from the mouth movement analysis of the face recognition and mouth shape analysis unit and then displays the speech balloon shape retrieved from the speech balloon DB and the words and sentences recognized by the two-or-more-person mode speech recognition processing unit at that caller's mouth position, and, when two or more people speak at the same time, identifies each caller's voice based on the terminal user voice information recognized by the two-or-more-person mode speech recognition processing unit and the terminal user face image identified by the face recognition and mouth shape analysis unit, and then displays the speech balloon shape retrieved from the speech balloon DB and the words and sentences recognized by the two-or-more-person mode speech recognition processing unit at the mouth position of each caller whose voice was identified.
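The terminal-user identification step in the two-or-more-person mode, comparing each separated voice against the stored terminal user voice, might look like the following sketch. Here a "voice" is reduced to a plain feature vector and similarity is cosine similarity; real systems would use speaker embeddings or similar models, so the vectors and the threshold-free best-match rule are assumptions for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_user_voice(separated, enrolled):
    """Return the index of the separated voice closest to the enrolled user voice."""
    scores = [cosine(v, enrolled) for v in separated]
    return scores.index(max(scores))

enrolled = [0.9, 0.1, 0.3]                         # stored terminal-user voice
separated = [[0.1, 0.8, 0.2], [0.88, 0.12, 0.31]]  # two voices after waveform split
print(identify_user_voice(separated, enrolled))    # 1
```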
Meanwhile, a video call speech balloon providing method according to an embodiment of the present invention preferably includes: a first process of, when a terminal engaged in a video call over a session with a counterpart terminal terminates that session to receive the speech balloon function and requests session setup, establishing sessions with the terminal and the counterpart terminal respectively; a second process of, when the speech balloon mode is set to the one-person manual speech balloon mode, retrieving from the manual speech balloon DB the speech balloon shape designated by the speech balloon shape selection key button value received from the terminal through the control logical channel, displaying the retrieved speech balloon shape at the mouth of the caller identified by the face recognition and mouth shape analysis unit, displaying the words and sentences recognized by the one-person mode speech recognition processing unit inside the speech balloon shape, and transmitting the result to the counterpart terminal; and a third process of, when the speech balloon mode is set to the one-person automatic speech balloon mode, converting the voice received through the audio logical channel into words and sentences by speech recognition in the one-person mode speech recognition processing unit, retrieving from the automatic speech balloon DB the speech balloon shape matched with the recognized words and sentences, displaying the retrieved speech balloon shape at the mouth of the caller identified by the face recognition and mouth shape analysis unit, displaying the recognized words and sentences inside the speech balloon shape, and transmitting the result to the counterpart terminal.
In addition, the first process may include: establishing, in the video call session processing unit that received the session setup request from the terminal, a session with the terminal according to the request; and requesting session setup from the counterpart terminal using the identification information of the counterpart terminal provided by the terminal along with the session setup request.
The second process may include: retrieving, by the manual speech balloon DB, the speech balloon shape designated by the speech balloon shape selection key button value received through the control logical channel in the mode speech
The third process may include: converting the voice received through the audio logical channel into words and sentences through speech recognition in the one-person mode speech recognition processing unit; determining the position of the mouth by analyzing, in the face recognition and mouth shape analysis unit, the pattern of the face image included in the video received through the video logical channel; retrieving, in the one-person mode speech balloon display processing unit, the speech balloon shape matched with the recognized words and sentences from the automatic speech balloon DB; displaying, in the one-person mode speech balloon display processing unit, the retrieved speech balloon shape at the mouth of the caller identified by the face recognition and mouth shape analysis unit; and displaying the recognized words and sentences inside the speech balloon shape and transmitting them to the counterpart terminal through the video logical channel.
Meanwhile, a video call speech balloon providing method according to another embodiment of the present invention preferably includes: a first process of, when a terminal engaged in a video call over a session with a counterpart terminal terminates that session to receive the speech balloon function and requests session setup, establishing sessions with the terminal and the counterpart terminal respectively; a second process of, when the speech balloon mode is set to the two-or-more-person manual speech balloon mode, retrieving from the manual speech balloon DB the speech balloon shape designated by the speech balloon shape selection key button value received from the terminal through the control logical channel, identifying the voice of each caller, displaying the speech balloon shape and the recognized words and sentences at the mouth of the caller whose voice was identified, and transmitting the result to the counterpart terminal; and a third process of, when the speech balloon mode is set to the two-or-more-person automatic speech balloon mode, converting the voices received through the audio logical channel into words and sentences by speech recognition in the two-or-more-person mode speech recognition processing unit, retrieving from the automatic speech balloon DB the speech balloon shape matched with the recognized words and sentences, identifying the voice of each caller, displaying the retrieved speech balloon shape and the recognized words and sentences at the mouth of the caller whose voice was identified, and transmitting the result to the counterpart terminal.
Further, the second process may include: retrieving, by the two-or-more-person mode speech balloon display processing unit, the speech balloon shape designated by the speech balloon shape selection key button value received through the control logical channel from the manual speech balloon DB; when two or more voices are received simultaneously through the audio logical channel and the system is set to identify each caller's voice through mouth movement analysis, separating the voices received through the audio logical channel by waveform in the two-or-more-person mode speech recognition processing unit and converting each separated voice into words and sentences through speech recognition; extracting, by the face recognition and mouth shape analysis unit, the face images included in the video received through the video logical channel, determining the position of each caller's mouth by analyzing the pattern of each face image, and inferring the words and sentences spoken by each caller by analyzing mouth movements; comparing the inferred words and sentences with the speech-recognized words and sentences to identify the voice of each caller; and displaying the retrieved speech balloon shape and the recognized words and sentences at each caller's mouth according to the identified voices and transmitting the result to the counterpart terminal through the video logical channel.
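The attribution step described above, matching each speech-recognized sentence to the caller whose lip-read (inferred) words agree with it most, can be sketched as follows. This is a hypothetical illustration: `attribute_voices`, its word-overlap scoring, and the sample data are all invented, and a real system would compare phoneme or viseme sequences rather than word sets.

```python
def overlap(a: str, b: str) -> int:
    """Number of words two sentences share (a crude agreement measure)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def attribute_voices(recognized: list, lip_read: dict) -> dict:
    """Map each recognized sentence to the caller with the best lip-read match."""
    result = {}
    for sentence in recognized:
        best = max(lip_read, key=lambda caller: overlap(sentence, lip_read[caller]))
        result[best] = sentence
    return result

asr = ["see you at noon", "the meeting moved"]          # separated, recognized voices
lips = {"A": "see you noon", "B": "meeting moved today"}  # inferred from mouth shapes
print(attribute_voices(asr, lips))  # {'A': 'see you at noon', 'B': 'the meeting moved'}
```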
In addition, when two or more voices are received simultaneously through the audio logical channel and the system is set to identify each caller's voice through voice waveform analysis, the method may further include: separating the voices received through the audio logical channel by waveform in the two-or-more-person mode speech recognition processing unit, comparing the separated voices with the terminal user voices stored in the user voice and face image storage unit to identify the terminal user's voice among the two or more voices, and converting the separated voices into words and sentences through speech recognition; extracting, by the face recognition and mouth shape analysis unit, the face images included in the video received through the video logical channel, comparing them with the terminal user face images stored in the user voice and face image storage unit to identify the terminal user's face image among the two or more face images, and determining the mouth position of each caller through pattern analysis of each face image; identifying the voice of each caller based on the terminal user voice and face image identification results; and displaying the retrieved speech balloon shape and the recognized words and sentences at each caller's mouth according to the identified voices and transmitting the result to the counterpart terminal through the video logical channel.
When one voice is received through the audio logical channel, the method may further include: converting the received voice into words and sentences through speech recognition; extracting the face images included in the video received through the video logical channel in the face recognition and mouth shape analysis unit and determining the mouth position of each caller through pattern analysis of each face image; and analyzing the mouth movements to identify the caller who is currently speaking, displaying the retrieved speech balloon shape and the recognized words and sentences at the speaking caller's mouth, and transmitting the result to the counterpart terminal through the video logical channel.
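The single-voice case above, finding the currently speaking caller from mouth movement, can be sketched with a toy movement measure. The per-frame mouth-opening values and the variance criterion are assumptions for illustration; real mouth-movement analysis would track detected lip landmarks across frames.

```python
def movement(openings):
    """Variance of a per-frame mouth-opening series (illustrative measure)."""
    mean = sum(openings) / len(openings)
    return sum((o - mean) ** 2 for o in openings) / len(openings)

def active_speaker(mouth_tracks: dict) -> str:
    """Return the caller whose mouth-opening series varies the most."""
    return max(mouth_tracks, key=lambda c: movement(mouth_tracks[c]))

tracks = {
    "A": [2, 9, 1, 8, 3],   # talking: opening fluctuates frame to frame
    "B": [4, 4, 5, 4, 4],   # silent: nearly constant
}
print(active_speaker(tracks))  # A
```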
In the third process, when two or more voices are received simultaneously through the audio logical channel and the system is set to identify each caller's voice through mouth movement analysis, the process may include: separating the voices received through the audio logical channel by waveform in the two-or-more-person mode speech recognition processing unit, converting each separated voice into words and sentences through speech recognition, and retrieving from the automatic speech balloon DB the speech balloon shape matched with the recognized words and sentences; extracting, by the face recognition and mouth shape analysis unit, the face images included in the video received through the video logical channel, determining the position of each caller's mouth by analyzing the pattern of each face image, and inferring the words and sentences spoken by each caller by analyzing mouth movements; comparing the inferred words and sentences with the speech-recognized words and sentences to identify the voice of each caller; and displaying the retrieved speech balloon shape and the recognized words and sentences at each caller's mouth according to the identified voices and transmitting the result to the counterpart terminal through the video logical channel.
Here, when two or more voices are received simultaneously through the audio logical channel and the system is set to identify each caller's voice through voice waveform analysis, the process may include: separating the voices received through the audio logical channel by waveform in the two-or-more-person mode speech recognition processing unit and comparing the separated voices with the terminal user voices stored in the user voice and face image storage unit to identify the terminal user's voice among the two or more voices; converting each separated voice into words and sentences through speech recognition and retrieving from the automatic speech balloon DB the speech balloon shape matched with the recognized words and sentences; extracting, by the face recognition and mouth shape analysis unit, the face images included in the video received through the video logical channel, comparing them with the terminal user face images stored in the user voice and face image storage unit to identify the terminal user's face image among the two or more face images, and determining the mouth position of each caller through pattern analysis of each face image; identifying the voice of each caller based on the terminal user voice and face image identification results; and displaying the retrieved speech balloon shape and the recognized words and sentences at each caller's mouth according to the identified voices and transmitting the result to the counterpart terminal through the video logical channel.
When one voice is received through the audio logical channel, the process may further include: converting the received voice into words and sentences through speech recognition and retrieving from the automatic speech balloon DB the speech balloon shape matched with the recognized words and sentences; extracting the face images included in the video received through the video logical channel in the face recognition and mouth shape analysis unit and determining the mouth position of each caller through pattern analysis of each face image; and analyzing the mouth movements to identify the caller who is currently speaking, displaying the retrieved speech balloon shape and the recognized words and sentences at the speaking caller's mouth, and transmitting the result to the counterpart terminal through the video logical channel.
According to the video call speech balloon providing system and method of the present invention, a caller's voice is recognized during a video call and converted into words and sentences, and the converted words and sentences are displayed on the image inside a speech balloon whose shape is either designated by the user or matched to the recognized words and sentences. The caller's words, together with the emotions and feelings they carry, can thus be delivered to the other party. In addition, when two or more people are photographed with one camera and shown to the other party, each voice is separated and recognized, and the recognized words and sentences are displayed on the image in per-caller speech balloon shapes, so the other party can tell exactly who is speaking.
Hereinafter, a video call speech balloon providing system and method according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 schematically shows the configuration of a communication network including a video call speech balloon providing system according to an embodiment of the present invention. The video call speech balloon providing system according to the present invention can be applied to any network capable of video calls, such as a WCDMA (Wideband Code Division Multiple Access) network, an LTE (Long Term Evolution) network, an HSDPA (High Speed Downlink Packet Access) network, a CDMA2000 1x EV-DO network, or an IP network. The system can also be applied to a wired network and used for video calls over it.
In FIG. 1, the
Here, when the terminal user wants more speech balloon shapes than individual key buttons provide, a shape may be designated by a combination of two or more buttons.
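The button-combination scheme mentioned here can be illustrated with a lookup keyed by key sequences; single buttons cover the first few shapes and two-button sequences extend the range. The specific key-to-shape assignments below are invented for the example.

```python
# Hypothetical assignments; the patent does not fix a concrete key map.
COMBO_DB = {
    ("1",): "round",
    ("2",): "cloud",
    ("1", "1"): "double-outline",
    ("1", "2"): "jagged",
}

def select_shape(keys: list) -> str:
    """Resolve a pressed key sequence to a balloon shape, with a default."""
    return COMBO_DB.get(tuple(keys), "round")

print(select_shape(["2"]))       # cloud
print(select_shape(["1", "2"]))  # jagged
```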
In addition, the
In addition, the
Meanwhile, after the
For example, when the speech bubble function is selected by the calling terminal user, the
As described above, the calling
On the other hand, when the video call speech
As described above, the video call speech
In addition, when the speech bubble mode is set to the automatic speech bubble mode, the video call speech
In addition, the video call speech
In addition, when the video call speech
As described above, the video call speech
FIG. 3 is a diagram schematically showing the internal configuration of the video call speech balloon providing system, which includes a
In such a configuration, the
The above-described manual
The
On the other hand, the user voice and face
When the video call
The video
Meanwhile, the
As described above, the two or more modes speech
The face recognition and
The speech
When the speech bubble mode is set to the manual speech bubble mode, the one-person mode speech bubble
In addition, when only one caller speaks at a time in the two-or-more-person mode, the two-or-more-person mode speech balloon display processing unit 365 recognizes and analyzes the mouth shape in the image received through the video logical channel. After identifying the speaking caller based on the result value received from the
The above-described two-or-more-person mode speech balloon display processing unit 365 displays on the image the recognized words and sentences together with the speech balloon shape retrieved by the manual
In addition, when two or more callers speak simultaneously in the two-or-more-person mode, the two-or-more-person mode speech balloon display processing unit 365 separates the voices received through the audio logical channel by waveform, and then, for each separated voice waveform, uses the result value received from the two-or-more-person mode speech
In addition, when two or more callers speak simultaneously in the two-or-more-person mode, the two-or-more-person mode speech balloon display processing unit 365 analyzes the mouth images in the video received through the video logical channel to infer the words spoken by each caller, comparing them with the words and sentences received from the face recognition and
FIGS. 4 and 5 are process diagrams illustrating a video call speech balloon providing method according to an embodiment of the present invention: FIG. 4 illustrates the operation in the one-person mode, and FIG. 5 the operation in the two-or-more-person mode.
First, when the calling
As described above, when the session is reset between the calling
In addition, the one-person mode
Thereafter, the one-person mode speech balloon
On the other hand, if the speech balloon mode is set to the automatic speech balloon mode of 1, the one-person mode speech
Thereafter, the one-person mode speech balloon
In addition, the speech balloon shape searched through the process S34 is displayed on the mouth of the caller identified by the face recognition and mouth analysis unit 350 (S36), and the word recognized by the one-person mode speech
The counterpart terminal, on receiving through step S28 or S40 the image in which the recognized words and sentences are displayed together with the speech balloon shape, can present the counterpart caller with a screen such as that shown in FIG. 6.
Meanwhile, when the calling
As described above, when the session is reset between the calling
When the voice of one person is received as a result of analyzing the voice received through the audio logical channel by the two or more mode
Then, the mouth movement of each caller is analyzed to determine who is currently speaking (S68), and the two-or-more-person mode speech balloon display processing unit 365 displays the speech balloon shape retrieved in step S58 at the mouth of the caller identified as the speaker (S70); the two-or-more-person mode speech
Meanwhile, when two or more voices are received through the audio logical channel at the same time, it is determined whether the system is set to identify each caller's voice by analyzing mouth movements (S60, S76). When it is set to identify each caller's voice by analyzing mouth movements, the two-or-more-person mode voice
Meanwhile, the face recognition and
Thereafter, the face recognition and
On the other hand, when the determination result of the above-described process S76 is set to analyze the voice waveform to determine who is currently speaking, the two or more mode voice
Meanwhile, the face recognition and
Thereafter, according to the voice of each caller identified from the terminal user voice identification in step S98 and the terminal user face image identification in step S104, the two-or-more-person mode speech balloon display processing unit 365 displays the speech balloon retrieved in step S58 at each caller's mouth (S108), displays the recognized words and sentences of the corresponding voice inside the speech balloon shown at that caller's mouth (S110), and transmits the image, with the recognized words and sentences displayed together with the speech balloon shape, to the counterpart terminal through the video logical channel (S112).
On the other hand, when the speech bubble mode is set to the automatic speech bubble mode of 2, when the voice received by the two or more mode voice
Meanwhile, the face recognition and
Then, the mouth movement of each caller is analyzed to determine who is currently speaking (S124), and the two-or-more-person mode speech balloon display processing unit 365 displays the speech balloon shape retrieved in step S118 at the mouth of the caller identified as the speaker (S126); the two-or-more-person mode speech
Meanwhile, when two or more voices are received through the audio logical channel at the same time, it is determined whether the system is set to find out who is currently speaking by analyzing mouth movements (S114, S132). When it is set to determine the currently speaking caller by analyzing mouth movements, the two-or-more-person mode speech
Meanwhile, the face recognition and
Thereafter, the face recognition and
On the other hand, when the determination result of the process S132 is set to analyze the voice waveform to determine who is currently speaking, the two or more mode speech
Meanwhile, the face recognition and
Thereafter, according to the voice of each caller identified from the terminal user voice identification in step S156 and the terminal user face image identification in step S164, the two-or-more-person mode speech balloon display processing unit 365 displays the speech balloon shape retrieved in step S160 at each caller's mouth (S168), and the words and sentences respectively recognized by the two-or-more-person mode speech
The counterpart terminal that receives the recognized words and sentences together with the speech balloon shape through step S74, S94, S112, S130, S152, or S172 can present the counterpart caller with a screen such as that shown in FIG.
The video call speech balloon providing system and method of the present invention are not limited to the above-described embodiments, and various modifications can be made within the scope of the technical idea of the present invention.
The video call speech balloon providing system and method of the present invention recognize a caller's voice during a video call, convert it into words and sentences, and display the converted words and sentences on the image inside a speech balloon whose shape is either designated by the terminal user or matched to the recognized words and sentences, thereby achieving accurate communication.
FIG. 1 is a view schematically showing the configuration of a mobile communication network including a video call speech balloon providing system according to an embodiment of the present invention.
FIG. 2 is a view showing an example of the speech balloon shapes assigned to key button values according to the present invention.
FIG. 3 is a schematic view showing the internal configuration of the video call speech balloon providing system applied to the present invention.
FIGS. 4 and 5 are process diagrams for explaining a video call speech balloon providing method according to an embodiment of the present invention.
FIG. 6 is an exemplary view showing an operation screen in the one-person mode.
FIG. 7 is an exemplary view showing an operation screen in the two-or-more-person mode.
*** Explanation of symbols for the main parts of the drawing ***
100. calling terminal, 200. called terminal,
300. Video call speech balloon providing system, 310. Speech balloon DB,
313. Manual Speech Bubble DB, 315. Automatic Speech Bubble DB,
320. User voice and face image storage unit, 330. Video call session processing unit,
340. Speech recognition processing unit, 343. One-person mode speech recognition processing unit,
345. Two-or-more-person mode speech recognition processing unit, 350. Face recognition and mouth shape analysis unit,
360. Speech balloon display processing unit, 363. One-person mode speech balloon display processing unit,
365. Two-or-more-person mode speech balloon display processing unit,
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020070078898A KR100868638B1 (en) | 2007-08-07 | 2007-08-07 | System and method for balloon providing during video communication |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020070078898A KR100868638B1 (en) | 2007-08-07 | 2007-08-07 | System and method for balloon providing during video communication |
Publications (1)
Publication Number | Publication Date |
---|---|
KR100868638B1 true KR100868638B1 (en) | 2008-11-12 |
Family
ID=40284199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020070078898A KR100868638B1 (en) | 2007-08-07 | 2007-08-07 | System and method for balloon providing during video communication |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR100868638B1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20050055298A (en) * | 2003-12-08 | 2005-06-13 | 에스케이텔레텍주식회사 | Method for playing caption in mobile phone |
KR20050067022A (en) * | 2003-12-27 | 2005-06-30 | 삼성전자주식회사 | Method for processing message using avatar in wireless phone |
US20050216568A1 (en) | 2004-03-26 | 2005-09-29 | Microsoft Corporation | Bubble messaging |
KR20060107002A (en) * | 2005-04-06 | 2006-10-13 | 주식회사 더블유알지 | Method for displaying avatar in wireless terminal |
2007-08-07 — KR KR1020070078898A, patent KR100868638B1 (en), not active (IP right cessation)
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013027893A1 (en) * | 2011-08-22 | 2013-02-28 | Kang Jun-Kyu | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same |
KR20130137283A (en) * | 2012-06-07 | 2013-12-17 | 엘지전자 주식회사 | Mobile terminal and controlling method thereof, and recording medium thereof |
KR101978205B1 (en) | 2012-06-07 | 2019-05-14 | 엘지전자 주식회사 | Mobile terminal and controlling method thereof, and recording medium thereof |
WO2018038277A1 (en) * | 2016-08-22 | 2018-03-01 | 스노우 주식회사 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
US11025571B2 (en) | 2016-08-22 | 2021-06-01 | Snow Corporation | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
WO2018066731A1 (en) * | 2016-10-07 | 2018-04-12 | 삼성전자 주식회사 | Terminal device and method for performing call function |
US10652397B2 (en) | 2016-10-07 | 2020-05-12 | Samsung Electronics Co., Ltd. | Terminal device and method for performing call function |
KR20190133361A (en) * | 2018-05-23 | 2019-12-03 | 카페24 주식회사 | An apparatus for data input based on user video, system and method thereof, computer readable storage medium |
KR102114368B1 (en) * | 2018-05-23 | 2020-05-22 | 카페24 주식회사 | An apparatus for data input based on user video, system and method thereof, computer readable storage medium |
WO2020228383A1 (en) * | 2019-05-14 | 2020-11-19 | 北京字节跳动网络技术有限公司 | Mouth shape generation method and apparatus, and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10885318B2 (en) | Performing artificial intelligence sign language translation services in a video relay service environment | |
US11114091B2 (en) | Method and system for processing audio communications over a network | |
KR100868638B1 (en) | System and method for balloon providing during video communication | |
US6990179B2 (en) | Speech recognition method of and system for determining the status of an answered telephone during the course of an outbound telephone call | |
DK1912474T3 (en) | A method of operating a hearing assistance device and a hearing assistance device | |
NO326770B1 (en) | Video conference method and system with dynamic layout based on word detection | |
CN109688276B (en) | Incoming call filtering system and method based on artificial intelligence technology | |
US20150154960A1 (en) | System and associated methodology for selecting meeting users based on speech | |
KR101542130B1 (en) | Finger-language translation providing system for deaf person | |
KR102263154B1 (en) | Smart mirror system and realization method for training facial sensibility expression | |
WO2011065686A2 (en) | Communication interface apparatus and method for multi-user and system | |
KR20200092166A (en) | Server, method and computer program for recognizing emotion | |
JP2007322523A (en) | Voice translation apparatus and its method | |
CN110570847A (en) | Man-machine interaction system and method for multi-person scene | |
CN110188364B (en) | Translation method, device and computer readable storage medium based on intelligent glasses | |
US11700325B1 (en) | Telephone system for the hearing impaired | |
WO2021066399A1 (en) | Realistic artificial intelligence-based voice assistant system using relationship setting | |
KR20160149488A (en) | Apparatus and method for turn-taking management using conversation situation and themes | |
US20210312143A1 (en) | Real-time call translation system and method | |
CN207718803U (en) | Multiple source speech differentiation identifying system | |
CN106791681A (en) | Video monitoring and face identification method, apparatus and system | |
CN112507829B (en) | Multi-person video sign language translation method and system | |
JP2000206983A (en) | Device and method for information processing and providing medium | |
JP2014149571A (en) | Content search device | |
US11848026B2 (en) | Performing artificial intelligence sign language translation services in a video relay service environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| A201 | Request for examination | |
| E902 | Notification of reason for refusal | |
| E701 | Decision to grant or registration of patent right | |
| GRNT | Written decision to grant | |
| FPAY | Annual fee payment | Payment date: 20121002; Year of fee payment: 5 |
| FPAY | Annual fee payment | Payment date: 20131024; Year of fee payment: 6 |
| FPAY | Annual fee payment | Payment date: 20141022; Year of fee payment: 7 |
| LAPS | Lapse due to unpaid annual fee | |