WO2022001706A1 - Method and system providing user interactive sticker based video call - Google Patents

Method and system providing user interactive sticker based video call

Info

Publication number
WO2022001706A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
users
video call
input data
stickers
Prior art date
Application number
PCT/CN2021/101010
Other languages
English (en)
Inventor
Singh SHUBHAM KUMAR
Kaushal Prakash SHARMA
Prince Narula
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2022001706A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42025 Calling or Called party identification service
    • H04M3/42034 Calling party identification service
    • H04M3/42042 Notifying the called party of information on the calling party
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42025 Calling or Called party identification service
    • H04M3/42085 Called party identification service
    • H04M3/42093 Notifying the calling party of information on the called or connected party
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72427 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M2203/10 Aspects of automatic or semi-automatic exchanges related to the purpose or context of the telephonic communication
    • H04M2203/1016 Telecontrol
    • H04M2203/1025 Telecontrol of avatars

Definitions

  • The present invention generally relates to the field of video communication and, more particularly, to methods and systems for providing a user interactive sticker-based video call.
  • Augmented Reality (AR) based video calling is much in demand nowadays, as it provides a far more engaging and efficient way for users to interact. It helps provide a seamless connection between users because it does not require the video image to be compressed and then transmitted over the data line. Even in a low-network zone it can therefore provide video calling using various emojis, as 3D emojis can store depth information about a person's face.
  • One known feature lets a user replace his face with that of an animated avatar in a video call.
  • In this method, there is no way for the other-side user to control the emotions of the user with whom he is interacting. It merely depends on how the user wants the emoji representation of himself to look, and he designs the emoji accordingly.
  • This method does not take into account the mood of the other-side user, who has to conduct the video call with whatever emoji the first user wants to show him.
  • It is an object of the present invention to provide a method and system that provides a user interactive sticker-based video call. It is another object of the invention to provide a method or system for conducting an interactive emoji-based video call using Artificial Intelligence (AI) based control, where a user can provide options to control the emotions of the person with whom he is conducting the emoji-based video call.
  • The present disclosure provides a method and system for providing a user interactive sticker-based video call.
  • One aspect of the present invention relates to a method for providing a user interactive sticker-based video call between a plurality of users.
  • The said method comprises the steps of: initiating, by a first terminal device, a video call from a first user of the plurality of users to a second user of the plurality of users; receiving, by an input unit, at least one input data from the first user of the plurality of users; storing, by a storage unit, the received input data; extracting, by a processing unit, an emotion command from the stored input data, wherein said emotion command comprises the emotion chosen by the said first user of the plurality of users; mapping, by the processing unit, the said emotion command to a plurality of stickers from a pre-stored database of stickers; displaying, by a display unit, the said plurality of stickers to the second user of the plurality of users; receiving, by an input unit, an input sticker selected from said plurality of stickers by the second user of the plurality of users; and establishing, by the first and second terminal devices, a sticker-based video call between the first user of the plurality of users and the second user of the plurality of users.
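Taken together, these steps form a simple pipeline: capture the first user's input, extract an emotion command, map it to candidate stickers, let the second user pick one, and start the call. The Python fragment below is a minimal sketch of that pipeline under stated assumptions; every name in it (EmotionCommand, STICKER_DB, run_call, and the sample sticker entries) is a hypothetical illustration, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class EmotionCommand:
    emotion: str  # the emotion chosen by the first user, e.g. "happy"

# Hypothetical pre-stored sticker database: emotion -> candidate stickers.
STICKER_DB = {
    "happy": ["grin_emoji", "party_avatar", "sun_sticker"],
    "sad": ["tear_emoji", "rain_avatar"],
}

def extract_emotion_command(input_data: str) -> EmotionCommand:
    """Extract an emotion command from the stored input data."""
    return EmotionCommand(emotion=input_data.strip().lower())

def map_to_stickers(command: EmotionCommand) -> list[str]:
    """Map the emotion command to a plurality of stickers."""
    return STICKER_DB.get(command.emotion, [])

def run_call(first_user_input: str, second_user_choice: int) -> str:
    """Extract, map, display candidates, accept the second user's pick, establish."""
    command = extract_emotion_command(first_user_input)
    candidates = map_to_stickers(command)      # shown to the second user
    selected = candidates[second_user_choice]  # second user's selection
    return f"sticker-based call established with '{selected}'"

print(run_call("Happy", 1))  # -> sticker-based call established with 'party_avatar'
```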
  • Another aspect of the present invention relates to a system for providing a user interactive sticker-based video call between a plurality of users. The said system comprises a first terminal device configured to initiate a video call from a first user of the plurality of users to a second user of the plurality of users.
  • The said first terminal device comprises an input unit configured to receive at least one input data from one of the at least two users; a storage unit configured to store the received input data; a processing unit, connected to said input unit, configured to extract an emotion command from the stored input data and map the said emotion command to a plurality of emojis; and a display unit.
  • The system further comprises a second terminal device configured to receive the video call from the first user of the plurality of users to the second user of the plurality of users.
  • The said second terminal device comprises a display unit configured to display the said plurality of emojis to the second user of the plurality of users; an input unit configured to receive an input sticker selected from said plurality of stickers by the second user of the plurality of users; a storage unit configured to store the received input data; and a processing unit.
  • The first and second terminal devices establish a sticker-based video call between the first user of the plurality of users and the second user of the plurality of users.
  • Yet another aspect of the present invention relates to a user equipment. The user equipment comprises a terminal device, which is configured to: initiate a video call from a first user of the plurality of users to a second user of the plurality of users; receive at least one input data from the first user of the plurality of users; store the received input data; extract an emotion command from the stored input data, wherein said emotion command comprises the emotion chosen by the said first user of the plurality of users; map the said emotion command to a plurality of stickers from a pre-stored database of stickers; display the said plurality of stickers to the second user of the plurality of users; receive an input sticker selected from said plurality of stickers by the second user of the plurality of users; and establish a sticker-based video call between the first user of the plurality of users and the second user of the plurality of users.
  • FIG. 1 illustrates a block diagram of a system for providing a user interactive emoji-based video call between a plurality of users, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 2 illustrates an exemplary method flow diagram [300] for providing a user interactive sticker-based video call between a plurality of users, in accordance with exemplary embodiments of the present disclosure.
  • Circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
  • Likewise, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
  • The present invention provides a method and a system for providing a user interactive sticker-based video call between a plurality of users.
  • The invention initiates, by means of a first terminal device, a video call from a first user of the plurality of users to a second user of the plurality of users.
  • An input unit in the first terminal device receives at least one input data from the first user of the plurality of users.
  • The invention provides that a storage unit in the first terminal device stores the received input data.
  • The processing unit present in the first terminal device extracts an emotion command from the stored input data, wherein said emotion command comprises the emotion chosen by the said first user of the plurality of users.
  • The processing unit maps the said emotion command to a plurality of stickers from a pre-stored database of stickers.
  • The invention encompasses that the second user of the plurality of users can select a sticker from the mapped plurality of stickers, and this selection is received and processed by the second terminal device of the second user. Consequently, a sticker-based video call between the first user of the plurality of users and the second user of the plurality of users is established by the first and second terminal devices.
  • A “processing unit” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions.
  • A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc.
  • The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
  • A “display unit” or “display” includes one or more computing devices for displaying camera preview frames, images, or videos generated by user/electronic devices.
  • The said display unit may be additional hardware coupled to the said electronic device or may be integrated within the electronic device.
  • The display unit may include, but is not limited to, a CRT display, an LED display, an ELD display, a PDP display, an LCD display, an OLED display, and the like.
  • A terminal device may be any electrical, electronic, electromechanical, or computing device or equipment.
  • The user device may include, but is not limited to, a mobile phone, a smartphone, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a wearable device, or any other computing device in which a camera can be implemented.
  • The user device contains at least one input means configured to receive an input from the user, a processor, and a display unit configured to display at least the camera preview frame, media, etc. to the user.
  • A sticker may denote a graphic image or illustration, enabled by the feature of Augmented Reality (AR), which is available to be placed on or added to a video call for expressing an emotion, sentiment, thought, or action through, among other things, emoji characters, emoticons, avatars, or animated stickers.
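As a concrete illustration of this definition, a sticker record could carry the graphic asset, the kind of sticker, and the emotion it expresses. This is a sketch under assumptions of my own; the disclosure does not prescribe any particular data layout, and all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Sticker:
    asset_id: str  # reference to the AR graphic or animation asset (hypothetical)
    kind: str      # one of "emoji", "emoticon", "avatar", "animated"
    emotion: str   # the emotion, sentiment, thought, or action it expresses

# Example record for a happy avatar-type sticker.
party = Sticker(asset_id="party_avatar_01", kind="avatar", emotion="happy")
print(party)  # Sticker(asset_id='party_avatar_01', kind='avatar', emotion='happy')
```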
  • The system comprises at least two terminal devices (100, 200).
  • The first terminal device comprises at least one input unit (110), at least one storage unit (120), at least one processing unit (130), and at least one display unit (140).
  • The second terminal device comprises at least one input unit (210), at least one storage unit (220), at least one processing unit (230), and at least one display unit (240).
  • The said system is configured to provide a user interactive sticker-based video call with the help of the interconnection between the said input units (110, 210), said storage units (120, 220), said processing units (130, 230), and said display units (140, 240) present in the first and second terminal devices.
  • The input unit (110) of the terminal device (100) is configured to receive at least one input data from the first user of the plurality of users.
  • The input data may include, for example but not limited to, a hand gesture, a smiley, speech, text, or a drawing.
  • The user, for instance, would choose an input reflecting the desired mood. For example, if the user is in a happy mood, then the user can choose the preferred input according to that mood.
  • The input unit (210) of the terminal device (200) is configured to receive an input sticker selected from the mapped set of stickers by the second user, which will be explained in detail below.
  • The storage unit (120) of the terminal device (100) is configured to store the input data received by the input unit (110) from the user.
  • The storage unit may include any non-transitory computer-readable medium or computer program product known in the art, including, for example, volatile memory, such as Static Random-Access Memory (SRAM) and Dynamic Random-Access Memory (DRAM), and/or non-volatile memory, such as Read-Only Memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
  • The processing unit (130) of the terminal device (100) is configured to process the input data stored in the storage unit (120). First, the processing unit (130) is configured to extract an emotion command from the input data. Further, the processing unit (130) is configured to map the extracted emotion command to a set of stickers present in the pre-stored database of stickers in the terminal device. In a non-limiting embodiment, if the user selects the hand-gesture-based method, the front camera sensor will automatically turn on and recognize the gesture, and the AI engine will then map the gesture to a large database of emojis. In the smiley-based method, the user gets an option to select a smiley of an emotion, which is subsequently mapped to different emojis.
  • In the speech-based method, the user will hold a button to record his emotion, and the recording will be converted to text using a speech-to-text ML conversion method. This text then goes through the AI engine, which subsequently maps it to the emoji database. A similar approach is followed in the text-based method, where the user enters the text in the space provided and the AI engine processes it to match the emoji database. Finally, in the simple drawing-based method, the user can draw an emotion, such as a smiley; the AI engine will recognize that drawing and subsequently map it to the emojis corresponding to that drawing. In another non-limiting embodiment, the AI engine is used to read the data of the various emotions as provided by the user and convert it to a particular instruction, i.e., an emotion command. This converted instruction is then matched against a large database containing the same emotion across various emojis, which consequently provides an option for the other-side user to select any emoji of his choice.
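The passage above describes a multi-modal front end feeding one "AI engine". A hedged sketch of that idea follows: the gesture, speech, and drawing recognizers are stubbed out because the disclosure names no concrete models, and every identifier here is an assumption made for illustration only.

```python
from typing import Callable

def recognize_gesture(frame: bytes) -> str:
    # Placeholder: a real system would run a hand-gesture classifier here.
    return "thumbs_up"

def speech_to_text(audio: bytes) -> str:
    # Placeholder: a real system would run an ML speech-to-text model here.
    return "I am happy"

# Each supported input modality yields raw text for the AI engine to interpret.
MODALITY_HANDLERS: dict[str, Callable[[bytes], str]] = {
    "gesture": recognize_gesture,
    "speech": speech_to_text,
    "text": lambda data: data.decode("utf-8"),
    "drawing": lambda data: "smiley",  # placeholder drawing recognizer
}

# Toy keyword table standing in for the AI engine's emotion classification.
EMOTION_KEYWORDS = {"thumbs_up": "happy", "happy": "happy", "smiley": "happy", "sad": "sad"}

def to_emotion_command(modality: str, data: bytes) -> str:
    """Convert any supported input into a single emotion command string."""
    raw = MODALITY_HANDLERS[modality](data)
    for keyword, emotion in EMOTION_KEYWORDS.items():
        if keyword in raw.lower():
            return emotion
    return "neutral"

print(to_emotion_command("text", b"feeling happy today"))  # -> happy
print(to_emotion_command("gesture", b""))                  # -> happy
```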
  • The display unit (240) of the terminal device (200) is configured to display the mapped set of stickers to the second user. Also, the display unit (140) of the terminal device (100) is configured to display the selection of the second user.
  • The said display units (140, 240) may be additional hardware coupled to the said terminal device or may be integrated within the terminal device.
  • The display units (140, 240) may include, but are not limited to, a CRT display, an LED display, an ELD display, a PDP display, an LCD display, an OLED display, and the like.
  • FIG. 2 illustrates an exemplary method flow diagram depicting a method for providing a user interactive sticker-based video call between a plurality of users, in accordance with exemplary embodiments of the present disclosure.
  • The invention encompasses that the method begins at step (310).
  • The method begins when the first user initiates a video call to the second user by means of the terminal devices.
  • Users may initiate “video calls,” “videoconferencing,” etc., wherein a camera and a microphone in the terminal device capture audio and video of a user, which are transmitted in real time to one or more other recipients, such as other mobile devices, desktop computers, videoconferencing systems, etc.
  • The method at step (320) comprises receiving at least one input data from the first user.
  • The input data may include, for example but not limited to, a hand gesture, a smiley, speech, text, or a drawing.
  • The user, for instance, would choose an input reflecting the desired mood.
  • The method at step (330) comprises storing the received input data from the user.
  • The method then moves to step (340), wherein the processing unit extracts an emotion command from the said input data; further, at step (350), the processing unit maps the extracted emotion command to a set of stickers present in the pre-stored database of stickers in the terminal device.
  • The AI engine is used to read the data of the various emotions as provided by the user and convert it to a particular instruction, i.e., an emotion command. This converted instruction is then matched against a large database containing the same emotion across various emojis, which consequently provides an option for the other-side user to select any emoji of the user's choice.
  • Step (360) comprises displaying the mapped set of stickers to the second user by means of the terminal device.
  • The invention encompasses that, at the same time, the first user would also be able to see the selection of stickers on the display of the terminal device.
  • At step (370), the second user selects a sticker, and the selected sticker is input to the input unit of the terminal device.
  • The method then moves to step (380), which comprises establishing a sticker-based video call between the first user and the second user by means of the terminal devices. Therefore, an interactive communication is established between the users.
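For reference, the FIG. 2 walkthrough above can be restated as a compact ordered list of steps; the descriptions below merely paraphrase the text and add no detail beyond it.

```python
# Steps of the FIG. 2 flow, paraphrased from the description above.
STEPS = [
    (310, "first user initiates a video call to the second user"),
    (320, "receive input data (hand gesture, smiley, speech, text, or drawing)"),
    (330, "store the received input data"),
    (340, "extract an emotion command from the input data"),
    (350, "map the emotion command to stickers in the pre-stored database"),
    (360, "display the mapped stickers to the second user"),
    (370, "second user selects a sticker"),
    (380, "establish the sticker-based video call"),
]

for number, description in STEPS:
    print(f"step {number}: {description}")
```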
  • The units, interfaces, modules, and/or components depicted in the figures and described herein may be present in the form of hardware, software, or a combination thereof. Connections shown between these units/components/modules/interfaces in the exemplary system architecture may interact with each other through various wired links, wireless links, logical links, and/or physical links. Further, the units/components/modules/interfaces may be connected in other possible ways.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Telephonic Communication Services (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present disclosure may relate to methods and systems for providing a user interactive sticker-based video call. The said method comprises the steps of: initiating, by a first terminal device (100), a video call from a first user of the plurality of users to a second user of the plurality of users; receiving, by an input unit (110), at least one input data from the first user of the plurality of users; storing, by a storage unit (120), the received input data; extracting, by a processing unit (130), an emotion command from the stored input data, wherein said emotion command comprises the emotion chosen by the said first user of the plurality of users; mapping, by the processing unit (130), the said emotion command to a plurality of stickers from a pre-stored database of stickers; displaying, by a display unit (140, 240), the said plurality of stickers to a second user of the plurality of users; receiving, by an input unit (210), an input sticker selected from said plurality of stickers by the second user of the plurality of users; and establishing, by the first and second terminal devices (100, 200), a sticker-based video call between the first user of the plurality of users and the second user of the plurality of users.
PCT/CN2021/101010 2020-06-29 2021-06-18 Method and system providing user interactive sticker based video call WO2022001706A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041027485 2020-06-29
IN202041027485 2020-06-29

Publications (1)

Publication Number Publication Date
WO2022001706A1 (fr) 2022-01-06

Family

ID=79317412

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101010 WO2022001706A1 (fr) 2020-06-29 2021-06-18 Method and system providing user interactive sticker based video call

Country Status (1)

Country Link
WO (1) WO2022001706A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060098027A1 (en) * 2004-11-09 2006-05-11 Rice Myra L Method and apparatus for providing call-related personal images responsive to supplied mood data
US20060145943A1 (en) * 2002-11-04 2006-07-06 Mark Tarlton Avatar control using a communication device
US20190188459A1 (en) * 2017-12-15 2019-06-20 Hyperconnect, Inc. Terminal and server for providing video call service
CN110650306A (zh) * 2019-09-03 2020-01-03 平安科技(深圳)有限公司 Method, apparatus, computer device, and storage medium for adding expressions in video chat


Similar Documents

Publication Publication Date Title
US11036469B2 (en) Parsing electronic conversations for presentation in an alternative interface
US20190012527A1 (en) Method and apparatus for inputting emoticon
CN110400251A Video processing method and apparatus, terminal device, and storage medium
US11196962B2 (en) Method and a device for a video call based on a virtual image
CN111587432A System and method for generating animated emoji mashups
CN101727472A Image recognition system and image recognition method
CN112527115B User avatar generation method, related apparatus, and computer program product
CN113365146B Method, apparatus, device, medium, and product for processing video
CN110674706B Social interaction method and apparatus, electronic device, and storage medium
CN114880062A Chat emoticon display method, device, electronic device, and storage medium
US9942389B2 (en) Indicating the current demeanor of a called user to a calling user
WO2022001706A1 (fr) Procédé et système fournissant un appel vidéo utilisant un autocollant interactif d'utilisateur
CN111274489A Information processing method, apparatus, device, and storage medium
US9407864B2 (en) Data processing method and electronic device
CN114567693B Video generation method, apparatus, and electronic device
CN115623133A Online conference method, apparatus, electronic device, and readable storage medium
CN113327311B Virtual-character-based display method, apparatus, device, and storage medium
CN115412634A Message display method and apparatus
US20200219294A1 (en) Sound Actuated Augmented Reality
KR20170057256A Video picker
US20230005202A1 (en) Speech image providing method and computing device for performing the same
KR102509106B1 Speech image providing method and computing device for performing the same
US20240046951A1 (en) Speech image providing method and computing device for performing the same
CN115170705A 3D avatar display method, apparatus, storage medium, and server
CN116863359A Target object recognition method, apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21834238
    Country of ref document: EP
    Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 21834238
    Country of ref document: EP
    Kind code of ref document: A1