CN110278140B - Communication method and device - Google Patents

Communication method and device

Info

Publication number
CN110278140B
CN110278140B (application CN201810208534.1A)
Authority
CN
China
Prior art keywords
end user
avatar
actual
image
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810208534.1A
Other languages
Chinese (zh)
Other versions
CN110278140A (en)
Inventor
王强宇
张振东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201810208534.1A
Publication of CN110278140A
Application granted
Publication of CN110278140B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046 Interoperability with other network applications or services
    • H04L51/07 Messaging characterised by the inclusion of specific contents
    • H04L51/10 Multimedia information
    • H04L51/52 Messaging for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

One or more embodiments of the present disclosure provide a communication method and apparatus. The method may include: determining an avatar bound to a first end user; and, during communication between the first end user and a second end user, displaying the avatar to the second end user as a substitute for the actual image of the first end user.

Description

Communication method and device
Technical Field
One or more embodiments of the present disclosure relate to the field of communications technologies, and in particular, to a communication method and apparatus.
Background
In the related art, a communication application may provide users with various communication forms, such as text, voice, pictures, and video. Most of these forms only let the two communicating users view the communication content; compared with face-to-face communication, they carry a strong sense of distance and estrangement, which is not conducive to efficient communication between the two parties.
Although real-time video communication can alleviate the above problems to some extent, this form of communication is usually adopted only between users who are familiar with each other; users who are unfamiliar or complete strangers rarely adopt real-time video, which likewise hinders efficient communication between the two parties.
Disclosure of Invention
In view of the above, one or more embodiments of the present disclosure provide a communication method and apparatus.
To achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided a communication method including:
determining an avatar bound with a first end user;
and, during communication between the first end user and a second end user, displaying the avatar to the second end user as a substitute for the actual image of the first end user.
According to a second aspect of one or more embodiments of the present specification, there is provided a communication apparatus including:
a determining unit, configured to determine the avatar bound to the first end user;
and a processing unit, configured to, during communication between the first end user and a second end user, display the avatar to the second end user as a substitute for the actual image of the first end user.
Drawings
Fig. 1 is a schematic diagram of a communication system according to an exemplary embodiment.
Fig. 2 is a flowchart of a communication method according to an exemplary embodiment.
FIG. 3 is a schematic diagram of a personal home page interface of user A provided by an exemplary embodiment.
FIG. 4 is a schematic diagram of setting a virtual character according to an exemplary embodiment.
FIG. 5 is a schematic diagram of a personal home page interface of user B provided in an exemplary embodiment.
Fig. 6 is a schematic diagram of a greeting interface provided by an exemplary embodiment.
Fig. 7 is a schematic diagram of a video communication interface according to an exemplary embodiment.
Fig. 8 is a schematic structural diagram of an apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram of a communication device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Fig. 1 is a schematic diagram of a communication system according to an exemplary embodiment. As shown in fig. 1, the system may include a server 11, a network 12, a number of electronic devices such as a cell phone 13, a cell phone 14, a cell phone 15, and the like.
The server 11 may be a physical server comprising a separate host, or the server 11 may be a virtual server carried by a cluster of hosts. During operation, the server 11 may operate a server-side program of the communication application to be implemented as a corresponding communication application server.
The handsets 13-15 are just one type of electronic device that a user may use. The user may instead use electronic devices such as tablet devices, notebook computers, personal digital assistants (PDAs), and wearable devices (e.g., smart glasses, smart watches), which one or more embodiments of this specification do not limit. During operation, the electronic device may run the client-side program of a communication application to be implemented as a corresponding communication application client.
It should be noted that: an application program of a client of a communication application can be pre-installed on the electronic equipment, so that the client can be started and run on the electronic equipment; of course, when an online "client" such as HTML5 technology is employed, the client can be obtained and run without installing a corresponding application on the electronic device.
The network 12 over which the handsets 13-15 interact with the server 11 may include various types of wired or wireless networks. In one embodiment, the network 12 may include the Public Switched Telephone Network (PSTN) and the Internet. The electronic devices such as the handsets 13-15 may also interact with one another through the network 12; for example, a single-chat communication session may be established between any two electronic devices, or several electronic devices may participate in the same group-chat communication session, so that any user can send communication messages through his or her own electronic device to all other users in that session.
Fig. 2 is a flowchart of a communication method according to an exemplary embodiment. As shown in fig. 2, the method is applied to an electronic device (such as the mobile phones 13-15 shown in fig. 1) or a server (such as the server 11 shown in fig. 1), and may include the following steps:
Step 202: determine the avatar bound to the first end user.
In one embodiment, the avatar bound to the first end user includes any of the following: an avatar assigned to the first end user, for example by the communication application server according to a preset rule, where the preset rule may include random assignment, sequential assignment, and the like, which this specification does not limit; an avatar selected by the first end user from candidate avatars, for example by browsing the preset candidates through the communication client and choosing one of interest according to actual needs; an avatar formed by the first end user combining candidate virtual elements, where the candidate virtual elements may include candidate virtual facial features, candidate virtual hair styles, candidate virtual decorations, and the like; and an avatar generated by simulating the actual image of the first end user, for example through two-dimensional image processing or three-dimensional scanning of the actual image. Of course, the avatar corresponding to the first end user may also be obtained in other manners, which this specification does not limit.
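As a loose illustration (not part of the patent text), the four acquisition modes above can be sketched as follows; the `Avatar` class, the helper names, and the candidate list are all hypothetical:

```python
import random
from dataclasses import dataclass

@dataclass
class Avatar:
    source: str      # how the avatar was obtained
    descriptor: str  # e.g. a character name or composed element list

# Hypothetical preset candidates provided by the server.
CANDIDATE_AVATARS = ["monkey", "panda", "rabbit"]

def assign_by_rule(user_id: str, rule: str = "random") -> Avatar:
    """Server-side assignment under a preset rule (random or sequential)."""
    if rule == "random":
        pick = random.choice(CANDIDATE_AVATARS)
    else:  # a simple deterministic stand-in for "sequential assignment"
        pick = CANDIDATE_AVATARS[hash(user_id) % len(CANDIDATE_AVATARS)]
    return Avatar("assigned", pick)

def select_from_candidates(choice: str) -> Avatar:
    """User picks one of the preset candidate avatars."""
    if choice not in CANDIDATE_AVATARS:
        raise ValueError("not a candidate avatar")
    return Avatar("selected", choice)

def combine_elements(face: str, hair: str, decoration: str) -> Avatar:
    """User composes an avatar from candidate virtual elements."""
    return Avatar("combined", f"{face}+{hair}+{decoration}")
```

The fourth mode (simulating the user's actual image via 2D processing or 3D scanning) is omitted here, since it depends on image-processing machinery outside the scope of this sketch.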
In one embodiment, the avatar may be of any type, such as a photo image, which may be the actual image of the first end user, or a cartoon image, which may even take the form of an animal, an anthropomorphic plant, an anthropomorphic item, and the like, which this specification does not limit.
In one embodiment, the avatar may include: a two-dimensional avatar or a three-dimensional avatar, which the present description does not limit.
Step 204: during communication between the first end user and a second end user, display the avatar to the second end user as a substitute for the actual image of the first end user.
In one embodiment, the avatar is shown to the second end user during communication instead of the actual image, so that the first end user and the second end user communicate through their respective avatars. Avoiding the actual image can effectively ease the embarrassment of both parties, especially when they are unfamiliar or even strangers, producing a familiarity similar to face-to-face communication and reducing or eliminating the sense of distance and estrangement caused by purely informational communication in the related art.
In one embodiment, after the avatar corresponding to the first end user is determined, the first end user may be restricted from replacing the avatar. For example, in one case, the first end user is not allowed to replace its corresponding avatar, such that a strong association is formed between the avatar and the first end user; in another case, the number of times the first end user replaces the avatar may be limited, such as providing only three total opportunities to replace the avatar, beyond which the first end user is no longer allowed to replace his avatar; in yet another case, the first end user may be limited in frequency of changing the avatar, such as by limiting the number of times the first end user changes the avatar itself per day (or other time periods, such as weekly, monthly, etc.), such as by performing at most one change operation per day on the avatar itself.
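The count and frequency restrictions described above can be sketched as a small policy object. This is an illustrative assumption about one possible implementation, not the patent's own; the class and method names are hypothetical:

```python
import time

class AvatarReplacementPolicy:
    """Limits avatar replacement by lifetime count and per-day frequency."""

    def __init__(self, max_total: int = 3, max_per_day: int = 1):
        self.max_total = max_total
        self.max_per_day = max_per_day
        self.history = []  # timestamps of past replacements

    def may_replace(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        if len(self.history) >= self.max_total:
            return False  # lifetime quota exhausted
        day_ago = now - 86400
        recent = [t for t in self.history if t > day_ago]
        return len(recent) < self.max_per_day

    def record(self, now: float = None) -> None:
        self.history.append(time.time() if now is None else now)
```

Setting `max_total` very large while keeping `max_per_day` models the frequency-only variant; `max_per_day = 0` models the case where replacement is forbidden outright.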
In one embodiment, the first end user and the second end user communicate with each other through the established communication session, and the second end user may include an exchange object provided by the communication session to the first end user, for example, when the communication session is a single-chat communication session, the second end user may be a single exchange object corresponding to the single-chat communication session, and when the communication session is a group-chat communication session, the second end user may be a plurality of exchange objects corresponding to the group-chat communication session (of course, if there are only two users in the group, the second end user may also be a single exchange object).
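A minimal sketch of resolving the exchange objects (the "second end user") from a communication session's member list, covering both the single-chat and group-chat cases described above; the session dictionary shape is an assumption:

```python
def exchange_objects(session: dict, first_user: str) -> set:
    """Return every member of the session other than the first end user.

    For a single chat this yields one exchange object; for a group chat
    it yields all other members (or one, if the group has only two users).
    """
    return set(session["members"]) - {first_user}
```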
In an embodiment, dynamic information of the first end user's actual image may be collected, and the avatar may be driven by this dynamic information for dynamic display; that is, the avatar changes along with the actual image of the first end user, for example mirroring the first end user's expressions and actions. This makes the avatar fit the actual image more closely and helps make the communication more engaging.
In one embodiment, a motion trajectory of at least one actual keypoint on the actual image of the first end user may be captured as the dynamic information; then, according to the mapping relationship between virtual keypoints on the avatar and actual keypoints on the actual image, the corresponding virtual keypoints are driven to move along the motion trajectories of their mapped actual keypoints, realizing the dynamic display. Because the avatar and the actual image are associated (for example, the actual image is the first end user's face and the avatar is an animal's face), the actual keypoints at the eyes of the actual image can be mapped to the virtual keypoints at the eyes of the avatar, and the motion trajectories of the actual keypoints can be applied to drive the avatar, so that the avatar expresses the dynamic changes of the first end user's actual image more accurately and vividly.
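The keypoint-mapping drive described above might look like the following sketch, where the mapping table, keypoint names, and trajectory format are all hypothetical:

```python
# Hypothetical mapping: actual keypoint on the face -> virtual keypoint
# on the avatar. In practice this table would come from rigging the avatar.
KEYPOINT_MAP = {
    "left_eye": "avatar_left_eye",
    "right_eye": "avatar_right_eye",
    "mouth": "avatar_mouth",
}

def drive_avatar(trajectories: dict) -> dict:
    """Map captured motion trajectories onto the avatar's keypoints.

    trajectories: {actual_keypoint: [(x, y), ...]} captured frame by frame.
    Returns {virtual_keypoint: [(x, y), ...]} to animate on the avatar.
    """
    virtual_motion = {}
    for actual, path in trajectories.items():
        virtual = KEYPOINT_MAP.get(actual)
        if virtual is None:
            continue  # actual keypoint with no counterpart on this avatar
        virtual_motion[virtual] = list(path)  # replay the same trajectory
    return virtual_motion
```

A real system would also retarget coordinates between the two face geometries rather than replaying raw positions; that step is omitted here for brevity.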
In one embodiment, the dynamic information of the first end user's actual image may be saved; when the first end user's avatar is updated, the updated avatar can be driven by the saved dynamic information for dynamic display, without collecting the dynamic information of the actual image again. In particular, the dynamic information may have been recorded by the first end user in a specific scene that cannot be reproduced at any time; by saving it, the updated avatar can still accurately express the first end user's emotions, postures, expressions, and the like.
In an embodiment, when the first end user performs video recording on the actual image, the actual image acquired by video may be replaced by the avatar, and the obtained video data is provided to the second end user for displaying to the second end user. In other words, the video data viewed by the second end user may be non-real-time video content.
In an embodiment, when the first end user and the second end user have a video chat, the actual image captured on video may be replaced with the avatar, and the resulting video data provided to the second end user for display. In other words, the video data viewed by the second end user may be real-time content. Of course, some delay may occur in transmitting the video data, but such delay is not deliberately introduced by the first end user or the communication application, and does not affect the understanding of "real time".
In an embodiment, when the first end user and the second end user perform video chat, the collected dynamic information of the first end user may be sent to the second end user, so that the avatar corresponding to the first end user is driven by the dynamic information to perform dynamic display on the electronic device used by the second end user. In other words, the first end user corresponds to the communication application client 1, the second end user corresponds to the communication application client 2, the communication application client 1 collects the dynamic information of the actual image of the first end user, does not generate video content, but directly sends the dynamic information to the communication application client 2, and the communication application client 2 drives the virtual image adopted by the first end user to generate corresponding dynamic change according to the dynamic information, so that the data transmission amount is greatly reduced. Of course, the audio acquisition can be performed on the first end user and transmitted to the second end user in real time for playing, so that the experience effect obtained by the second end user is similar to the video call process.
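A back-of-envelope comparison of the two transmission schemes (per-frame keypoint data versus encoded video frames) illustrates the bandwidth saving claimed above; the 68-landmark count and the per-frame video size are illustrative assumptions, not figures from the patent:

```python
KEYPOINTS_PER_FRAME = 68   # a common facial-landmark count, assumed here
BYTES_PER_COORD = 4        # float32; each point has an x and a y

def keypoint_payload_bytes(frames: int) -> int:
    """Bytes needed to send raw keypoint coordinates for `frames` frames."""
    return frames * KEYPOINTS_PER_FRAME * 2 * BYTES_PER_COORD

def video_payload_bytes(frames: int, bytes_per_frame: int = 20_000) -> int:
    """Bytes for the same frames as compressed video (rough assumption)."""
    return frames * bytes_per_frame

# One second at 30 fps: keypoints cost 30 * 68 * 2 * 4 = 16,320 bytes,
# versus roughly 600,000 bytes of video under these assumptions.
```

Under any plausible parameter choice the keypoint stream is orders of magnitude smaller, which is why the client can send dynamic information and let the receiving side drive the avatar locally.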
In an embodiment, when no friend relationship has been established between the first end user and the second end user, the actual image of the first end user may be forcibly replaced with the avatar in a greeting message sent by the first end user to the second end user. Even without a friend relationship, the first end user is allowed to send greeting messages to the second end user; such messages differ from regular communication messages to some extent, for example in message-length limits, message-type limits, and sending-frequency limits, so as to avoid disturbance between unfamiliar users. When no friend relationship exists, the two users are often unfamiliar or complete strangers; forcibly replacing the actual image with the avatar eases the awkwardness of initial communication, encourages users' desire to socialize, and reduces anxiety in the social process.
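The forced-substitution rule for non-friends can be sketched as follows; the `prepare_greeting` helper, the message fields, and the friendship representation are hypothetical:

```python
def prepare_greeting(sender: str, recipient: str, text: str,
                     friends: set, use_actual_image: bool = False) -> dict:
    """Build a greeting message, forcing the avatar when the two users
    are not yet friends (the actual image is allowed only between friends)."""
    is_friend = frozenset((sender, recipient)) in friends
    return {
        "from": sender,
        "to": recipient,
        "text": text,
        # forced substitution: non-friends always see the avatar
        "image": "actual" if (use_actual_image and is_friend) else "avatar",
    }
```

Length, type, and frequency limits on greeting messages would be enforced at the same point, before the message is handed to the transport layer.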
For ease of understanding, the technical solutions of one or more embodiments of this specification are described by taking the communication application "WeChat" as an example. Suppose a WeChat server-side program runs on the server 11, a WeChat client 1 corresponding to user A runs on the handset 13, and a WeChat client 2 corresponding to user B runs on the handset 14; user A and user B can then communicate through the WeChat client 1 and the WeChat client 2.
FIG. 3 is a schematic diagram of a personal home page interface of user A provided by an exemplary embodiment. As shown in FIG. 3, the WeChat client 1 can present a personal home page interface 30 to user A through the handset 13. The personal home page interface 30 shows user A's profile information, such as the name "user A", the signature "a good day!", the gender "female", and so on. Illustratively, the personal home page interface 30 includes user A's avatars, such as the character avatar 301 and the personal avatar 302 shown in FIG. 3.
The character avatar 301 corresponds to the virtual character selected by user a in the messaging application "WeChat". For example, when the virtual character selected by the user a is "monkey", the character avatar 301 may adopt an avatar of the virtual character "monkey". The personal avatar 302 may be any other content set by the user a, such as a real avatar photo taken by the user a in fig. 3, and may be a scene photo, a picture downloaded on the network, and the like, which is not limited in this specification.
Based on the virtual character selected by user A, the personal home page interface 30 may show an image of the corresponding avatar, such as the image 303 shown in FIG. 3. In one embodiment, the avatars corresponding to the virtual characters adopted in the "WeChat" communication application are three-dimensional, so that a more vivid effect can be achieved when the avatar is driven to change dynamically, as described below; of course, in other embodiments, two-dimensional images or other forms may be used, which this specification does not limit.
The user a may add dynamic information to the image 303 so that the "monkey" avatar expressed by the image 303 may be dynamically changed based on the dynamic information. For example, the wechat client 1 may identify the actual facial feature points of the user a through a camera on the mobile phone 13, and collect the change tracks corresponding to the actual facial feature points, respectively, as the dynamic information; further, there is a corresponding virtual facial feature point in the "monkey" face in the image 303, and there is a mapping relationship between the virtual facial feature point and the above-mentioned actual facial feature point, so that the above-mentioned dynamic information can be applied to the virtual facial feature point according to the mapping relationship, so that the image 303 can dynamically change accordingly, and imitate the change of the expression of the user a. Meanwhile, in the process of collecting dynamic information, the audio data sent by the user A can be synchronously collected, so that the audio data is played in the process of generating dynamic change in the image 303, and the feeling and interesting experience of the virtual image in speaking can be generated.
FIG. 4 is a schematic diagram of setting a virtual character according to an exemplary embodiment. User A can select the virtual character he or she wishes to adopt in the virtual character selection interface 40 shown in FIG. 4; during subsequent use, user A may also enter the virtual character selection interface 40 by triggering a setting option, such as the setting option 304 shown in FIG. 3, and change the virtual character in use. Certain limitations may be placed on user A's replacement of the virtual character, so as to strengthen the association between user A and the virtual character. For example, the total number of replacements may be limited to a certain value (for example, 3 times), beyond which user A is no longer allowed to replace the virtual character; or the replacement frequency may be limited, for example to no more than once a day (or once a week, three times a month, etc.), beyond which replacement is likewise not allowed.
In an embodiment, the wechat client 1 or the wechat server may store the dynamic information and the audio data, etc. formed by the user a before changing the virtual role, so that the user a does not need to re-collect the changed virtual role and can directly apply the stored dynamic information and the stored audio data to the changed virtual role, thereby driving the corresponding virtual image to generate dynamic change and "talk".
User A can browse other users' personal home pages; similarly, user A's personal home page interface 30 can be browsed by other users. Taking user A browsing as the example: in one case, user A can browse the personal home pages of all users, regardless of whether a friend relationship has been established; in another case, user A can only browse the home pages of friends, not those of strangers; and in yet another case, user A can browse strangers' home pages, but part of the private information in them is omitted.
For example, FIG. 5 is a schematic diagram of a personal home page interface of user B according to an exemplary embodiment. Assume user A is browsing through the WeChat client 1, and the WeChat client 1 presents on the handset 13 a personal home page interface 50 as shown in FIG. 5, belonging to user B. In the personal home page interface 50, user A can view the avatar corresponding to the virtual character "panda" selected by user B, i.e. the image 501 shown in FIG. 5. Further, user A may trigger the image 501, so that it changes dynamically according to the dynamic information configured in advance by user B, together with the audio data user B configured in advance, as if the virtual character "panda" were speaking to user A. User A can thus form a more vivid impression of user B, rather than relying only on static information such as the name, signature, gender, and friend impressions shown in the personal home page interface 50.
When user a has not established a friend relationship with user B, the personal home interface 50 may include an invoke option 502 as shown in fig. 5, such that user a may send a corresponding invoke message to user B by triggering the invoke option 502. In an embodiment, the process of generating the hello message may comprise: the user A faces the camera of the mobile phone 13 to the face of the user A, so that the WeChat client 1 can acquire dynamic information of the face of the user A through the camera, and meanwhile, the WeChat client 1 acquires audio sent by the user A through a microphone on the mobile phone 13, so that a virtual character ' monkey ' corresponding to the user A is driven according to the dynamic information and the audio, a video message of the virtual character ' monkey ' speaking ' is generated to serve as the calling and calling message, and the calling and calling message is sent to the user B.
Figure 6 is a schematic diagram of a call-in interface provided by an exemplary embodiment. The WeChat client 2 can present a call-out interface 60 as shown in FIG. 6 to the user B through the mobile phone 14, the call-out interface 60 includes a call-out message 601 sent by the user A to the user B, the WeChat client 2 can play automatically or under the trigger of the user B, so that the user B can see the dynamic change of the virtual character 'monkey' corresponding to the user A to generate 'talk', and can hear the audio sent by the user A; of course, if the virtual character is to be attached, the audio emitted by the user a may be subjected to sound change processing, so that the sound changed audio is more similar to the sound emitted by the "monkey".
Accordingly, user B may trigger the "reply" option in the greeting interface 60, so that the WeChat client 2 generates a reply message 602 from user B's expressions, voice, and the like, through a process similar to the WeChat client 1's generation of the greeting message 601, and sends the reply message 602 to user A, who can view it through the WeChat client 1.
Fig. 7 is a schematic diagram of a video communication interface according to an exemplary embodiment. As shown in fig. 7, the WeChat client 2 may present a video communication interface 70 to user B through the handset 14. The video communication interface 70 includes a larger first display area 71, used to display the avatar 701 corresponding to user A, and a smaller second display area 72, used to display the avatar 702 corresponding to user B. During video communication, the avatar 701 and the avatar 702 each change dynamically in real time or near real time, corresponding to the expressions and actions actually made by user A and user B, who can also talk by voice. The process thus provides an experience similar to a video call in the related art, while the avatars 701 and 702 stand in for the actual images of user A and user B, easing the users' embarrassment and making the communication more entertaining.
The dynamic change process of the avatar 701 can be obtained by generating corresponding video data for the wechat client 1 and transmitting the video data to the wechat client 2 for playing; or, the wechat client 1 may transmit only the dynamic information to the wechat client 2, and the wechat client 2 actively uses the dynamic information to drive the avatar 701 corresponding to the user a to generate corresponding dynamic changes; alternatively, other approaches may be used and the description is not intended to be limiting.
FIG. 8 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 8, at the hardware level, the apparatus includes a processor 802, an internal bus 804, a network interface 806, a memory 808, and a non-volatile memory 810, but may also include hardware required for other services. The processor 802 reads a corresponding computer program from the nonvolatile memory 810 into the memory 808 and then operates to form a communication device on a logical level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 9, in a software implementation, the communication device may include:
a determining unit 91, configured to determine an avatar bound to a first end user;
and a processing unit 92, configured to present the avatar to a second end user as a substitute for the actual image of the first end user during communication between the first end user and the second end user.
Optionally, the avatar bound to the first end user includes any one of the following:
an avatar assigned to the first end user;
an avatar selected by the first end user from candidate avatars;
an avatar assembled by the first end user from candidate virtual elements;
and an avatar generated by simulating the actual image of the first end user.
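The four binding options listed above can be sketched as a simple selection routine. This is only an illustration; the enum, function, and field names are assumptions, not taken from the patent.

```python
from enum import Enum, auto

class BindingMode(Enum):
    ASSIGNED = auto()    # avatar assigned to the user
    SELECTED = auto()    # avatar chosen from candidate avatars
    ASSEMBLED = auto()   # avatar combined from candidate virtual elements
    SIMULATED = auto()   # avatar generated by simulating the actual image

def determine_bound_avatar(user_profile: dict) -> str:
    """Return the avatar identifier bound to the user, by binding mode."""
    mode = user_profile["binding_mode"]
    if mode is BindingMode.ASSIGNED:
        return user_profile["assigned_avatar"]
    if mode is BindingMode.SELECTED:
        return user_profile["selected_avatar"]
    if mode is BindingMode.ASSEMBLED:
        # Combine the chosen virtual elements into one avatar identifier.
        return "+".join(user_profile["elements"])
    return f"simulated:{user_profile['user_id']}"

profile = {"binding_mode": BindingMode.ASSEMBLED,
           "elements": ["hair_03", "eyes_12", "mouth_07"]}
result = determine_bound_avatar(profile)
```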
Optionally, the processing unit 92 is specifically configured to:
acquiring dynamic information of the actual image of the first end user, and driving the avatar to be displayed dynamically according to the dynamic information.
Optionally, the processing unit 92 is specifically configured to:
capturing a motion trajectory of at least one actual key point on the actual image of the first end user as the dynamic information;
and driving, according to a mapping relationship between virtual key points on the avatar and actual key points on the actual image, the corresponding virtual key points to move along the motion trajectories of the corresponding actual key points, so as to achieve dynamic display.
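A minimal sketch of the key-point-driven display just described, assuming a fixed one-to-one mapping between actual and virtual key points. All names here are hypothetical; the patent does not prescribe a data layout.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def drive_virtual_keypoints(
        trajectory: Dict[int, List[Point]],
        mapping: Dict[int, int]) -> Dict[int, List[Point]]:
    """Map each actual key point's motion trajectory onto its virtual key point."""
    virtual_trajectory: Dict[int, List[Point]] = {}
    for actual_id, points in trajectory.items():
        virtual_id = mapping[actual_id]
        virtual_trajectory[virtual_id] = list(points)  # replay the same motion
    return virtual_trajectory

# Actual key point 0 (e.g. a mouth corner) moves right; it maps to virtual point 10.
captured = {0: [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]}
driven = drive_virtual_keypoints(captured, {0: 10})
```

In practice the captured coordinates would also need scaling into the avatar's coordinate space; the identity replay here just shows the mapping step.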
Optionally, the method further includes:
a saving unit 93, configured to save the dynamic information of the actual image of the first end user, so that when the avatar of the first end user is updated, the updated avatar is driven to be displayed dynamically according to the saved dynamic information.
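The saving unit's behavior can be sketched as follows: recorded dynamic information is retained so that a newly updated avatar can be driven by the same motion. A hypothetical sketch; the class and method names are assumptions.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

class DynamicInfoStore:
    """Keeps saved dynamic information so an updated avatar can replay it."""

    def __init__(self) -> None:
        self._saved: List[Dict[int, Point]] = []

    def save(self, frame: Dict[int, Point]) -> None:
        """Save one frame of key-point positions."""
        self._saved.append(frame)

    def replay_on(self, avatar_name: str) -> List[str]:
        """Drive the (possibly updated) avatar with every saved frame."""
        return [f"{avatar_name} moves to {frame}" for frame in self._saved]

store = DynamicInfoStore()
store.save({0: (0.1, 0.2)})
old_frames = store.replay_on("avatar_v1")
new_frames = store.replay_on("avatar_v2")  # updated avatar, same saved motion
```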
Optionally, the processing unit 92 is specifically configured to:
when video of the actual image of the first end user is recorded, replacing the actual image captured in the video with the avatar, and providing the resulting video data to the second end user for display;
or, when the first end user and the second end user have a video chat, replacing the actual image captured in the video with the avatar, and providing the resulting video data to the second end user for display;
or, when the first end user and the second end user have a video chat, sending the collected dynamic information of the first end user to the second end user, so that the dynamic information drives the avatar corresponding to the first end user to be displayed dynamically on the electronic device used by the second end user.
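The three presentation modes above differ only in what is transmitted to the second end user; this dispatch sketch (all names hypothetical) makes the distinction explicit.

```python
from enum import Enum, auto

class Mode(Enum):
    RECORDED_VIDEO = auto()  # replace actual image in a recording, send video
    LIVE_VIDEO = auto()      # replace actual image in live chat, send video
    DYNAMIC_INFO = auto()    # send only key-point dynamic information

def payload_for(mode: Mode, rendered_video: bytes, dynamic_info: bytes) -> bytes:
    """Select the data sent to the second end user for each mode."""
    if mode in (Mode.RECORDED_VIDEO, Mode.LIVE_VIDEO):
        return rendered_video  # avatar already rendered into the frames
    return dynamic_info        # receiver drives the avatar locally

sent = payload_for(Mode.DYNAMIC_INFO, b"video-frames", b"keypoints")
```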
Optionally, the avatar includes: a two-dimensional avatar or a three-dimensional avatar.
Optionally, the processing unit 92 is specifically configured to:
when a friend relationship between the first end user and the second end user has not been established, forcibly replacing the actual image of the first end user with the avatar in a call message sent by the first end user to the second end user.
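The forced-replacement rule above amounts to a gate on the friend relationship: strangers always see the avatar, regardless of the sender's preference. A minimal sketch with hypothetical names:

```python
def outgoing_call_image(sender_avatar: str, sender_actual_image: str,
                        is_friend: bool, prefers_actual: bool) -> str:
    """Choose the image shown in a call message to the recipient.

    When no friend relationship exists, the avatar is used regardless of the
    sender's preference; otherwise the sender's preference is honored.
    """
    if not is_friend:
        return sender_avatar  # forced replacement
    return sender_actual_image if prefers_actual else sender_avatar

# A stranger always sees the avatar, even if the sender prefers the actual image.
shown = outgoing_call_image("avatar_A", "photo_A",
                            is_friend=False, prefers_actual=True)
```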
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
The above description is intended only to be exemplary of the one or more embodiments of the present disclosure, and should not be taken as limiting the one or more embodiments of the present disclosure, as any modifications, equivalents, improvements, etc. that come within the spirit and scope of the one or more embodiments of the present disclosure are intended to be included within the scope of the one or more embodiments of the present disclosure.

Claims (14)

1. A method of communication, comprising:
determining an avatar bound with a first end user;
presenting, during communication between the first end user and a second end user, the avatar to the second end user as a substitute for an actual image of the first end user, including: when a friend relationship between the first end user and the second end user has not been established, forcibly replacing the actual image of the first end user with the avatar in a call message sent by the first end user to the second end user.
2. The method of claim 1, wherein the avatar bound to the first end user comprises any of:
an avatar assigned to the first end user;
an avatar selected by the first end user from candidate avatars;
an avatar assembled by the first end user from candidate virtual elements;
and an avatar generated by simulating the actual image of the first end user.
3. The method of claim 1, wherein presenting the avatar to the second end user as a substitute for the actual image of the first end user comprises:
acquiring dynamic information of the actual image of the first end user, and driving the avatar to be displayed dynamically according to the dynamic information.
4. The method of claim 3, wherein presenting the avatar to the second end user as a substitute for the actual image of the first end user comprises:
capturing a motion trajectory of at least one actual key point on the actual image of the first end user as the dynamic information;
and driving, according to a mapping relationship between virtual key points on the avatar and actual key points on the actual image, the corresponding virtual key points to move along the motion trajectories of the corresponding actual key points, so as to achieve dynamic display.
5. The method of claim 3, further comprising:
storing the dynamic information of the actual image of the first end user, so that when the avatar of the first end user is updated, the updated avatar is driven to be displayed dynamically according to the stored dynamic information.
6. The method of claim 1, wherein presenting the avatar to the second end user as a substitute for the actual image of the first end user comprises:
when video of the actual image of the first end user is recorded, replacing the actual image captured in the video with the avatar, and providing the resulting video data to the second end user for display;
or, when the first end user and the second end user have a video chat, replacing the actual image captured in the video with the avatar, and providing the resulting video data to the second end user for display;
or, when the first end user and the second end user have a video chat, sending the collected dynamic information of the first end user to the second end user, so that the dynamic information drives the avatar corresponding to the first end user to be displayed dynamically on the electronic device used by the second end user.
7. The method of claim 1, wherein the avatar comprises: a two-dimensional avatar or a three-dimensional avatar.
8. A communication device, comprising:
a determining unit, configured to determine an avatar bound to a first end user;
a processing unit, configured to present, during communication between the first end user and a second end user, the avatar to the second end user as a substitute for an actual image of the first end user, including: when a friend relationship between the first end user and the second end user has not been established, forcibly replacing the actual image of the first end user with the avatar in a call message sent by the first end user to the second end user.
9. The apparatus of claim 8, wherein the avatar bound to the first end user comprises any of:
an avatar assigned to the first end user;
an avatar selected by the first end user from candidate avatars;
an avatar assembled by the first end user from candidate virtual elements;
and an avatar generated by simulating the actual image of the first end user.
10. The apparatus according to claim 8, wherein the processing unit is specifically configured to:
acquiring dynamic information of the actual image of the first end user, and driving the avatar to be displayed dynamically according to the dynamic information.
11. The apparatus according to claim 10, wherein the processing unit is specifically configured to:
capturing a motion trajectory of at least one actual key point on the actual image of the first end user as the dynamic information;
and driving, according to a mapping relationship between virtual key points on the avatar and actual key points on the actual image, the corresponding virtual key points to move along the motion trajectories of the corresponding actual key points, so as to achieve dynamic display.
12. The apparatus of claim 10, further comprising:
a storage unit, configured to store the dynamic information of the actual image of the first end user, so that when the avatar of the first end user is updated, the updated avatar is driven to be displayed dynamically according to the stored dynamic information.
13. The apparatus according to claim 8, wherein the processing unit is specifically configured to:
when video of the actual image of the first end user is recorded, replacing the actual image captured in the video with the avatar, and providing the resulting video data to the second end user for display;
or, when the first end user and the second end user have a video chat, replacing the actual image captured in the video with the avatar, and providing the resulting video data to the second end user for display;
or, when the first end user and the second end user have a video chat, sending the collected dynamic information of the first end user to the second end user, so that the dynamic information drives the avatar corresponding to the first end user to be displayed dynamically on the electronic device used by the second end user.
14. The apparatus of claim 8, wherein the avatar comprises: a two-dimensional avatar or a three-dimensional avatar.
CN201810208534.1A 2018-03-14 2018-03-14 Communication method and device Active CN110278140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810208534.1A CN110278140B (en) 2018-03-14 2018-03-14 Communication method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810208534.1A CN110278140B (en) 2018-03-14 2018-03-14 Communication method and device

Publications (2)

Publication Number Publication Date
CN110278140A CN110278140A (en) 2019-09-24
CN110278140B true CN110278140B (en) 2022-05-24

Family

ID=67958390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810208534.1A Active CN110278140B (en) 2018-03-14 2018-03-14 Communication method and device

Country Status (1)

Country Link
CN (1) CN110278140B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110784676B (en) * 2019-10-28 2023-10-03 深圳传音控股股份有限公司 Data processing method, terminal device and computer readable storage medium
CN112215929A (en) * 2020-10-10 2021-01-12 珠海格力电器股份有限公司 Virtual social data processing method, device and system
CN113395597A (en) * 2020-10-26 2021-09-14 腾讯科技(深圳)有限公司 Video communication processing method, device and readable storage medium
CN113766168A (en) * 2021-05-31 2021-12-07 腾讯科技(深圳)有限公司 Interactive processing method, device, terminal and medium
CN115499612A (en) * 2021-06-18 2022-12-20 海信集团控股股份有限公司 Video communication method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217511A (en) * 2008-01-03 2008-07-09 腾讯科技(深圳)有限公司 A personal image management system and management method
CN101364957A (en) * 2008-10-07 2009-02-11 腾讯科技(深圳)有限公司 System and method for managing virtual image based on instant communication platform
EP2053806B1 (en) * 2007-10-24 2010-12-29 Miyowa Instant messaging method and system for mobile terminals equipped with a virtual presence server configured to manage various address books for the same user
CN103368816A (en) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 Instant communication method based on virtual character and system
CN103368929A (en) * 2012-04-11 2013-10-23 腾讯科技(深圳)有限公司 Video chatting method and system
CN105554429A (en) * 2015-11-19 2016-05-04 掌赢信息科技(上海)有限公司 Video conversation display method and video conversation equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3924583B2 (en) * 2004-02-03 2007-06-06 松下電器産業株式会社 User adaptive apparatus and control method therefor
CN101122972A (en) * 2007-09-01 2008-02-13 腾讯科技(深圳)有限公司 Virtual pet chatting system, method and virtual pet server for answering question
KR20170035608A (en) * 2015-09-23 2017-03-31 삼성전자주식회사 Videotelephony System, Image Display Apparatus, Driving Method of Image Display Apparatus, Method for Generation Realistic Image and Computer Readable Recording Medium
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role


Also Published As

Publication number Publication date
CN110278140A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN110278140B (en) Communication method and device
US10140675B2 (en) Image grid with selectively prominent images
JP6858865B2 (en) Automatic suggestions for sharing images
JP7391913B2 (en) Parsing electronic conversations for presentation in alternative interfaces
EP3095091B1 (en) Method and apparatus of processing expression information in instant communication
Rivière Mobile camera phones: a new form of “being together” in daily interpersonal communication
CN109691054A (en) Animation user identifier
CN112601100A (en) Live broadcast interaction method, device, equipment and medium
CN107085495B (en) Information display method, electronic equipment and storage medium
JP2015504642A (en) Video message communication
JP2022525272A (en) Image display with selective motion drawing
CN104303206A (en) Animation in threaded conversations
JP2021504803A (en) Image selection proposal
CN110674398A (en) Virtual character interaction method and device, terminal equipment and storage medium
CN114430494B (en) Interface display method, device, equipment and storage medium
CN116349214A (en) Synchronous audio and text generation
Surale et al. Arcall: Real-time ar communication using smartphones and smartglasses
CN116457814A (en) Context surfacing of collections
KR20160013536A (en) System and method for providing advertisement through voluntary production and spread of viral contents
CN114430506B (en) Virtual action processing method and device, storage medium and electronic equipment
CN118451696A (en) Multi-layer connection in a messaging system
TW201915721A (en) Information display method and device
US11206374B1 (en) Low-bandwidth avatar animation
WO2021208330A1 (en) Method and apparatus for generating expression for game character
CN114500434A (en) Method and device for aggregating communication messages

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40013100

Country of ref document: HK

GR01 Patent grant