CN114363279A - Interaction method and device based on virtual image and storage medium

Interaction method and device based on virtual image and storage medium

Info

Publication number: CN114363279A
Application number: CN202111522686.7A
Authority: CN (China)
Prior art keywords: image sequence, contact, image, online, user
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 王楚云
Current Assignee: Beijing Tongshi Future Network Technology Co., Ltd.
Original Assignee: Beijing Tongshi Future Network Technology Co., Ltd.
Application filed by Beijing Tongshi Future Network Technology Co., Ltd.
Priority to CN202111522686.7A
Publication of CN114363279A
Pending legal-status Critical Current

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The application discloses an interaction method, an interaction device, and a storage medium based on virtual images. The method comprises the following steps: logging in to a remote interaction platform through an account preset by a user; displaying an avatar area for each online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; and displaying, in the avatar area of the online contact, a first image sequence corresponding to that contact, so that the real-time situation of the online contact is shown in real time through a dynamic virtual image of the contact in the first image sequence. The first image sequence corresponds to, and is generated synchronously with, a second image sequence captured in real time by a second terminal device through which the online contact logs in to the interaction platform, and the dynamic virtual image is generated from the image of the online contact in the second image sequence.

Description

Interaction method and device based on virtual image and storage medium
Technical Field
The present application relates to the field of communication interaction technologies, and in particular, to an interaction method, an interaction device, and a storage medium based on a virtual image.
Background
When users share the same interaction environment, they are in direct contact with one another, so they can communicate in real time and sense one another at any time. If a user wants to talk to someone, the user can simply look up and start a conversation, or walk over to the target person and speak with them directly. However, as remote interaction scenarios become more common (for example, more and more people work from home or in distributed locations), an increasing number of users interact from different interaction environments. Because these users are spatially separated, they cannot sense one another and communicate instantly at any time as they would in the same interaction environment.
Although existing communication methods can provide instant messaging services (such as WeChat, QQ, WeChat Work, and DingTalk), these services cannot offer an environment that shows each contact's real situation in real time, so users in different places cannot sense one another and communicate instantly the way users in the same interaction environment can. If a user wants to know the actual situation of a particular contact, the user has to trigger the corresponding video communication component and can only hold a video call or video conference if the other party agrees, which creates a sense of distance between the user and the contact.
In view of the above technical problem that existing communication methods cannot enable users in different environments to sense one another as if they were physically present together, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the present disclosure provide an interaction method, an interaction device, and a storage medium based on virtual images, so as to at least solve the technical problem that existing communication methods cannot enable users in different environments to sense one another as if they were physically present together.
According to an aspect of the embodiments of the present disclosure, there is provided a virtual-image-based interaction method for a first terminal device of a user of an interaction platform, comprising: logging in to the remote interaction platform through an account preset by the user; displaying an avatar area for each online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; and displaying, in the avatar area of the online contact, a first image sequence corresponding to that contact, so as to show the real-time situation of the online contact in real time through a dynamic virtual image of the contact in the first image sequence, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured in real time by a second terminal device through which the online contact logs in to the interaction platform, and the dynamic virtual image is generated from the image of the online contact in the second image sequence.
According to another aspect of the embodiments of the present disclosure, there is also provided a virtual-image-based interaction method for an interaction platform, comprising: after a first terminal device of a user of the interaction platform logs in to the interaction platform through a preset account, determining the online contacts in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; receiving a first image sequence corresponding to an online contact from a second terminal device through which the online contact logs in to the interaction platform, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured by the second terminal device in real time, the first image sequence comprises a dynamic virtual image of the online contact, and the dynamic virtual image is generated from the image of the online contact in the second image sequence; and sending the first image sequence to the first terminal device, so that the real-time situation of the online contact is displayed in real time at the first terminal device.
According to another aspect of the embodiments of the present disclosure, there is also provided a storage medium comprising a stored program, wherein, when the program runs, a processor performs any one of the methods described above.
According to another aspect of the embodiments of the present disclosure, there is also provided a virtual-image-based interaction device, comprising: a login module, configured to log in to a remote interaction platform through an account preset by a user; an avatar area display module, configured to display an avatar area for each online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; and an image sequence display module, configured to display, in the avatar area of the online contact, a first image sequence corresponding to that contact, so that the real-time situation of the online contact is shown in real time through a dynamic virtual image of the contact in the first image sequence, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured in real time by a second terminal device through which the online contact logs in to the interaction platform, and the dynamic virtual image is generated from the image of the online contact in the second image sequence.
According to another aspect of the embodiments of the present disclosure, there is also provided a virtual-image-based interaction device, comprising: a determining module, configured to determine, after a first terminal device of a user of the interaction platform logs in to the interaction platform through a preset account, the online contacts in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; a first receiving module, configured to receive a first image sequence corresponding to an online contact from a second terminal device through which the online contact logs in to the interaction platform, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured by the second terminal device in real time, the first image sequence comprises a dynamic virtual image of the online contact, and the dynamic virtual image is generated from the image of the online contact in the second image sequence; and a first image sequence sending module, configured to send the first image sequence to the first terminal device, so that the real-time situation of the online contact is displayed in real time at the first terminal device.
According to another aspect of the embodiments of the present disclosure, there is also provided a virtual-image-based interaction apparatus for a first terminal device of a user of an interaction platform, comprising: a first processor; and a first memory coupled to the first processor and configured to provide the first processor with instructions for the following processing steps: logging in to the remote interaction platform through an account preset by the user; displaying an avatar area for each online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; and displaying, in the avatar area of the online contact, a first image sequence corresponding to that contact, so as to show the real-time situation of the online contact in real time through a dynamic virtual image of the contact in the first image sequence, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured in real time by a second terminal device through which the online contact logs in to the interaction platform, and the dynamic virtual image is generated from the image of the online contact in the second image sequence.
According to another aspect of the embodiments of the present disclosure, there is also provided a virtual-image-based interaction apparatus, comprising: a second processor; and a second memory coupled to the second processor and configured to provide the second processor with instructions for the following processing steps: after a first terminal device of a user of the interaction platform logs in to the interaction platform through a preset account, determining the online contacts in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; receiving a first image sequence corresponding to an online contact from a second terminal device through which the online contact logs in to the interaction platform, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured by the second terminal device in real time, the first image sequence comprises a dynamic virtual image of the online contact, and the dynamic virtual image is generated from the image of the online contact in the second image sequence; and sending the first image sequence to the first terminal device, so that the real-time situation of the online contact is displayed in real time at the first terminal device.
Therefore, according to the embodiments of the disclosure, after a user logs in to the interaction platform through an account on a terminal device, the terminal device responds to the login operation by displaying, on its main interface, the avatar areas of the online contacts in the account's contact list, so that the user can sense other users as if they shared the same environment. Because the interaction platform continuously receives, in real time, the image sequences containing the dynamic virtual images of its logged-in users, once the user logs in, the platform determines the user's online contacts and forwards to the user's terminal device the first image sequence, received in real time, that contains the dynamic virtual image of each online contact. The user's terminal device therefore displays, in each online contact's avatar area, that contact's dynamic virtual image. Since the dynamic virtual image is generated synchronously from the second image sequence captured in real time by the terminal device through which the online contact logs in to the interaction platform, it reflects the contact's real-time situation. By displaying, in the avatar area of each online contact, the image sequence containing that contact's dynamic virtual image, the terminal device shows the real-time situation of the user's online contacts in real time, so every user can continuously view the real-time situation of their online contacts, and users in different environments can sense one another as if they were in the same environment. A user only needs to log in to the interaction platform to view the real-time situation of each online contact through the corresponding avatar area. This achieves the technical effect of letting users in different environments sense one another as if they were physically present together, and thereby solves the technical problem that existing communication methods cannot enable users in different environments to sense one another in that way and communicate instantly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
fig. 1 is a hardware block diagram of a computing device for implementing the method according to embodiment 1 of the present disclosure;
fig. 2a is a schematic diagram of an interactive system according to embodiment 1 of the present disclosure;
fig. 2b is a schematic diagram of a hardware structure of a terminal device in the interactive system according to embodiment 1 of the present disclosure;
fig. 3 is a schematic flow chart of an interaction method according to the first aspect of embodiment 1 of the present disclosure;
fig. 4 is a schematic flow chart of an interaction method according to a second aspect of embodiment 1 of the present disclosure;
fig. 5 is a schematic flow chart of an interaction method according to a third aspect of embodiment 1 of the present disclosure;
fig. 6a is a schematic diagram of an interaction process according to embodiment 1 of the present disclosure;
fig. 6b is a schematic diagram of another interaction process according to embodiment 1 of the present disclosure;
fig. 7 is a schematic diagram of an interaction device according to a first aspect of embodiment 2 of the present disclosure;
fig. 8 is a schematic diagram of an interaction device according to a second aspect of embodiment 2 of the present disclosure;
fig. 9 is a schematic diagram of an interaction device according to a first aspect of embodiment 3 of the present disclosure; and
fig. 10 is a schematic diagram of an interaction device according to a second aspect of embodiment 3 of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments. It is to be understood that the described embodiments are merely some, rather than all, of the embodiments of the present disclosure. All other embodiments that a person skilled in the art can derive from the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to the present embodiment, an embodiment of a virtual-image-based interaction method is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one shown.
The method embodiments provided by the present embodiment may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Fig. 1 shows a hardware block diagram of a computing device for implementing the virtual-image-based interaction method. As shown in fig. 1, the computing device may include one or more processors (which may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory for storing data, and a transmission device for communication functions. In addition, the computing device may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computing device may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuitry may be a single, stand-alone processing module, or may be incorporated, in whole or in part, into any of the other elements of the computing device. As referred to in the embodiments of the disclosure, the data processing circuitry serves as a form of processor control (for example, selection of a variable-resistance termination path connected to an interface).
The memory may be configured to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the virtual image-based interaction method in the embodiments of the present disclosure, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the virtual image-based interaction method of the application software. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory may further include memory located remotely from the processor, which may be connected to the computing device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used for receiving or transmitting data via a network. Specific examples of such networks may include wireless networks provided by communication providers of the computing devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device.
It should be noted here that, in some alternative embodiments, the computing device shown in fig. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should also be noted that fig. 1 is only one specific example and is intended to illustrate the types of components that may be present in such a computing device.
Fig. 2a is a schematic diagram of an interactive system according to the present embodiment, which may be, for example, a remote (distributed) office system. Referring to fig. 2a, the interactive system generally comprises: a plurality of terminal devices 100a to 100c, a plurality of office computers 200a to 200c, and an interaction platform 300. The terminal devices 100a to 100c are communicatively connected to the interaction platform 300, and a user may carry out communication and interaction through his or her terminal device in the interaction environment provided by the interactive system. The office computers 200a to 200c are communicatively connected to the terminal devices 100a to 100c, and users can work through the office computers 200a to 200c. The interaction platform 300 can be built from a plurality of servers and provides instant messaging services for users. Through this interactive system, users a to c do not need to be in the same place, so working from different locations is realized.
In addition, fig. 2b is a schematic diagram of the hardware structure of a terminal device in the interactive system. A terminal device (e.g., terminal device 100a) in the interactive system includes a display 100, and an image capture device 121, an audio capture device 122, and an audio playback device 123 disposed on the display 100. After the terminal device 100a logs in to the interaction platform 300, the main interface shown on the display 100 includes three display areas: a first display area 111, a second display area 112, and a third display area 113. The first display area 111 comprises an avatar area for displaying the image sequence of the device's own user captured by the terminal device 100a. The second display area 112 includes a plurality of avatar areas for displaying the image sequences corresponding to the respective contacts, where the image sequence corresponding to a contact may include, for example, a dynamic virtual image of that contact (e.g., a cartoon image or a dynamic 3D image). The third display area 113 includes a plurality of function keys that provide the user with the functions required for various interactions.
The above-described hardware configuration can be applied to the plurality of terminal devices 100a to 100c, the plurality of office computers 200a to 200c, and the interactive platform 300 in the system.
Under the operating environment, according to the first aspect of the present embodiment, a virtual image-based interaction method is provided, which is implemented by a first terminal device (for example, the terminal device 100a) for a user of the interaction platform 300 shown in fig. 2 a. Fig. 3 shows a flow diagram of the method, which, with reference to fig. 3, comprises:
S302: logging in to a remote interaction platform through an account preset by the user;
S304: displaying an avatar area for each online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform; and
S306: displaying, in the avatar area of the online contact, a first image sequence corresponding to that contact, so as to show the real-time situation of the online contact in real time through a dynamic virtual image of the contact in the first image sequence, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured in real time by a second terminal device through which the online contact logs in to the interaction platform, and the dynamic virtual image is generated from the image of the online contact in the second image sequence.
Referring to figs. 2a, 2b, and 3, the terminal device 100a is, for example, the terminal device of user a (i.e., the first terminal device), and user a can communicate and interact through the terminal device 100a in the interaction environment provided by the interactive system. User a can register an account on the interaction platform 300 and establish various contacts on the interaction platform 300 through that account. Therefore, whenever user a needs to communicate and interact with others, user a can log in to the remote interaction platform 300 through the account using the terminal device 100a. In this application scenario, the terminal device 100a displays a main interface in response to user a's operation of logging in to the interaction platform 300, and displays on the main interface the avatar areas of the online contacts in the account's contact list. The contact list includes contact information of the contacts established on the interaction platform 300 through the account, and the online contacts are the contacts in the contact list who have already logged in to the interaction platform. For example, referring to fig. 2a, the contact list established under user a's account on the interaction platform 300 includes the contact information of user b and user c. After user a logs in to the interaction platform 300 through the account using the terminal device 100a, the terminal device 100a detects that only user b is online, and at this time the terminal device 100a displays the avatar area of user b.
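Purely as an illustration of steps S302 to S306, the following Python sketch shows how a first terminal device might log in, display avatar areas, and render the incoming first image sequences. The InteractionPlatformClient class, its method names, and the ui object are hypothetical assumptions for this sketch and are not defined by this disclosure.

```python
# Hypothetical client-side sketch of steps S302-S306; all names are illustrative only.

class InteractionPlatformClient:
    """Assumed client API for the remote interaction platform."""

    def __init__(self, server_url: str):
        self.server_url = server_url

    def login(self, account: str, password: str) -> None:
        # S302: log in to the remote interaction platform with the preset account.
        pass

    def get_online_contacts(self, account: str) -> list:
        # Returns the contacts in the account's list that have already logged in.
        return []

    def subscribe_first_image_sequence(self, contact_id: str, on_frame) -> None:
        # Calls on_frame(frame) for every frame of the contact's first image
        # sequence that the platform relays in real time.
        pass


def show_online_contacts(client: InteractionPlatformClient, account: str, ui) -> None:
    # S304: display an avatar area for each online contact of the account.
    for contact_id in client.get_online_contacts(account):
        avatar_area = ui.create_avatar_area(contact_id)
        # S306: keep rendering the contact's dynamic virtual image in that area.
        client.subscribe_first_image_sequence(contact_id,
                                              on_frame=avatar_area.render_frame)
```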
The function of the avatar area may refer to an avatar component displayed in a main interface of an interactive application (e.g., WeChat, QQ, etc.), so that a user interacts with a contact corresponding to the avatar area by clicking the avatar area (e.g., may transmit a file, perform video communication, send an instant message, etc.).
Further, referring to fig. 2b, the terminal device 100a may display, for example in the second display area 112, avatar areas (e.g., 112b and 112c) corresponding to the respective contacts (e.g., user b and user c). Thus, the terminal device 100a displays the avatar area 112b of the account's online contact (i.e., user b), and then displays the image sequence corresponding to user b (i.e., the first image sequence) in the avatar area 112b. The first image sequence corresponds to an image sequence (the second image sequence) captured in real time by the terminal device 100b (i.e., the second terminal device) through which user b logs in to the interaction platform 300, and the first image sequence includes a dynamic virtual image of user b.
In particular, the image frames in the first image sequence may be virtual pictures generated from the corresponding image frames in the second image sequence. For example, the image frames in the first image sequence may be cartoon images generated from the corresponding image frames in the second image sequence, so that the first image sequence contains a dynamic cartoon image of user b. Alternatively, the image frames in the first image sequence may be 3D images generated from the corresponding image frames in the second image sequence, so that the first image sequence contains dynamic 3D imagery of user b. In this way, when the image frames of the first image sequence are displayed in succession, a dynamic virtual image of user b is presented.
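As one illustrative way to generate such frames synchronously, the sketch below uses a classic OpenCV "smooth colors plus bold edges" filter to turn each captured frame into a cartoon-style frame. This is only a sketch of the general idea under the assumption that OpenCV is available; it is not the specific conversion method prescribed by the embodiment.

```python
import cv2
import numpy as np


def cartoonize_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Turn one captured image frame into a cartoon-style frame (illustrative only)."""
    # Smooth colors while preserving edges, which gives a flat, cartoon-like look.
    color = cv2.bilateralFilter(frame_bgr, 9, 75, 75)
    # Build an edge mask from a blurred grayscale copy (edges become black lines).
    gray = cv2.medianBlur(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), 7)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 9, 2)
    # Keep the smoothed colors everywhere except along the black edge lines.
    return cv2.bitwise_and(color, color, mask=edges)


def generate_first_sequence(second_sequence):
    # Produce the first image sequence frame by frame, synchronously with the
    # second (captured) image sequence.
    for frame in second_sequence:
        yield cartoonize_frame(frame)
```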
For example, after user b logs in to the interaction platform 300 through user b's account, in order to let user b's online contacts continuously view user b's image sequence in real time, the terminal device 100b starts capturing user b's image sequence (corresponding to the second image sequence) in real time, generates the corresponding first image sequence from it, and sends the first image sequence to the interaction platform 300 in real time; the interaction platform 300 then transmits the first image sequence to user a's terminal device 100a in real time. Accordingly, after displaying user b's avatar area 112b, the terminal device 100a can display in that area the first image sequence containing the dynamic virtual image of user b.
As described in the Background, although existing instant messaging services enable instant interaction and communication between users in different places, they cannot help such users sense one another and communicate instantly at any time as users in the same interaction environment can, and users cannot see the real situation of other people in real time.
In view of this, after user a logs in to the interaction platform 300 through the account using the terminal device 100a, the terminal device 100a responds to the login operation by displaying on the main interface the avatar areas of the online contacts in the account's contact list (for example, the avatar area of user b), so that user a can sense other users as if they shared the same environment. Since the interaction platform 300 continuously receives, in real time, the image sequences containing the dynamic virtual images of its logged-in users, once user a logs in to the interaction platform 300 through the account using the terminal device 100a, the interaction platform 300 determines user a's online contacts and transmits the first image sequence, received in real time and containing the dynamic virtual image of an online contact (e.g., user b), to the first terminal device (e.g., the terminal device 100a). Thus, after displaying the avatar area of the online contact, the first terminal device (i.e., the terminal device 100a) displays the contact's dynamic virtual image in that area. Because the dynamic virtual image is generated synchronously from the second image sequence captured in real time by the terminal device through which the online contact (e.g., user b) logs in to the interaction platform 300, and is transmitted to user a's first terminal device in real time, the dynamic virtual image can show the real-time situation of the online contact. Therefore, by displaying in each online contact's avatar area the image sequence containing that contact's dynamic virtual image, the user's terminal device shows the real-time situation of the user's online contacts in real time, so every user can continuously view the real-time situation of their online contacts, and users in different environments can sense one another as if they were in the same environment. User a only needs to log in to the interaction platform 300 to view the real-time situation of each online contact through the corresponding avatar area. This achieves the technical effect of letting users in different environments sense one another as if they were physically present together, and thereby solves the technical problem that existing communication methods cannot do so.
It should also be noted that, in the present embodiment, the method of the first aspect is described by taking the first terminal device 100a as an example. However, the application of the method is not limited to this; for example, the method may also be applied to instant messaging software similar to QQ or WeChat, so that the above-mentioned avatar area is displayed on the main interface of the instant messaging software and the image sequence containing the dynamic virtual image of the online contact is displayed in that area. Therefore, the method according to the first aspect of the present embodiment may also be applied to application programs on terminal devices such as tablet computers and mobile phones.
Optionally, the operation of displaying the avatar areas of the online contacts who have logged in to the interaction platform in the account's contact list includes: determining the online contacts in the account's contact list who have already logged in to the interaction platform; and displaying the avatar areas of the online contacts according to the determined online contacts.
Specifically, when displaying the avatar areas of the online contacts who have logged in to the interaction platform in the account's contact list, the first terminal device (e.g., terminal device 100a) first determines which contacts in the account's contact list have already logged in to the interaction platform, and then displays the avatar areas of those online contacts. For example, referring to fig. 2a, the contact list established under user a's account on the interaction platform 300 includes the contact information of user b and user c; after user a logs in to the interaction platform 300 through the account using the terminal device 100a, the interaction platform 300 detects through the contact list that only user b is online, so the terminal device 100a displays the avatar area of user b. In this way, the first terminal device can quickly and accurately display the avatar areas of the online contacts.
In addition, as described above, the dynamic virtual image may be a dynamic cartoon image or a dynamic 3D image generated synchronously from the imagery of the online contact (i.e., user b) in the second image sequence (i.e., the image sequence of user b captured in real time by the terminal device 100b after user b logs in to the interaction platform 300).
Specifically, while capturing user b's image sequence (corresponding to the second image sequence) in real time, the terminal device of the online contact (e.g., the terminal device 100b) may synchronously generate, from each captured image frame, a corresponding cartoon image or 3D image as an image frame of the first image sequence (i.e., the image sequence displayed in the avatar area shown by user a's terminal device 100a), thereby generating the first image sequence.
The way in which the corresponding cartoon image or 3D image is generated from an image frame of the second image sequence may follow any method common in the art for converting a real image into a cartoon image or a 3D image. Preferably, the virtual imagery of the online contact in the generated cartoon image or 3D image matches the imagery of the online contact in the image frame of the second image sequence. That is, image information such as the appearance, actions, and expression of the virtual image matches the corresponding appearance, actions, and expression of the online contact in the image frame of the second image sequence, so that the online contact's appearance, actions, and expression can be shown intuitively. As a result, the dynamic virtual image in the first image sequence can dynamically show information such as the appearance, actions, and expression of the online contact in real time.
Furthermore, the image frames of the first image sequence may also be generated from the image frames of the second image sequence using image fusion techniques. For example, the terminal device of the online contact (e.g., terminal device 100b) may detect the image area of the online contact's imagery in an image frame of the second image sequence using a detection network such as an R-CNN. The terminal device 100b may then generate a cartoon image or 3D image containing the virtual imagery of the online contact only for the detected image area. Finally, the terminal device 100b fuses the generated cartoon image or 3D image with the corresponding original image frame of the second image sequence, thereby producing an image frame of the first image sequence.
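The following sketch illustrates this "detect, stylize, fuse" idea using a pre-trained Faster R-CNN person detector from torchvision and an OpenCV stylization filter as stand-ins for the detection network and the virtual-image generator; the embodiment does not prescribe these particular models or libraries.

```python
import cv2
import numpy as np
import torch
import torchvision

# Pre-trained Faster R-CNN used here only as an example person detector
# (torchvision >= 0.13; older versions use pretrained=True instead of weights="DEFAULT").
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()


def person_box(frame_bgr: np.ndarray, min_score: float = 0.7):
    """Return (x1, y1, x2, y2) of the most confident detected person, or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        result = detector([tensor])[0]
    for box, label, score in zip(result["boxes"], result["labels"], result["scores"]):
        if label.item() == 1 and score.item() >= min_score:  # COCO label 1 = person
            return [int(v) for v in box.tolist()]
    return None


def fuse_virtual_image(frame_bgr: np.ndarray) -> np.ndarray:
    """Stylize only the detected person region and fuse it back into the original frame."""
    box = person_box(frame_bgr)
    if box is None:
        return frame_bgr
    x1, y1, x2, y2 = box
    if x2 <= x1 or y2 <= y1:
        return frame_bgr
    region = frame_bgr[y1:y2, x1:x2]
    stylized = cv2.stylization(region, sigma_s=60, sigma_r=0.45)  # cartoon-like effect
    fused = frame_bgr.copy()
    fused[y1:y2, x1:x2] = stylized  # image fusion: stylized region over the original frame
    return fused
```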
In addition, in order to reduce the transmission cost and bandwidth pressure of the image sequence, the terminal device 100b may first reduce the resolution and/or frame rate of the second image sequence, then generate the first image sequence from the processed second image sequence using the method described above, and send the first image sequence to the interaction platform 300. In that case, the first image sequence is an image sequence generated after the resolution and/or frame rate of the second image sequence has been reduced. Finally, the terminal device 100a displays the received first image sequence in the avatar area of the online contact. In this way, the user can continuously view the image sequence of the online contact in real time, while the transmission cost and bandwidth pressure of the image sequence are effectively reduced.
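A minimal sketch of such resolution and frame-rate reduction, assuming OpenCV is available; the scale factor and frame-dropping ratio are arbitrary illustration values, not parameters defined by the embodiment.

```python
import cv2


def downscale_sequence(frames, scale: float = 0.5, keep_every: int = 2):
    """Reduce resolution and frame rate before generating/transmitting the sequence.

    Keeps one frame out of every `keep_every` captured frames and resizes it by
    `scale` in each dimension (illustrative values only).
    """
    for index, frame in enumerate(frames):
        if index % keep_every != 0:
            continue  # drop frames to lower the frame rate
        yield cv2.resize(frame, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)
```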
Optionally, the interaction method further includes: in response to logging in to the interaction platform, starting an image capture device provided on the first terminal device; and sending a third image sequence to the interaction platform, wherein the third image sequence corresponds to, and is generated synchronously with, a fourth image sequence captured by the image capture device in real time, the third image sequence contains a dynamic virtual image of the user, and the dynamic virtual image of the user is generated from the image of the user in the fourth image sequence.
Specifically, referring to fig. 2b, the terminal devices 100a to 100c are each provided with an image capture device 121. Thus, the first terminal device (e.g., terminal device 100a) starts the image capture device 121 provided on the terminal device 100a in response to the user's operation of logging in to the interaction platform 300, and captures an image sequence of the user (corresponding to the fourth image sequence) in real time through the image capture device 121. Optionally, in order to reduce the transmission cost and bandwidth pressure of the image sequence, the first terminal device may reduce the resolution and/or frame rate of the fourth image sequence. The terminal device 100a then synchronously generates a third image sequence from the fourth image sequence (or the reduced fourth image sequence) and transmits the third image sequence to the interaction platform 300 in real time. The third image sequence contains the dynamic virtual image of the user (i.e., user a), and that dynamic virtual image is generated from the image of user a in the fourth image sequence. The way the third image sequence is generated from the fourth image sequence may follow the way the first image sequence is generated from the second image sequence, which is not repeated here.
In this way, after logging in to the interaction platform 300, the first terminal device can automatically and continuously capture the user's image sequence in real time through the image capture device (without requiring user a to start the image capture device manually), generate the corresponding image sequence containing the user's dynamic virtual image, and transmit the generated image sequence to the interaction platform 300, so that the real-time situation of the user (i.e., user a) can be shown in real time to the user's online contacts through the interaction platform 300.
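For illustration only, the following sketch captures frames from the terminal's camera after login, reduces and stylizes each frame, and uploads it as the third image sequence. The client.send_frame call is an assumed upload interface, not an API defined by this disclosure.

```python
import cv2


def stream_third_sequence(client, account: str) -> None:
    """After login, capture the fourth image sequence and upload the generated
    third image sequence in real time (client.send_frame is hypothetical)."""
    capture = cv2.VideoCapture(0)  # image capture device on the first terminal device
    try:
        while capture.isOpened():
            ok, frame = capture.read()  # one frame of the fourth image sequence
            if not ok:
                break
            small = cv2.resize(frame, None, fx=0.5, fy=0.5)  # optional reduction step
            virtual = cv2.stylization(small, sigma_s=60, sigma_r=0.45)  # dynamic virtual image frame
            client.send_frame(account, virtual)  # assumed upload call to the platform
    finally:
        capture.release()
```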
Furthermore, according to a second aspect of the present embodiment, an interaction method is provided, which is implemented by the interaction platform 300 shown in fig. 2 a. Fig. 4 shows a flow diagram of the method, which, with reference to fig. 4, comprises:
S402: after a first terminal device of a user of the interaction platform logs in to the interaction platform through a preset account, determining the online contacts in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform;
S404: receiving a first image sequence corresponding to an online contact from a second terminal device through which the online contact logs in to the interaction platform, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured by the second terminal device in real time, the first image sequence comprises a dynamic virtual image of the online contact, and the dynamic virtual image is generated from the image of the online contact in the second image sequence; and
S406: sending the first image sequence to the first terminal device, so that the real-time situation of the online contact is displayed in real time at the first terminal device.
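An in-memory Python sketch of the platform side of steps S402 to S406 is given below; the session object, its send_frame method, and the data structures are illustrative assumptions rather than the actual platform implementation.

```python
from collections import defaultdict


class InteractionPlatform:
    """Illustrative in-memory sketch of steps S402-S406 (not the actual platform)."""

    def __init__(self):
        self.contact_lists = {}              # account -> list of contact accounts
        self.online_sessions = {}            # account -> session/connection object
        self.subscribers = defaultdict(set)  # source account -> accounts watching it

    def on_login(self, account, session):
        # S402: when a first terminal device logs in, record the session and
        # determine which contacts in the account's list are already online.
        self.online_sessions[account] = session
        online_contacts = [c for c in self.contact_lists.get(account, [])
                           if c in self.online_sessions]
        for contact in online_contacts:
            self.subscribers[contact].add(account)  # relay the contact's frames to this user
            self.subscribers[account].add(contact)  # and this user's frames to the contact
        return online_contacts

    def on_first_sequence_frame(self, sender, frame):
        # S404/S406: a second terminal device pushes one frame of the first image
        # sequence; forward it to every online contact watching that sender.
        for watcher in self.subscribers.get(sender, ()):
            session = self.online_sessions.get(watcher)
            if session is not None:
                session.send_frame(sender, frame)  # assumed session API
```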
Optionally, the method further comprises: receiving, in response to the login operation, a third image sequence from the first terminal device, wherein the third image sequence corresponds to, and is generated synchronously with, a fourth image sequence captured by the first terminal device in real time, the third image sequence contains a dynamic virtual image of the user, and the dynamic virtual image of the user is generated from the image of the user in the fourth image sequence; and transmitting the third image sequence to the second terminal device.
It should be specifically noted that, for details of the method described in the second aspect of this embodiment, reference may be made to the contents described in the first aspect of this embodiment, and details are not described here again.
Furthermore, according to a third aspect of the present embodiment, an interaction method is provided, which is implemented by the first terminal device (e.g. terminal device 100a) and the interaction platform 300 shown in fig. 2a together. Fig. 5 shows a flow diagram of the method, which, with reference to fig. 5, comprises:
S501: the first terminal device logs in to the remote interaction platform through a preset account;
S502: the interaction platform determines the online contacts in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and an online contact is a contact in the contact list who has already logged in to the interaction platform;
S503: the interaction platform receives a first image sequence corresponding to an online contact from a second terminal device through which the online contact logs in to the interaction platform, and sends the first image sequence to the first terminal device so that the real-time situation of the online contact is displayed in real time at the first terminal device, wherein the first image sequence corresponds to, and is generated synchronously with, a second image sequence captured by the second terminal device in real time, the first image sequence comprises a dynamic virtual image of the online contact, and the dynamic virtual image is generated from the image of the online contact in the second image sequence;
S504: the first terminal device receives the first image sequence from the interaction platform;
S505: the first terminal device displays the avatar area of the online contact; and
S506: the first terminal device displays the first image sequence in the avatar area of the online contact, so that the real-time situation of the online contact is displayed in real time.
Specifically, referring to fig. 6a, a user (e.g., user a) may log in to the remote interaction platform 300 through the account using the first terminal device (e.g., terminal device 100a) (step 1 in fig. 6a). In this application scenario, the interaction platform 300 determines the online contacts in the account's contact list in response to user a's login operation (step 2 in fig. 6a). The contact list includes contact information of the contacts established on the interaction platform 300 through the account, and the online contacts are the contacts in the contact list who have already logged in to the interaction platform. For example, referring to fig. 2a, the contact list established under user a's account on the interaction platform 300 includes the contact information of user b and user c; after user a logs in to the interaction platform 300 through the account using the terminal device 100a, the interaction platform 300 detects that user b is online and user c is offline, and therefore determines that the account's online contact is user b.
Further, in order to let user b's online contacts continuously view user b's image sequence in real time, user b's second terminal device (e.g., terminal device 100b) captures user b's image sequence (corresponding to the second image sequence) in real time and generates, from the second image sequence, a first image sequence containing the dynamic virtual image of user b. The terminal device 100b then sends the first image sequence corresponding to the second image sequence to the interaction platform 300 (step 3 in fig. 6a). The interaction platform 300 transmits the first image sequence received from the terminal device 100b to the terminal device 100a (step 4 in fig. 6a). As shown in fig. 2b, the terminal device 100a is provided with avatar areas corresponding to the respective contacts (e.g., user b and user c). Thus, in this application scenario, the terminal device 100a displays the avatar area of the online contact (i.e., user b) in the account's contact list and, after displaying that avatar area, displays the first image sequence corresponding to user b in the avatar area (step 6 in fig. 6a).
Therefore, compared with the prior art, in the technical solution proposed in this embodiment, after user a logs in to the interaction platform 300 through the account using the terminal device 100a, the terminal device 100a responds to the login operation by displaying on the main interface the avatar areas of the online contacts in the account's contact list (for example, the avatar area of user b), so that user a can sense other users as if they shared the same environment. Since the interaction platform 300 continuously receives, in real time, the image sequences containing the dynamic virtual images of its logged-in users, once user a logs in, the interaction platform 300 determines user a's online contacts and transmits the first image sequence, received in real time and containing the dynamic virtual image of an online contact (e.g., user b), to the first terminal device (e.g., the terminal device 100a). Thus, after displaying the avatar area of the online contact, the first terminal device (i.e., the terminal device 100a) displays the contact's dynamic virtual image in that area. Because the dynamic virtual image is generated synchronously from the second image sequence captured in real time by the terminal device through which the online contact (e.g., user b) logs in to the interaction platform 300, and is transmitted to user a's first terminal device in real time, it can show the real-time situation of the online contact. In this way, by displaying in each online contact's avatar area the image sequence containing that contact's dynamic virtual image, the user's terminal device shows the real-time situation of the user's online contacts in real time, so every user can continuously view the real-time situation of their online contacts, and users in different environments can sense one another as if they were in the same environment. User a only needs to log in to the interaction platform 300 to view the real-time situation of each online contact through the corresponding avatar area. This achieves the technical effect of letting users in different environments sense one another as if they were physically present together, and thereby solves the technical problem that existing communication methods cannot do so.
Optionally, the interaction method further includes: the first terminal device, in response to logging in to the interaction platform, sends a third image sequence to the interaction platform, wherein the third image sequence corresponds to, and is generated synchronously with, a fourth image sequence captured by the first terminal device in real time, the third image sequence contains a dynamic virtual image of the user, and the dynamic virtual image of the user is generated from the image of the user in the fourth image sequence; and the interaction platform receives the third image sequence from the first terminal device and sends the third image sequence to the second terminal device of the online contact.
Specifically, referring to fig. 6b, after the first terminal device (e.g., terminal device 100a) logs in to the interaction platform 300 through the account, it may capture a fourth image sequence of the user (e.g., user a) in real time (step 1 in fig. 6b). Optionally, in order to reduce the transmission cost and bandwidth pressure of the image sequence, the first terminal device may reduce the resolution and/or frame rate of the fourth image sequence. The terminal device 100a then synchronously generates a third image sequence from the fourth image sequence (or the reduced fourth image sequence) and sends the third image sequence to the interaction platform 300 (step 2 in fig. 6b). The interaction platform 300 receives the third image sequence from the terminal device 100a and sends it to the second terminal device (i.e., terminal device 100b) of the online contact (e.g., user b) (step 3 in fig. 6b). After the terminal device 100b logs in to the interaction platform 300 through user b's account, it displays the avatar area corresponding to user a of the terminal device 100a (step 4 in fig. 6b), then receives the third image sequence of user a from the interaction platform 300 and displays it in that avatar area. In this way, the first terminal device can continuously capture the user's image sequence in real time through the image capture device while effectively reducing the transmission cost and bandwidth pressure of the image sequence.
Further, referring to fig. 1, according to a fourth aspect of the present embodiment, there is provided a storage medium. The storage medium comprises a stored program, wherein, when the program runs, a processor performs any one of the methods described above.
Therefore, according to the present embodiment, after a user logs in to the interaction platform through an account on a terminal device, the terminal device responds to the login operation by displaying, on its main interface, the avatar areas of the online contacts in the account's contact list, so that the user can sense other users as if they shared the same environment. Because the interaction platform continuously receives, in real time, the image sequences containing the dynamic virtual images of its logged-in users, once the user logs in, the platform determines the user's online contacts and forwards to the user's terminal device the first image sequence, received in real time, that contains the dynamic virtual image of each online contact. The user's terminal device therefore displays, in each online contact's avatar area, that contact's dynamic virtual image. Since the dynamic virtual image is generated synchronously from the second image sequence captured in real time by the terminal device through which the online contact logs in to the interaction platform, it reflects the contact's real-time situation. By displaying, in the avatar area of each online contact, the image sequence containing that contact's dynamic virtual image, the terminal device shows the real-time situation of the user's online contacts in real time, so every user can continuously view the real-time situation of their online contacts, and users in different environments can sense one another as if they were in the same environment. A user only needs to log in to the interaction platform to view the real-time situation of each online contact through the corresponding avatar area. This achieves the technical effect of letting users in different environments sense one another as if they were physically present together, and thereby solves the technical problem that existing communication methods cannot enable users in different environments to sense one another in that way and communicate instantly.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
Example 2
Fig. 7 shows a virtual-image-based interaction device 700 according to the first aspect of the present embodiment, where the device 700 corresponds to the method according to the first aspect of embodiment 1. Referring to fig. 7, the device 700 includes: a first login module 710, configured to log in to a remote interaction platform through an account preset by a user; a first avatar area display module 720, configured to display an avatar area of an online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and the online contact is a contact in the contact list that has already logged in to the interaction platform; and a first image sequence display module 730, configured to display, in the avatar area of the online contact, a first image sequence corresponding to the online contact, so as to display the real-time situation of the online contact in real time through the dynamic virtual image of the online contact in the first image sequence, wherein the first image sequence corresponds to a second image sequence acquired in real time by a second terminal device with which the online contact logs in to the interaction platform, and is generated synchronously with the second image sequence, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence.
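A minimal structural sketch of the device 700 is given below for orientation, assuming a Python client; the class and method names (InteractionClient, AvatarArea, login, show_avatar_areas, display_image_sequence) are hypothetical and only mirror the responsibilities of modules 710, 720 and 730.

    class AvatarArea:
        """Placeholder for the on-screen avatar area of one online contact."""

        def __init__(self, contact):
            self.contact = contact

        def render(self, image_sequence):
            for frame in image_sequence:
                pass  # hand each frame to the UI layer (not modelled here)

    class InteractionClient:
        """Sketch of device 700: login, avatar area display, image sequence display."""

        def __init__(self, platform):
            self.platform = platform  # the remote interaction platform 300
            self.areas = {}

        def login(self, account, password):
            # first login module 710: log in with the user's preset account
            self.session = self.platform.login(account, password)

        def show_avatar_areas(self):
            # first avatar area display module 720: one area per online contact
            self.areas = {c: AvatarArea(c) for c in self.session.online_contacts()}

        def display_image_sequence(self, contact, first_image_sequence):
            # first image sequence display module 730: render the dynamic
            # virtual image of the contact in that contact's avatar area
            self.areas[contact].render(first_image_sequence)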
Optionally, the dynamic virtual image is a dynamic cartoon image generated from the image of the online contact in the second image sequence.
Optionally, the dynamic virtual image is a dynamic 3D virtual image generated from the image of the online contact in the second image sequence.
Optionally, the device 700 further comprises: a starting module, configured to start, in response to logging in to the interaction platform, an image acquisition device arranged on the first terminal device; and a sending module, configured to send a third image sequence to the interaction platform, wherein the third image sequence corresponds to a fourth image sequence acquired in real time by the image acquisition device and is generated synchronously with the fourth image sequence, the third image sequence comprises the dynamic virtual image of the user, and the dynamic virtual image of the user is generated according to the image of the user in the fourth image sequence.
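As a sketch of how the starting module and the sending module could cooperate: the disclosure does not fix the algorithm that turns the captured frames into a dynamic virtual image, so the example below uses OpenCV's stylization filter merely as a stand-in for that step, and send_to_platform is a hypothetical transport callable supplied by the caller.

    import cv2

    def capture_and_send(send_to_platform, max_frames=None):
        camera = cv2.VideoCapture(0)  # image acquisition device started on login
        sent = 0
        try:
            while max_frames is None or sent < max_frames:
                ok, frame = camera.read()  # fourth image sequence, acquired in real time
                if not ok:
                    break
                # third image sequence: a stylized frame standing in for the
                # dynamic virtual image generated from the user's image
                avatar_frame = cv2.stylization(frame, sigma_s=60, sigma_r=0.45)
                send_to_platform(avatar_frame)
                sent += 1
        finally:
            camera.release()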
Furthermore, fig. 8 shows a virtual-image-based interaction device 800 according to the second aspect of the present embodiment, where the device 800 corresponds to the method according to the second aspect of embodiment 1. Referring to fig. 8, the device 800 includes: a determining module 810, configured to determine, after a first terminal device of a user of the interaction platform logs in to the interaction platform through a preset account, the contacts associated with the user; a first receiving module 820, configured to receive an image sequence of a contact associated with the user; and a first image sequence sending module 830, configured to send the image sequence to the first terminal device.
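The platform side can be sketched in the same illustrative spirit; InteractionPlatform and its attribute names are hypothetical, and only the responsibilities of modules 810, 820 and 830 are mirrored.

    class InteractionPlatform:
        """Sketch of device 800: determine online contacts, receive and forward image sequences."""

        def __init__(self):
            self.logged_in = {}  # account -> connection to that account's terminal device

        def determine_online_contacts(self, account, contact_list):
            # determining module 810: contacts of this account that are currently logged in
            return [contact for contact in contact_list if contact in self.logged_in]

        def receive_image_sequence(self, from_account, image_sequence):
            # first receiving module 820: image sequence of a contact associated with the user
            return from_account, image_sequence

        def send_image_sequence(self, to_account, image_sequence):
            # first image sequence sending module 830: forward to the first terminal device
            self.logged_in[to_account].send(image_sequence)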
Optionally, the device 800 further comprises: a second receiving module, configured to receive, in response to the login operation, a third image sequence from the first terminal device, wherein the third image sequence corresponds to a fourth image sequence acquired in real time by the first terminal device and is generated synchronously with the fourth image sequence, the third image sequence comprises the dynamic virtual image of the user, and the dynamic virtual image of the user is generated according to the image of the user in the fourth image sequence; and a second image sequence sending module, configured to send the third image sequence to the second terminal device.
Therefore, according to this embodiment, after a user logs in to the interaction platform through an account using a terminal device, the terminal device, in response to the login operation, displays on its main interface the avatar areas of the online contacts in the contact list of the account, so that the user can perceive other users as if they were in the same environment. Because the interaction platform continuously receives, in real time, the image sequences containing the dynamic virtual images of logged-in users, once the user has logged in to the interaction platform through the account, the interaction platform determines the user's online contacts and sends the first image sequence, received in real time and containing the dynamic virtual image of each online contact, to the user's terminal device. After displaying an online contact's avatar area, the user's terminal device thus displays the dynamic virtual image of that online contact in the avatar area. Since the dynamic virtual image is generated synchronously from the second image sequence acquired in real time by the terminal device with which the online contact logs in to the interaction platform, it reflects the real-time situation of the online contact. By displaying the image sequence containing the dynamic virtual image of each online contact in that contact's avatar area, the user's terminal device can show the real-time situation of the user's online contacts in real time, so that every user can continuously check the real-time situation of their online contacts, and users in different environments can perceive one another as if they were in the same environment. A user therefore only needs to log in to the interaction platform to check, through the avatar area corresponding to each online contact, the real-time situation of that contact. This achieves the technical effect that users in different environments can perceive one another as if they were present together, and thereby solves the technical problem in the prior art that existing communication and interaction modes do not allow users in different environments to perceive one another and communicate instantly.
Example 3
Fig. 9 shows a virtual-image-based interaction device 900 according to the first aspect of the present embodiment, where the device 900 corresponds to the method according to the first aspect of embodiment 1. Referring to fig. 9, the device 900 includes: a first processor 910; and a first memory 920, coupled to the first processor 910, configured to provide the first processor 910 with instructions for processing the following steps: logging in to a remote interaction platform through an account preset by a user; displaying an avatar area of an online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and the online contact is a contact in the contact list that has already logged in to the interaction platform; and displaying, in the avatar area of the online contact, a first image sequence corresponding to the online contact, so as to display the real-time situation of the online contact in real time through the dynamic virtual image of the online contact in the first image sequence, wherein the first image sequence corresponds to a second image sequence acquired in real time by a second terminal device with which the online contact logs in to the interaction platform, and is generated synchronously with the second image sequence, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence.
Optionally, the dynamic virtual image is a dynamic cartoon image generated from the image of the online contact in the second image sequence.
Optionally, the dynamic virtual image is a dynamic 3D virtual image generated from the image of the online contact in the second image sequence.
Optionally, the first memory 920 is further configured to provide the first processor 910 with instructions for processing the following steps: starting, in response to logging in to the interaction platform, an image acquisition device arranged on the first terminal device; and sending a third image sequence to the interaction platform, wherein the third image sequence corresponds to a fourth image sequence acquired in real time by the image acquisition device and is generated synchronously with the fourth image sequence, the third image sequence comprises the dynamic virtual image of the user, and the dynamic virtual image of the user is generated according to the image of the user in the fourth image sequence.
Furthermore, fig. 10 shows an interaction device 1000 according to the second aspect of the present embodiment, where the device 1000 corresponds to the method according to the second aspect of embodiment 1. Referring to fig. 10, the device 1000 includes: a second processor 1010; and a second memory 1020, coupled to the second processor 1010, configured to provide the second processor 1010 with instructions for processing the following steps: determining, after a first terminal device of a user of the interaction platform logs in to the interaction platform through a preset account, an online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and the online contact is a contact in the contact list that has already logged in to the interaction platform; receiving, from a second terminal device with which the online contact logs in to the interaction platform, a first image sequence corresponding to the online contact, wherein the first image sequence corresponds to a second image sequence acquired in real time by the second terminal device and is generated synchronously with the second image sequence, the first image sequence comprises the dynamic virtual image of the online contact, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence; and sending the first image sequence to the first terminal device, so that the real-time situation of the online contact is displayed in real time at the first terminal device.
Optionally, the second memory 1020 is further configured to provide the second processor 1010 with instructions for processing the following steps: receiving, in response to the login operation, a third image sequence from the first terminal device, wherein the third image sequence corresponds to a fourth image sequence acquired in real time by the first terminal device and is generated synchronously with the fourth image sequence, the third image sequence comprises the dynamic virtual image of the user, and the dynamic virtual image of the user is generated according to the image of the user in the fourth image sequence; and sending the third image sequence to the second terminal device.
Therefore, according to this embodiment, after a user logs in to the interaction platform through an account using a terminal device, the terminal device, in response to the login operation, displays on its main interface the avatar areas of the online contacts in the contact list of the account, so that the user can perceive other users as if they were in the same environment. Because the interaction platform continuously receives, in real time, the image sequences containing the dynamic virtual images of logged-in users, once the user has logged in to the interaction platform through the account, the interaction platform determines the user's online contacts and sends the first image sequence, received in real time and containing the dynamic virtual image of each online contact, to the user's terminal device. After displaying an online contact's avatar area, the user's terminal device thus displays the dynamic virtual image of that online contact in the avatar area. Since the dynamic virtual image is generated synchronously from the second image sequence acquired in real time by the terminal device with which the online contact logs in to the interaction platform, it reflects the real-time situation of the online contact. By displaying the image sequence containing the dynamic virtual image of each online contact in that contact's avatar area, the user's terminal device can show the real-time situation of the user's online contacts in real time, so that every user can continuously check the real-time situation of their online contacts, and users in different environments can perceive one another as if they were in the same environment. A user therefore only needs to log in to the interaction platform to check, through the avatar area corresponding to each online contact, the real-time situation of that contact. This achieves the technical effect that users in different environments can perceive one another as if they were present together, and thereby solves the technical problem in the prior art that existing communication and interaction modes do not allow users in different environments to perceive one another and communicate instantly.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware, or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An interaction method based on virtual images is used for a first terminal device of a user of an interaction platform, and is characterized by comprising the following steps:
logging in to a remote interaction platform through an account preset by the user;
displaying an avatar area of an online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established in the interaction platform through the account, and the online contact is a contact in the contact list that has already logged in to the interaction platform; and
displaying a first image sequence corresponding to the online contact in the avatar area of the online contact, so as to display the real-time situation of the online contact in real time through a dynamic virtual image of the online contact in the first image sequence, wherein the first image sequence corresponds to a second image sequence acquired in real time by a second terminal device with which the online contact logs in to the interaction platform, and is generated synchronously with the second image sequence, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence.
2. The method of claim 1, wherein the dynamic virtual image is a dynamic cartoon image generated from the image of the online contact in the second image sequence.
3. The method of claim 1, wherein the dynamic virtual image is a dynamic 3D virtual image generated from the image of the online contact in the second image sequence.
4. The method of claim 1, further comprising:
starting, in response to logging in to the interaction platform, an image acquisition device arranged on the first terminal device; and
sending a third image sequence to the interaction platform, wherein the third image sequence corresponds to a fourth image sequence acquired in real time by the image acquisition device and is generated synchronously with the fourth image sequence, and the third image sequence contains a dynamic virtual image of the user, and the dynamic virtual image of the user is generated according to the image of the user in the fourth image sequence.
5. An interaction method based on virtual images is used for an interaction platform and is characterized by comprising the following steps:
after a first terminal device of a user of the interaction platform logs in to the interaction platform through a preset account, determining an online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established on the interaction platform through the account, and the online contact is a contact in the contact list that has already logged in to the interaction platform;
receiving, from a second terminal device with which the online contact logs in to the interaction platform, a first image sequence corresponding to the online contact, wherein the first image sequence corresponds to a second image sequence acquired by the second terminal device in real time and is generated synchronously with the second image sequence, and the first image sequence comprises a dynamic virtual image of the online contact, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence; and
sending the first image sequence to the first terminal device so as to display the real-time situation of the online contact in real time at the first terminal device.
6. An interaction method based on virtual images is characterized by comprising the following steps:
a first terminal device of a user logs in to a remote interaction platform through an account preset by the user;
the interaction platform determines online contacts in a contact list of the account, wherein the contact list comprises contact information of contacts established in the interaction platform through the account, and the online contacts are contacts in the contact list that have already logged in to the interaction platform;
the interaction platform receives a first image sequence corresponding to the online contact from a second terminal device with which the online contact logs in to the interaction platform, and sends the first image sequence to the first terminal device so as to display the real-time situation of the online contact in real time at the first terminal device, wherein the first image sequence corresponds to a second image sequence acquired by the second terminal device in real time and is generated synchronously with the second image sequence, and the first image sequence comprises a dynamic virtual image of the online contact, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence;
the first terminal device receives the first image sequence from the interaction platform;
the first terminal device displays an avatar area of the online contact; and
the first terminal device displays the first image sequence in the avatar area of the online contact, so that the real-time situation of the online contact is displayed in real time.
7. The method of claim 6, further comprising:
the first terminal device sends a third image sequence to the interaction platform in response to logging in to the interaction platform, wherein the third image sequence corresponds to a fourth image sequence acquired by the first terminal device in real time and is generated synchronously with the fourth image sequence, and the third image sequence contains a dynamic virtual image of the user, and the dynamic virtual image of the user is generated according to the image of the user in the fourth image sequence; and
the interaction platform receives the third image sequence from the first terminal device and sends the third image sequence to the second terminal device of the online contact.
8. A storage medium comprising a stored program, wherein the method of any one of claims 1 to 7 is performed by a processor when the program is run.
9. An interactive device based on virtual images, comprising:
a login module, configured to log in to a remote interaction platform through an account preset by a user;
an avatar area display module, configured to display an avatar area of an online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established in the interaction platform through the account, and the online contact is a contact in the contact list that has already logged in to the interaction platform; and
an image sequence display module, configured to display a first image sequence corresponding to the online contact in the avatar area of the online contact, so as to display the real-time situation of the online contact in real time through a dynamic virtual image of the online contact in the first image sequence, wherein the first image sequence corresponds to a second image sequence acquired in real time by a second terminal device with which the online contact logs in to the interaction platform, and is generated synchronously with the second image sequence, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence.
10. An interactive device based on virtual images, comprising:
a first processor; and
a first memory, coupled to the first processor, configured to provide the first processor with instructions for processing the following steps:
logging in to a remote interaction platform through an account preset by a user;
displaying an avatar area of an online contact in a contact list of the account, wherein the contact list comprises contact information of contacts established in the interaction platform through the account, and the online contact is a contact in the contact list that has already logged in to the interaction platform; and
displaying a first image sequence corresponding to the online contact in the avatar area of the online contact, so as to display the real-time situation of the online contact in real time through a dynamic virtual image of the online contact in the first image sequence, wherein the first image sequence corresponds to a second image sequence acquired in real time by a second terminal device with which the online contact logs in to the interaction platform, and is generated synchronously with the second image sequence, and the dynamic virtual image is generated according to the image of the online contact in the second image sequence.
CN202111522686.7A 2021-12-13 2021-12-13 Interaction method and device based on virtual image and storage medium Pending CN114363279A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111522686.7A CN114363279A (en) 2021-12-13 2021-12-13 Interaction method and device based on virtual image and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111522686.7A CN114363279A (en) 2021-12-13 2021-12-13 Interaction method and device based on virtual image and storage medium

Publications (1)

Publication Number Publication Date
CN114363279A true CN114363279A (en) 2022-04-15

Family

ID=81100179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111522686.7A Pending CN114363279A (en) 2021-12-13 2021-12-13 Interaction method and device based on virtual image and storage medium

Country Status (1)

Country Link
CN (1) CN114363279A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100274847A1 (en) * 2009-04-28 2010-10-28 Particle Programmatica, Inc. System and method for remotely indicating a status of a user
CN108271058A (en) * 2018-02-02 2018-07-10 优酷网络技术(北京)有限公司 Video interaction method, subscription client, server and storage medium
CN108271057A (en) * 2018-02-02 2018-07-10 优酷网络技术(北京)有限公司 Video interaction method, subscription client, server and readable storage medium storing program for executing
CN113179208A (en) * 2021-06-29 2021-07-27 北京同视未来网络科技有限公司 Interaction method, interaction device and storage medium

Similar Documents

Publication Publication Date Title
EP2850816B1 (en) Communication system
CN110881144B (en) Data processing method based on live broadcast platform and related equipment
EP2947825A1 (en) Video communication method, home terminal and home server
US10129301B2 (en) Apparatus, system, and method of controlling data transmission, and recording medium
CN102571631B (en) The sending method of motion images information, terminal and system in instant messaging
CN113179208B (en) Interaction method, interaction device and storage medium
CN105430317B (en) A kind of video background setting method and terminal device
CN112836198A (en) Account login method and device, server, electronic equipment and storage medium
CN113329240A (en) Screen projection method and device
CN110740131B (en) Data processing method and device, electronic equipment and storage medium
CN113765932B (en) Control method for multiparty call and electronic equipment
CN110619097A (en) Two-dimensional code generation method and device, electronic equipment and storage medium
CN112751932A (en) Method for remotely checking mobile phone application information through video stream and fixed instruction
CN108574878B (en) Data interaction method and device
CN108243319B (en) Method, equipment and system for realizing teleconference
CN112583896A (en) Session management method, session management device, electronic equipment, session management server and storage medium
CN103701877A (en) Interaction type remote consultation system and interaction type remote consultation method
US10205686B2 (en) Communication terminal, communication system, and output method
CN114363279A (en) Interaction method and device based on virtual image and storage medium
US20060218282A1 (en) System and method for providing mobile assisted, fixed line video calls
CN112968786B (en) Method and device for carrying out online conference based on working object and storage medium
CN108924182B (en) Text information sending method and device in virtual reality scene
KR101800979B1 (en) Method of visual communication
EP2950527B1 (en) Transmission terminal, transmission system, relay device selecting method, and carrier means
CN107168662B (en) Information processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination