CN116309951A - Virtual image-based interaction method and device, electronic equipment and storage medium

Info

Publication number
CN116309951A
CN116309951A (application CN202310029013.0A)
Authority
CN
China
Prior art keywords
user
photo
avatar
template
sharing
Prior art date
Legal status
Pending
Application number
CN202310029013.0A
Other languages
Chinese (zh)
Inventor
赵峻
何杰
夏思禹
虞晨晨
徐敏
汪恒
张杨
赖宏焕
杨赛
方凯
耿军
张天伟
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202310029013.0A
Publication of CN116309951A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

One or more embodiments of the present specification provide an avatar-based interaction method, apparatus, electronic device, and storage medium. The method comprises the following steps: a first client corresponding to a first user acquires a target photo template selected by the first user and a first photo position selected by the first user; in response to a photo operation initiated by the first user, the first client renders the avatar of the first user to the first photo position; in response to a sharing operation initiated by the first user, the first client shares the rendered target photo template with a second client corresponding to a second user; the second client acquires the target photo template shared by the first client; and in response to a photo confirmation operation initiated by the second user, the second client renders the avatar of the second user to a second photo position reserved for the second user, so as to generate a group photo picture containing the avatar of the first user and the avatar of the second user.

Description

Virtual image-based interaction method and device, electronic equipment and storage medium
Technical Field
One or more embodiments of the present disclosure relate to the field of metaverse technology, and in particular, to an avatar-based interaction method, apparatus, electronic device, and storage medium.
Background
With the continuous development of network technology, more and more users create avatars on the network to represent themselves. For example, a user may configure the facial features, body shape, and posture of an avatar through a face-pinching system, and may also dress the avatar up, so as to create a personalized avatar that represents the user.
At present, in gaming, social networking, and other scenarios, the interactivity between the avatars of individual users is poor.
For example, in a game scenario, each user can control his or her own avatar to gather together with the avatars of other users in the game, and then obtain a 'group photo' picture of the avatars by taking a screenshot of the game page or the like, so it is difficult to generate a group photo picture according to the user's personal preferences.
For another example, in a social scenario, each user may present his or her own avatar to other users; further, a user can submit a request to the server for a group photo with other users, so that the server generates a group photo picture based on the already-created avatars of the users participating in the group photo; subsequently, each user participating in the group photo can only obtain the group photo picture generated by the server, resulting in poor interactivity between users.
Disclosure of Invention
The application provides an interaction method based on an avatar, which is applied to a first client corresponding to a first user; the method comprises the following steps:
acquiring a target photo template selected by the first user from at least one preset photo template; wherein the target photo template comprises a plurality of photo positions;
acquiring a first photo position selected by the first user from the plurality of photo positions; wherein the photo positions, other than the first photo position, contained in the target photo template are second photo positions reserved for a second user;
in response to a photo operation initiated by the first user, rendering the avatar of the first user to the first photo position;
and in response to a sharing operation initiated by the first user, sharing the rendered target photo template with a second client corresponding to the second user, so that the second client, in response to a photo confirmation operation initiated by the second user, renders the avatar of the second user to the second photo position.
Optionally, before the target photo template selected by the first user from the preset at least one photo template is acquired, the method further includes:
outputting, in the virtual scene, a photo interaction option for photo interaction with the avatar of the second user, in response to the distance between the avatar of the first user output in the virtual scene and a preset photo position in the virtual scene, or the distance between the avatar of the first user and the avatar of the second user output in the virtual scene, being smaller than a preset distance;
and in response to a triggering operation of the first user on the photo interaction option, acquiring the target photo template selected by the first user from the preset at least one photo template.
Optionally, before outputting the photo interaction option for photo interaction with the avatar of the second user in the virtual scene in response to the distance between the avatar of the first user output in the virtual scene and the preset photo position in the virtual scene, or the distance to the avatar of the second user, being smaller than the preset distance, the method further comprises:
and controlling the avatar of the first user to move in the virtual scene in response to the movement operation of the first user in the virtual scene for the avatar of the first user.
Optionally, the first client is integrated with a plurality of sharing channels;
Responding to the sharing operation initiated by the first user, sharing the rendered target photo template to a second client corresponding to the second user, wherein the sharing operation comprises the following steps:
and responding to the sharing operation initiated by the first user, acquiring a target sharing channel selected by the first user from the plurality of sharing channels, and sharing the rendered target photo template with a second client corresponding to the second user through the target sharing channel.
Optionally, in response to the sharing operation initiated by the first user, sharing the rendered target photo template with a second client corresponding to the second user, including:
generating a graphic code corresponding to the rendered target photo template in response to the sharing operation initiated by the first user, and sharing the generated graphic code with a second client corresponding to the second user, so that the second client scans the graphic code to obtain the rendered target photo template corresponding to the graphic code; the graphic codes are used for sharing the rendered target photo templates.
Optionally, in response to the sharing operation initiated by the first user, sharing the rendered target photo template with a second client corresponding to the second user, including:
Responding to the sharing operation initiated by the first user, generating a character password corresponding to the rendered target photo template, and sharing the generated character password to a second client corresponding to the second user, so that the second client recognizes the character password and acquires the rendered target photo template corresponding to the character password; the character password is used for sharing the rendered target photo template.
Optionally, the second user is a user having a social relationship with the first user;
before sharing the rendered target photo template with the second client corresponding to the second user, the method further comprises:
and acquiring a second user selected by the first user from a user group with social relation with the first user.
The application also provides another interaction method based on the virtual image, which is applied to a second client corresponding to a second user; the method comprises the following steps:
acquiring a target photo template shared by a first client corresponding to a first user; wherein the target photo template comprises a plurality of photo positions; the plurality of photo positions include a first photo position selected by the first user and a second photo position reserved for the second user; the avatar of the first user has been rendered on the first photo position;
and in response to a photo confirmation operation initiated by the second user, rendering the avatar of the second user to the second photo position to generate a group photo picture comprising the avatar of the first user and the avatar of the second user.
Optionally, the second photo position is at least one photo position reserved for the second user;
and in response to the photo confirmation operation initiated by the second user, rendering the avatar of the second user to the second photo position includes:
and responding to the second user initiated photo confirmation operation, acquiring a target photo position selected by the second user from the at least one photo position, and rendering the avatar of the second user to the target photo position.
Optionally, before the avatar of the second user is rendered to the second photo position in response to the photo confirmation operation initiated by the second user, the method further comprises:
determining whether an avatar of the second user has been created;
and if the avatar of the second user is not created, performing an avatar creation prompt on the second user.
The application also provides an interactive device based on the virtual image, which is applied to a first client corresponding to a first user; the device comprises:
the first template acquisition unit is used for acquiring a target photo template selected by the first user from at least one preset photo template; wherein the target photo template comprises a plurality of photo positions;
a first position acquisition unit, configured to acquire a first photo position selected by the first user from the plurality of photo positions; wherein the photo positions, other than the first photo position, contained in the target photo template are second photo positions reserved for the second user;
a first rendering unit, configured to render the avatar of the first user to the first photo position in response to a photo operation initiated by the first user;
and the sharing unit is used for responding to the sharing operation initiated by the first user and sharing the rendered target photo template with a second client corresponding to the second user, so that the second client responds to the photo confirming operation initiated by the second user and renders the avatar of the second user to the second photo position.
The application also provides another virtual image-based interaction device, which is applied to a second client corresponding to a second user; the device comprises:
the second template acquisition unit is used for acquiring a target photo template shared by a first client corresponding to the first user; wherein the target photo template comprises a plurality of photo positions; the plurality of photo positions include a first photo position selected by the first user and a second photo position reserved for the second user; the avatar of the first user has been rendered on the first photo position;
and a second rendering unit, configured to render the avatar of the second user to the second photo position in response to a photo confirmation operation initiated by the second user, so as to generate a group photo picture including the avatar of the first user and the avatar of the second user.
The application also provides electronic equipment, which comprises a communication interface, a processor, a memory and a bus, wherein the communication interface, the processor and the memory are connected with each other through the bus;
the memory stores machine-readable instructions and the processor performs any of the methods described above by invoking the machine-readable instructions.
The present application also provides a machine-readable storage medium storing machine-readable instructions that, when invoked and executed by a processor, implement any of the methods described above.
According to the above embodiments, on the one hand, the first user can first select the target photo template from the preset at least one photo template through the first client, then select the first photo position from the plurality of photo positions contained in the target photo template, and reserve the other photo positions contained in the target photo template for the second user as second photo positions. The group photo picture subsequently generated by rendering the avatar of the first user and the avatar of the second user to the first photo position and the second photo position, respectively, is therefore a picture generated according to the personal preference of the first user, which improves the experience of the first user in avatar-based photo interaction.
On the other hand, the first client corresponding to the first user need only render the avatar of the first user to the first photo position selected by the first user and share the rendered target photo template with the second client corresponding to the second user; after obtaining the target photo template shared by the first client, the second client can render the avatar of the second user to the second photo position reserved for the second user, thereby generating a group photo picture comprising the avatar of the first user and the avatar of the second user. This improves the interactivity between the first user and the second user during the avatar-based photo interaction, and improves the experience of the second user in avatar-based photo interaction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present disclosure, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart illustrating a method of avatar-based interaction in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram of a selection target photo template shown in an exemplary embodiment;
FIG. 3 is a schematic diagram of a virtual scene shown in an exemplary embodiment;
FIG. 4 is a schematic diagram of another virtual scene shown in an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a sharing operation in accordance with an exemplary embodiment;
FIG. 6 is a flowchart illustrating another avatar-based interaction method in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram of a photo interaction page according to an exemplary embodiment;
FIG. 8 is a schematic diagram of an electronic device in which an avatar-based interactive apparatus is located, according to an exemplary embodiment;
FIG. 9 is a block diagram illustrating an avatar-based interactive apparatus according to an exemplary embodiment;
fig. 10 is a block diagram illustrating another avatar-based interactive apparatus according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
With the continuous development of network technology, more and more users create avatars on the network to represent themselves. For example, a user may configure the facial features, body shape, and posture of an avatar through a face-pinching system, and may also dress the avatar up, so as to create a personalized avatar that represents the user.
At present, in gaming, social networking, and other scenarios, the interactivity between the avatars of individual users is poor.
For example, in a game scenario, each user can control his or her own avatar to gather together with the avatars of other users in the game, and then obtain a 'group photo' picture of the avatars by taking a screenshot of the game page or the like.
It can be seen that, in the related art shown above, the group photo picture of the avatars of the respective users is obtained by screenshot or the like, so it is difficult to generate a group photo picture according to the users' personal preferences; in addition, when the resolution of the game client is low, it is difficult to guarantee the clarity of the group photo picture obtained by screenshot.
For another example, in a social scenario, each user may present his or her own avatar to other users; further, a user can submit a request to the server for a group photo with other users, so that the server generates a group photo picture based on the already-created avatars of the users participating in the group photo; subsequently, each user participating in the group photo can obtain the group photo picture generated by the server.
It can be seen that, in the related art shown above, the group photo picture is generated by the server based on the avatar of each user, and the interactivity between users is poor; in addition, since the server can only be requested to generate a group photo for users who have already created avatars, it is difficult to motivate users who have not yet created avatars to participate in photo interaction.
In view of the above, the present disclosure is directed to providing a technical solution for performing a group photo interaction based on the avatar of each user, so as to solve at least one technical problem set forth above.
In implementation, the first user may be a user inviting other users to an avatar-based group photo interaction, and the second user may be a user invited to an avatar-based group photo interaction.
A first client corresponding to the first user can first acquire a target photo template selected by the first user from at least one preset photo template, where the target photo template comprises a plurality of photo positions; further, the first client may acquire a first photo position selected by the first user from the plurality of photo positions, where the photo positions other than the first photo position contained in the target photo template are second photo positions reserved for a second user; further, in response to a photo operation initiated by the first user, the first client may render the avatar of the first user to the first photo position; further, in response to a sharing operation initiated by the first user, the first client may share the rendered target photo template with a second client corresponding to the second user.
Further, the second client corresponding to the second user can acquire the target photo template shared by the first client corresponding to the first user; further, in response to a photo confirmation operation initiated by the second user, the second client may render the avatar of the second user to the second photo position reserved for the second user contained in the target photo template, to generate a group photo picture containing the avatar of the first user and the avatar of the second user.
It can be seen that, in the technical solution of this specification, on the one hand, the first user can select the target photo template from the preset at least one photo template through the first client, then select the first photo position from the plurality of photo positions contained in the target photo template, and reserve the other photo positions contained in the target photo template for the second user as second photo positions; the group photo picture generated by rendering the avatar of the first user and the avatar of the second user to the first photo position and the second photo position, respectively, is therefore a picture generated according to the personal preference of the first user, which improves the experience of the first user in avatar-based photo interaction.
On the other hand, the first client corresponding to the first user need only render the avatar of the first user to the first photo position selected by the first user and share the rendered target photo template with the second client corresponding to the second user; after obtaining the target photo template shared by the first client, the second client can render the avatar of the second user to the second photo position reserved for the second user, thereby generating a group photo picture comprising the avatar of the first user and the avatar of the second user. This improves the interactivity between the first user and the second user during the avatar-based photo interaction, and improves the experience of the second user in avatar-based photo interaction.
The following describes the present application through specific embodiments and in connection with specific application scenarios.
Referring to fig. 1, fig. 1 is a flowchart illustrating an avatar-based interaction method according to an exemplary embodiment. The method may be applied to a first client corresponding to a first user. The method may perform the steps of:
step 102: acquiring a target photo template selected by a first user from at least one preset photo template; the target photo template comprises a plurality of photo positions.
In step 102, the first client may display the preset at least one photo template to the first user; and, in response to a selection operation of the first user on a target photo template, acquire the target photo template selected by the first user from the at least one photo template.
For example, referring to fig. 2, fig. 2 is a schematic diagram of selecting a target photo template according to an exemplary embodiment. As shown in fig. 2, a plurality of preset photo templates, namely a photo template A, a photo template B, a photo template C, a photo template D, and the like, can be output in the photo interaction page; the photo interaction page can include a 'shoot the same style' option corresponding to each of the plurality of photo templates; in response to a triggering operation of the first user on the 'shoot the same style' option corresponding to the photo template B, the target photo template selected by the first user can be acquired as the photo template B; the photo template B can contain two photo positions.
It should be noted that, in the embodiment shown above, the target photo template includes two photo positions, which is only an exemplary description and is not meant to limit the present specification; each of the at least one photo template may also contain more than two photo positions. For ease of description, in some embodiments shown in this specification, description will be continued taking as an example that the target photo template includes two photo positions.
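To make the notion of a photo template with multiple photo positions more concrete, the following TypeScript sketch models one possible data shape for a template and the selection of a target template by the first user. The type and field names (PhotoSlot, GroupPhotoTemplate, selectTargetTemplate) are illustrative assumptions and are not prescribed by this application.

```typescript
// Minimal sketch of a group photo template with several photo positions.
// All names are hypothetical; the application does not prescribe a data model.
interface PhotoSlot {
  id: string;                          // e.g. "B01", "B02"
  x: number;                           // slot origin inside the template, in pixels
  y: number;
  reservedFor?: "first" | "second";    // set once the first user picks a slot
}

interface GroupPhotoTemplate {
  id: string;                          // e.g. "template-B"
  backgroundUrl: string;               // background picture of the template
  slots: PhotoSlot[];                  // a template contains two or more slots
}

// The first client keeps a list of preset templates and returns the one the
// first user tapped (the 'shoot the same style' option of that template).
function selectTargetTemplate(
  presets: GroupPhotoTemplate[],
  selectedId: string
): GroupPhotoTemplate {
  const template = presets.find((t) => t.id === selectedId);
  if (!template) throw new Error(`unknown photo template: ${selectedId}`);
  return template;
}
```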
In the present specification, the avatar may be automatically generated based on a photograph provided by a user, or may be created by a user through a face pinching system, which is not limited in the present specification.
In this specification, after the first user creates the avatar corresponding thereto, in response to the first user initiating an entry operation for a virtual scene, the avatar of the first user may be output in the virtual scene, that is, a visualization effect that the avatar of the first user "enters" the virtual scene may be triggered.
In this specification, the virtual scene may be a centralized virtual scene or a decentralized virtual scene. In the centralized virtual scene, the clients corresponding to the users can be in communication connection with a server. In the decentralized virtual scene, the clients corresponding to the users can communicate with each other directly; for example, the decentralized virtual scene may be a virtual scene built based on blockchain technology, and the client may be a browser.
In one embodiment shown, the virtual scene may be a metaverse virtual scene.
In one embodiment shown, the avatar of the first user may be controlled to move in the virtual scene in response to a movement operation of the first user in the virtual scene with respect to the avatar of the first user.
For example, referring to fig. 3, fig. 3 is a schematic diagram of a virtual scene according to an exemplary embodiment. As shown in fig. 3, a "steering wheel" option associated with the avatar of the first user may be output in the virtual scene; in response to a drag operation of the first user on the "steering wheel" option, a movement operation of the first user with respect to the avatar of the first user may be detected, and the avatar of the first user may then be controlled to move in the virtual scene.
Correspondingly, the virtual images of other users can be output in the virtual scene; other users can also control their own avatars to move in the virtual scene through the corresponding clients.
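As an illustration of how a drag on the "steering wheel" option could be translated into movement of the first user's avatar, consider the following minimal TypeScript sketch; the speed constant and function names are assumptions made purely for this example.

```typescript
// Sketch: translating a drag on a "steering wheel" control into avatar movement.
// The control and parameter names are assumptions for illustration only.
interface Vec2 { x: number; y: number; }

const MAX_SPEED = 3; // scene units per frame, arbitrary illustrative value

// dragOffset is the wheel knob's offset from its centre, normalised to [-1, 1].
function stepAvatar(position: Vec2, dragOffset: Vec2): Vec2 {
  // Clamp the drag vector to unit length so diagonal drags are not faster.
  const len = Math.hypot(dragOffset.x, dragOffset.y);
  const scale = len > 1 ? 1 / len : 1;
  return {
    x: position.x + dragOffset.x * scale * MAX_SPEED,
    y: position.y + dragOffset.y * scale * MAX_SPEED,
  };
}

// Example: one frame of movement while the wheel is dragged to the upper right.
const next = stepAvatar({ x: 10, y: 20 }, { x: 0.8, y: -0.6 });
console.log(next); // { x: 12.4, y: 18.2 }
```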
In the present specification, the first client corresponding to the first user may trigger, in the virtual scene, a visual interaction between the avatar of the first user and a virtual building or the avatars of other users, so as to initiate the photo interaction between the first user and the second user.
In one embodiment, before obtaining the target photo template selected by the first user from the preset at least one photo template, the method may further include: outputting a group photo interaction option for group photo interaction with the avatar of the second user in the virtual scene in response to the distance between the avatar of the first user output in the virtual scene and a preset group photo position in the virtual scene being less than a preset distance; and responding to the triggering operation of the first user on the photo interaction options, and acquiring a target photo template selected by the first user from at least one preset photo template.
For example, as shown in fig. 3, the preset photo position may be a virtual building 'photo studio' output in the virtual scene; the photo interaction option may be an 'enter' option associated with the virtual building 'photo studio'; in response to the distance between the avatar of the first user output in the virtual scene and the virtual building 'photo studio' being less than a first preset distance, the 'enter' option associated with the virtual building 'photo studio' may be output in the virtual scene; in response to a triggering operation of the first user on the 'enter' option, the photo interaction page may be opened; further, a plurality of preset photo templates and the 'shoot the same style' option corresponding to each photo template may be output in the photo interaction page; and in response to a triggering operation of the first user on the 'shoot the same style' option corresponding to the photo template B, the target photo template selected by the first user may be acquired as the photo template B.
In another embodiment, before obtaining the target photo template selected by the first user from the preset at least one photo template, the method may further include: outputting, in the virtual scene, a photo interaction option for photo interaction with the avatar of the second user, in response to the distance between the avatar of the first user and the avatar of the second user output in the virtual scene being smaller than a preset distance; and in response to a triggering operation of the first user on the photo interaction option, acquiring the target photo template selected by the first user from the preset at least one photo template.
For example, referring to fig. 4, fig. 4 is a schematic diagram of another virtual scene according to an exemplary embodiment. As shown in fig. 4, the photo interaction option may be a 'group photo' option displayed in association with the second user; in response to the distance between the avatar of the first user and the avatar of the second user output in the virtual scene being less than a second preset distance, the 'group photo' option may be output at a position corresponding to the second user in the virtual scene; in response to a triggering operation of the first user on the 'group photo' option, the photo interaction page may be opened; further, a plurality of preset photo templates and the 'shoot the same style' option corresponding to each photo template may be output in the photo interaction page; and in response to a triggering operation of the first user on the 'shoot the same style' option corresponding to the photo template B, the target photo template selected by the first user may be acquired as the photo template B.
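The two embodiments above both hinge on a distance check that decides when the photo interaction option is shown. A minimal TypeScript sketch of such a check follows; the threshold values and function names are illustrative assumptions, not values given in this application.

```typescript
// Sketch: show the photo interaction option when the first user's avatar is
// within a preset distance of either a preset photo position (e.g. the virtual
// 'photo studio') or another user's avatar. Thresholds are illustrative.
interface Point { x: number; y: number; }

const STUDIO_TRIGGER_DISTANCE = 5;  // the "first preset distance" in the text
const AVATAR_TRIGGER_DISTANCE = 3;  // the "second preset distance" in the text

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function shouldShowPhotoOption(
  firstAvatar: Point,
  studio: Point,
  otherAvatars: Point[]
): boolean {
  if (distance(firstAvatar, studio) < STUDIO_TRIGGER_DISTANCE) return true;
  return otherAvatars.some(
    (other) => distance(firstAvatar, other) < AVATAR_TRIGGER_DISTANCE
  );
}
```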
Step 104: acquiring a first photo position selected by the first user from the plurality of photo positions; and the other photo positions except the first photo position contained in the target photo template are second photo positions reserved for a second user.
For example, as shown in fig. 2, in response to a triggering operation of the first user on the 'shoot the same style' option corresponding to the photo template B, the target photo template selected by the first user may be acquired as the photo template B; the photo template B can comprise two photo positions, namely a station B01 and a station B02; further, in response to a selection operation of the first user on the station B01, the first photo position selected by the first user may be acquired as the station B01 contained in the photo template B, and the second photo position reserved for the second user may be determined as the station B02 contained in the photo template B.
In the step 104, in response to the triggering operation of the first user on the "switch station" option in the photo interaction page, the station B01 may be updated to the selected state, and the station B02 may be updated to the unselected state, or the station B02 may be updated to the selected state, and the station B01 may be updated to the unselected state; if the station B01 is in a selected state in the group photo interaction page, responding to the triggering operation of the first user for the option of 'preview effect' in the group photo interaction page, and detecting the selection operation of the first user for the station B01.
Or in the step 104, in response to the triggering operation of the first user on the position of the station B01 in the photo interaction page, the selection operation of the first user on the station B01 may be detected.
Step 106: and responding to the first user initiated photo operation, and rendering the avatar of the first user to the first photo position.
For example, as shown in fig. 2, the photo interaction page may include a photo option for initiating a photo operation, where the photo option may be a "preview effect" option; after the photo template B and the station B01 selected by the first user are acquired, in response to a triggering operation of the first user on a preview effect option in the photo interaction page, a photo operation initiated by the first user may be detected, and an avatar of the first user may be rendered to the first photo position (i.e., the station B01 included in the photo template B).
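One plausible way for a browser-based first client to render the avatar into the selected photo position is to composite two images on a 2D canvas, as sketched below. The slot geometry and function names are assumptions for illustration; the application does not specify a rendering mechanism.

```typescript
// Sketch: compositing the first user's avatar onto the selected slot of the
// template with the 2D canvas API (browser environment assumed).
interface Slot { x: number; y: number; width: number; height: number; }

async function loadImage(url: string): Promise<HTMLImageElement> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url;
  });
}

async function renderAvatarToSlot(
  canvas: HTMLCanvasElement,
  templateUrl: string,
  avatarUrl: string,
  slot: Slot
): Promise<void> {
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D context unavailable");
  const [template, avatar] = await Promise.all([
    loadImage(templateUrl),
    loadImage(avatarUrl),
  ]);
  canvas.width = template.width;
  canvas.height = template.height;
  ctx.drawImage(template, 0, 0);                                   // template background
  ctx.drawImage(avatar, slot.x, slot.y, slot.width, slot.height);  // first user's avatar
}
```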
Step 108: and responding to the sharing operation initiated by the first user, sharing the rendered target photo template with a second client corresponding to the second user, so that the second client responds to the photo confirming operation initiated by the second user, and rendering the avatar of the second user to the second photo position.
For example, referring to fig. 5, fig. 5 is a schematic diagram illustrating a sharing operation according to an exemplary embodiment. The photo template B, with the avatar of the first user rendered to the station B01, can be output in the photo interaction page; the photo interaction page can also include an 'invite friends to a group photo' option for initiating the sharing operation; and in response to a triggering operation of the first user on the 'invite friends to a group photo' option in the photo interaction page, the sharing operation initiated by the first user can be detected, and the rendered photo template B can be shared with the second client corresponding to the second user.
In one embodiment shown, the first client may be integrated with a plurality of sharing channels; in this case, in response to the sharing operation initiated by the first user, sharing the rendered target photo template with the second client corresponding to the second user may specifically include: in response to the sharing operation initiated by the first user, acquiring a target sharing channel selected by the first user from the plurality of sharing channels, and sharing the rendered target photo template with the second client corresponding to the second user through the target sharing channel.
For example, in response to a triggering operation of the first user on the 'invite friends to a group photo' option in the photo interaction page, the sharing operation initiated by the first user may be detected, and sharing channels respectively corresponding to different instant messaging applications may then be output in the photo interaction page; further, in response to a selection operation of the first user on a target sharing channel, the target sharing channel selected by the first user from the plurality of sharing channels integrated with the first client can be acquired, and the rendered photo template B can be shared with the second client corresponding to the second user through the target sharing channel.
It should be noted that, in the embodiment shown above, since the first client corresponding to the first user may share the rendered target photo template with the second client corresponding to the second user through different sharing channels, an interaction range in which the first user invites other users to perform the photo interaction based on the avatar may be extended, so as to improve a photo interaction experience of the first user.
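A simple way to picture the "multiple sharing channels" arrangement is a registry of channel adapters from which the first user's choice is looked up, as in the following TypeScript sketch; the channel names and payload fields are illustrative assumptions.

```typescript
// Sketch: the first client is integrated with several sharing channels; the
// first user picks one and the rendered template is dispatched through it.
interface SharePayload {
  templateId: string;
  firstSlotId: string;      // slot already occupied by the first user
  renderedImageUrl: string; // template with the first user's avatar rendered in
}

type ShareChannel = (payload: SharePayload, toUserId: string) => Promise<void>;

const channels: Record<string, ShareChannel> = {
  // Each entry would wrap a different instant-messaging app's share SDK.
  appA: async (payload, toUserId) => { /* call app A's share API here */ },
  appB: async (payload, toUserId) => { /* call app B's share API here */ },
};

async function shareRenderedTemplate(
  channelName: string,
  payload: SharePayload,
  secondUserId: string
): Promise<void> {
  const channel = channels[channelName];
  if (!channel) throw new Error(`no such sharing channel: ${channelName}`);
  await channel(payload, secondUserId);
}
```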
In this specification, the first client corresponding to the first user may specifically share the rendered target photo template with the second client corresponding to the second user in the form of a graphic code, a character password, or the like.
In an embodiment, in response to the sharing operation initiated by the first user, sharing the rendered target photo template with the second client corresponding to the second user may specifically include: generating a graphic code corresponding to the rendered target photo template in response to the sharing operation initiated by the first user, and sharing the generated graphic code with a second client corresponding to the second user, so that the second client scans the graphic code to obtain the rendered target photo template corresponding to the graphic code; the graphic codes are used for sharing the rendered target photo templates.
For example, as shown in fig. 5, in response to a triggering operation of the first user on the 'invite friends to a group photo' option in the photo interaction page, the sharing operation initiated by the first user may be detected, a two-dimensional code corresponding to the rendered photo template B may be generated, a sharing picture may be generated based on the generated two-dimensional code and the rendered photo template B, and the sharing picture may be shared with the second client corresponding to the second user; the second client can then scan the two-dimensional code included in the sharing picture shared by the first client to obtain the rendered photo template B.
In the embodiment shown above, the sharing picture generated based on the rendered target photo template and the graphic code is shared with the second client, so that the avatar of the first user, as well as the reserved second photo position and its outline, can be shown to the invited second user more intuitively, thereby increasing the second user's enthusiasm for participating in the photo interaction.
In the above embodiments, the graphic code corresponding to the rendered target photo template is not necessarily transmitted directly to the second client, which is not limited in this specification. For example, the graphic code corresponding to the rendered target photo template may be output on another device, so that the second user may scan the graphic code presented on that device through the second client.
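The graphic code ultimately just needs to carry enough information for the second client to recover the rendered target photo template. The sketch below shows one way to encode and decode such a payload as a link that any QR code library could render as a graphic code; the URL scheme and parameter names are assumptions for illustration only.

```typescript
// Sketch: building the string that a graphic code (e.g. a QR code) would carry.
// The second client decodes this string after scanning and fetches the rendered
// template; any QR library could turn `link` into the actual graphic code image.
interface ShareState {
  templateId: string;
  firstSlotId: string;       // slot occupied by the first user's avatar
  reservedSlotIds: string[]; // slots left for the second user
}

function buildShareLink(base: string, state: ShareState): string {
  const params = new URLSearchParams({
    template: state.templateId,
    taken: state.firstSlotId,
    reserved: state.reservedSlotIds.join(","),
  });
  return `${base}?${params.toString()}`;
}

function parseShareLink(link: string): ShareState {
  const params = new URL(link).searchParams;
  return {
    templateId: params.get("template") ?? "",
    firstSlotId: params.get("taken") ?? "",
    reservedSlotIds: (params.get("reserved") ?? "").split(",").filter(Boolean),
  };
}

const link = buildShareLink("https://example.com/group-photo", {
  templateId: "template-B",
  firstSlotId: "B01",
  reservedSlotIds: ["B02"],
});
// Scanning the code on the second client recovers the share state:
console.log(parseShareLink(link).reservedSlotIds); // ["B02"]
```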
In another embodiment, in response to the sharing operation initiated by the first user, sharing the rendered target photo template with the second client corresponding to the second user may specifically include: responding to the sharing operation initiated by the first user, generating a character password corresponding to the rendered target photo template, and sharing the generated character password to a second client corresponding to the second user, so that the second client recognizes the character password and acquires the rendered target photo template corresponding to the character password; the character password is used for sharing the rendered target photo template.
For example, in response to a triggering operation of the first user on the 'invite friends to a group photo' option in the photo interaction page, the sharing operation initiated by the first user may be detected, a character password (for example, a text-based share password) corresponding to the rendered photo template B may be generated, and the generated character password may be shared with the second client corresponding to the second user, so that the second client can recognize the character password shared by the first client and acquire the rendered photo template B.
In the embodiment shown above, the character password corresponding to the rendered target photo template is shared with the second client; compared with sharing a picture, this saves the network traffic consumed by the sharing, improving the user's photo interaction experience.
In the above embodiments, the character password corresponding to the rendered target photo template is not necessarily transmitted directly to the second client, which is not limited in this specification. For example, the second client may obtain a character password manually entered by the second user; alternatively, the second client may obtain a character password copied from a third party.
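A character password can be thought of as a short token that maps to the share state on some backing store, generated by the first client and later resolved by the second client. The following TypeScript sketch illustrates this idea with an in-memory map standing in for the real store; the token format and function names are assumptions.

```typescript
// Sketch: a "character password" as a short random token mapped to share state.
const ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // avoids look-alike characters

function generatePassword(length = 8): string {
  let token = "";
  for (let i = 0; i < length; i++) {
    token += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return token;
}

// In-memory stand-in for whatever backing store maps passwords to share state.
const passwordStore = new Map<string, { templateId: string; firstSlotId: string }>();

// First client: register the share state and get a password to send or paste.
function registerPassword(state: { templateId: string; firstSlotId: string }): string {
  const token = generatePassword();
  passwordStore.set(token, state);
  return token;
}

// Second client: recognise a pasted or typed password and recover the state.
function resolvePassword(token: string) {
  return passwordStore.get(token.trim().toUpperCase()) ?? null;
}
```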
In one embodiment shown, the second user may be a user having a social relationship with the first user; in this case, before sharing the rendered target photo template with the second client corresponding to the second user, the method may further include: acquiring a second user selected by the first user from a user group having a social relationship with the first user.
For example, in response to a triggering operation of the first user on the 'invite friends to a group photo' option in the photo interaction page, the sharing operation initiated by the first user may be detected, a list of users having a social relationship with the first user may be output in the photo interaction page, and the second user selected by the first user from the user list for photo interaction may be obtained; further, the rendered photo template B may be shared with the second user selected by the first user.
Referring to fig. 6, fig. 6 is a flowchart illustrating another avatar-based interaction method according to an exemplary embodiment. The method may be applied to a second client corresponding to a second user. The method may perform the steps of:
Step 602: acquiring a target photo template shared by a first client corresponding to a first user; wherein the target photo template comprises a plurality of photo positions; the plurality of photo positions include a first photo position selected by the first user and a second photo position reserved for a second user; the avatar of the first user has been rendered on the first photo position.
For example, referring to fig. 7, fig. 7 is a schematic diagram of a photo interaction page according to an exemplary embodiment. As shown in fig. 7, the second client may obtain the rendered photo template B shared by the first client; the station B01 contained in the photo template B can be the first photo position selected by the first user, and the station B02 contained in the photo template B can be the second photo position reserved for the second user; the avatar of the first user has been rendered on the station B01 contained in the photo template B.
In an embodiment shown, obtaining a target photo template shared by a first client corresponding to a first user may specifically include: and scanning the graphic codes shared by the first client to obtain a rendered target photo template corresponding to the graphic codes.
In another embodiment, the obtaining a target photo template shared by the first client corresponding to the first user may specifically include: and identifying the character password shared by the first client side, and obtaining a rendered target photo template corresponding to the character password.
Step 604: and in response to a photo validation operation initiated by the second user, rendering the avatar of the second user to the second photo location to generate a photo comprising the avatar of the first user and the avatar of the second user.
For example, as shown in fig. 7, the photo interaction page may include a "confirm photo" option for initiating a photo confirmation operation; after acquiring a photo template B which is shared by the first client and used for rendering the avatar of the first user to a station B01, responding to a triggering operation of the second user for a 'confirmation photo' option in the photo interaction page, and detecting a photo confirmation operation initiated by the second user, rendering the avatar of the second user to a second photo position reserved for the second user (namely, a station B02 contained in the photo template B) so as to generate a photo comprising the avatar of the first user and the avatar of the second user.
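On the second client, the confirmation step amounts to drawing the second user's avatar into the reserved position and exporting the finished group photo picture. A minimal canvas-based TypeScript sketch is shown below; it assumes the canvas already contains the template with the first user's avatar rendered in, and all names are illustrative.

```typescript
// Sketch: after the second user confirms, the second client draws its own
// avatar into the reserved slot and exports the group photo as an image.
interface Slot { x: number; y: number; width: number; height: number; }

function exportGroupPhoto(
  canvas: HTMLCanvasElement,
  secondAvatar: HTMLImageElement,
  reservedSlot: Slot
): string {
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D context unavailable");
  // The canvas already holds the template with the first user's avatar rendered.
  ctx.drawImage(
    secondAvatar,
    reservedSlot.x,
    reservedSlot.y,
    reservedSlot.width,
    reservedSlot.height
  );
  // Export the composited group photo as a PNG data URL for saving or sharing.
  return canvas.toDataURL("image/png");
}
```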
In one embodiment shown, the second photo position may be at least one photo position reserved for the second user; in this case, in response to the photo confirmation operation initiated by the second user, rendering the avatar of the second user to the second photo position may include: in response to the photo confirmation operation initiated by the second user, acquiring a target photo position selected by the second user from the at least one photo position, and rendering the avatar of the second user to the target photo position.
For example, if the target photo template selected by the first user is a photo template E, the photo template E may contain three photo positions, namely a station E01, a station E02, and a station E03; if the first photo position selected by the first user is the station E01, the station E02 and the station E03 may be second photo positions reserved for the second user. In this case, after acquiring the photo template E shared by the first client, in which the avatar of the first user has been rendered to the station E01, in response to a triggering operation of the second user on the 'confirm photo' option in the photo interaction page, the photo confirmation operation initiated by the second user may be detected; the target photo position selected by the second user from the second photo positions may then be acquired as the station E03, and the avatar of the second user may be rendered to the station E03 selected by the second user from the plurality of reserved second photo positions.
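When more than one photo position is reserved for the second user, the confirmation flow first has to resolve which reserved position the avatar should be rendered to. A small TypeScript sketch of that resolution step follows; the slot model and error handling are assumptions for illustration.

```typescript
// Sketch: picking the target photo position among the slots reserved for the
// second user before rendering the second user's avatar.
interface Slot { id: string; occupiedBy?: string; }

function resolveSecondSlot(slots: Slot[], chosenId?: string): Slot {
  const free = slots.filter((s) => !s.occupiedBy);
  if (free.length === 0) throw new Error("no photo position left");
  if (free.length === 1) return free[0];              // e.g. only B02 remains
  const chosen = free.find((s) => s.id === chosenId); // e.g. E03 out of E02/E03
  if (!chosen) throw new Error("a reserved photo position must be selected first");
  return chosen;
}
```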
In one embodiment shown, the second client may need to first determine whether the second user has created a corresponding avatar before generating a group photo picture containing the avatars of the first user and the second user. In this case, before rendering the avatar of the second user to the second photo position in response to the photo confirmation operation initiated by the second user, the method may further include: determining whether an avatar of the second user has been created; and if the avatar of the second user has not been created, prompting the second user to create an avatar.
For example, in response to obtaining the rendered photo template B shared by the first client, the second client may first determine whether an avatar of the second user has been created; if the avatar of the second user has not been created, a prompt message such as 'You have not yet created an avatar, create one now' can be output in the photo interaction page to prompt the second user to create an avatar. Optionally, a creation option for triggering avatar creation can also be output in the photo interaction page; in response to a triggering operation of the second user on the creation option in the photo interaction page, an avatar creation page can be opened, so that the second user first creates his or her own avatar through the second client, and the group photo picture containing the avatar of the first user and the avatar of the second user is then generated.
For another example, in response to obtaining the rendered photo template B shared by the first client, the second client may first determine whether an avatar of the second user has been created; if it is determined that the avatar of the second user has been created, the rendered target photo template shared by the first client and a 'confirm photo' option for initiating the photo confirmation operation can be output in the photo interaction page; and in response to a triggering operation of the second user on the 'confirm photo' option in the photo interaction page, the photo confirmation operation initiated by the second user can be detected, and the avatar of the second user can be rendered to the second photo position reserved for the second user (namely, the station B02 contained in the photo template B), so as to generate a group photo picture containing the avatar of the first user and the avatar of the second user.
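The avatar-existence check described above can be sketched as a small guard that runs before rendering on the second client, as in the following TypeScript example; the lookup and prompt callbacks are assumptions standing in for whatever the client actually uses.

```typescript
// Sketch: check whether the second user has an avatar before rendering; if not,
// prompt creation and skip rendering for now.
interface Avatar { userId: string; imageUrl: string; }

async function ensureAvatar(
  userId: string,
  fetchAvatar: (id: string) => Promise<Avatar | null>,
  promptCreation: () => void
): Promise<Avatar | null> {
  const avatar = await fetchAvatar(userId);
  if (!avatar) {
    // Show the "You have not yet created an avatar" prompt with a shortcut to
    // the avatar creation page; rendering resumes once creation has finished.
    promptCreation();
    return null;
  }
  return avatar;
}
```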
According to the above technical solution, on the one hand, the first user can select the target photo template from the preset at least one photo template through the first client, then select the first photo position from the plurality of photo positions contained in the target photo template, and reserve the other photo positions contained in the target photo template for the second user as second photo positions. The group photo picture subsequently generated by rendering the avatar of the first user and the avatar of the second user to the first photo position and the second photo position, respectively, is therefore a picture generated according to the personal preference of the first user, which improves the experience of the first user in avatar-based photo interaction.
On the other hand, the first client corresponding to the first user need only render the avatar of the first user to the first photo position selected by the first user and share the rendered target photo template with the second client corresponding to the second user; after obtaining the target photo template shared by the first client, the second client can render the avatar of the second user to the second photo position reserved for the second user, thereby generating a group photo picture comprising the avatar of the first user and the avatar of the second user. This improves the interactivity between the first user and the second user during the avatar-based photo interaction, and improves the experience of the second user in avatar-based photo interaction.
Corresponding to the above-mentioned embodiments of the avatar-based interaction method, the present specification also provides embodiments of an avatar-based interaction apparatus.
Referring to fig. 8, fig. 8 is a hardware configuration diagram of an electronic device in which an avatar-based interactive apparatus is located, according to an exemplary embodiment. At the hardware level, the device includes a processor 802, an internal bus 804, a network interface 806, a memory 808, and a non-volatile storage 810, and may of course also include other hardware required by the service. One or more embodiments of the present description may be implemented in a software-based manner, for example by the processor 802 reading a corresponding computer program from the non-volatile storage 810 into the memory 808 and then running it. Of course, in addition to software implementations, one or more embodiments of the present disclosure do not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units, but may also be hardware or logic devices.
Referring to fig. 9, fig. 9 is a block diagram illustrating an avatar-based interactive apparatus according to an exemplary embodiment. The avatar-based interactive apparatus may be applied to an electronic device as shown in fig. 8 to implement the technical scheme of the present specification. Wherein, the avatar-based interactive apparatus may include:
a first template obtaining unit 902, configured to obtain a target photo template selected by the first user from at least one preset photo template; wherein the target photo template comprises a plurality of photo positions;
a first position obtaining unit 904, configured to obtain a first photo position selected by the first user from the plurality of photo positions; the target photo-combining template comprises a first photo-combining position and a second photo-combining position, wherein the first photo-combining position is reserved for a first user;
a first rendering unit 906 for rendering the avatar of the first user to the first syndication position in response to the first user initiated syndication operation;
and the sharing unit 908 is configured to share the rendered target photo template with a second client corresponding to the second user in response to the sharing operation initiated by the first user, so that the second client renders the avatar of the second user to the second photo position in response to the photo confirmation operation initiated by the second user.
In this embodiment, the apparatus further includes:
an output unit for outputting a group photo interaction option for group photo interaction with the avatar of the second user in the virtual scene in response to a distance between the avatar of the first user output in the virtual scene and a preset group photo position in the virtual scene or the avatar of the second user being less than a preset distance;
the first template obtaining unit 902 is specifically configured to:
and responding to the triggering operation of the first user on the photo interaction options, and acquiring a target photo template selected by the first user from at least one preset photo template.
In this embodiment, the apparatus further includes:
and a control unit for controlling the avatar of the first user to move in the virtual scene in response to a movement operation of the first user in the virtual scene with respect to the avatar of the first user.
In this embodiment, the first client is docked with a plurality of sharing channels;
the sharing unit 908 is specifically configured to:
and responding to the sharing operation initiated by the first user, acquiring a target sharing channel selected by the first user from the plurality of sharing channels, and sharing the rendered target photo template with a second client corresponding to the second user through the target sharing channel.
In this embodiment, the sharing unit 908 is specifically configured to:
generating a graphic code corresponding to the rendered target photo template in response to the sharing operation initiated by the first user, and sharing the generated graphic code with a second client corresponding to the second user, so that the second client scans the graphic code to obtain the rendered target photo template corresponding to the graphic code; the graphic codes are used for sharing the rendered target photo templates.
In this embodiment, the sharing unit 908 is specifically configured to:
responding to the sharing operation initiated by the first user, generating a character password corresponding to the rendered target photo template, and sharing the generated character password to a second client corresponding to the second user, so that the second client recognizes the character password and acquires the rendered target photo template corresponding to the character password; the character password is used for sharing the rendered target photo template.
In this embodiment, the second user is a user having a social relationship with the first user;
The apparatus further comprises:
a user obtaining unit, configured to obtain the second user selected by the first user from a group of users having a social relationship with the first user.
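A minimal sketch of the user obtaining unit follows; the relationship store is an illustrative stand-in for whatever social graph the first client can query.

from typing import Dict, List

_social_graph: Dict[str, List[str]] = {"user_a": ["user_b", "user_c"]}  # first user -> related users


def candidate_recipients(first_user: str) -> List[str]:
    return _social_graph.get(first_user, [])


def select_second_user(first_user: str, chosen: str) -> str:
    if chosen not in candidate_recipients(first_user):
        raise ValueError("the chosen user has no social relationship with the first user")
    return chosen


print(select_second_user("user_a", "user_b"))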
Referring to fig. 10, fig. 10 is a block diagram of another avatar-based interaction apparatus according to an exemplary embodiment. The avatar-based interaction apparatus may be applied to an electronic device as shown in fig. 8 to implement the technical solutions of the present specification. The avatar-based interaction apparatus may include:
a second template obtaining unit 1002, configured to obtain a target group photo template shared by a first client corresponding to a first user; wherein the target group photo template comprises a plurality of group photo positions, the plurality of group photo positions comprising a first group photo position selected by the first user and a second group photo position reserved for the second user, and the avatar of the first user has been rendered at the first group photo position; and
a second rendering unit 1004, configured to render the avatar of the second user to the second group photo position in response to a group photo confirmation operation initiated by the second user, to generate a group photo picture comprising the avatar of the first user and the avatar of the second user.
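The second client's side of the flow can be sketched as follows: the shared template already carries the first user's avatar, the second user's avatar is rendered into the reserved position, and a group photo record is produced. Returning a dictionary stands in for actual image composition; the field names are assumptions.

from typing import Dict, List


def confirm_group_photo(shared_template: Dict, second_avatar_id: str) -> Dict:
    # Positions that are still free are the ones reserved for the second user.
    reserved: List[str] = [p for p in shared_template["positions"]
                           if p not in shared_template["rendered"]]
    if not reserved:
        raise ValueError("no group photo position is reserved for the second user")
    shared_template["rendered"][reserved[0]] = second_avatar_id
    # In a real client this step would rasterize the template plus both avatars
    # into a picture; the returned record stands in for that image.
    return {"template_id": shared_template["template_id"],
            "group_photo_of": list(shared_template["rendered"].values())}


shared = {"template_id": "beach",
          "positions": ["left", "right"],
          "rendered": {"left": "avatar_a"}}   # the first user's avatar is already rendered
print(confirm_group_photo(shared, "avatar_b"))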
In this embodiment, the second group photo position is at least one group photo position reserved for the second user;
the second rendering unit 1004 is specifically configured to:
acquire a target group photo position selected by the second user from the at least one group photo position in response to the group photo confirmation operation initiated by the second user, and render the avatar of the second user to the target group photo position.
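When several positions are reserved for the second user, the selection step can be sketched as listing the free positions and rendering to the one the second user picks; the data layout below is an assumption carried over from the earlier sketches.

from typing import Dict, List


def reserved_positions(shared_template: Dict) -> List[str]:
    return [p for p in shared_template["positions"]
            if p not in shared_template["rendered"]]


def render_to_selected_position(shared_template: Dict, second_avatar_id: str, selected: str) -> None:
    if selected not in reserved_positions(shared_template):
        raise ValueError(f"{selected!r} is not a position reserved for the second user")
    shared_template["rendered"][selected] = second_avatar_id


shared = {"template_id": "park",
          "positions": ["left", "middle", "right"],
          "rendered": {"middle": "avatar_a"}}
print("positions reserved for the second user:", reserved_positions(shared))
render_to_selected_position(shared, "avatar_b", "right")   # the second user picks "right"
print("after rendering:", shared["rendered"])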
In this embodiment, the apparatus further includes:
a determining unit, configured to determine whether the avatar of the second user has been created; and
a creation prompt unit, configured to prompt the second user to create an avatar if the avatar of the second user has not been created.
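The pre-check performed by the determining unit and the creation prompt unit can be sketched as a single lookup with a fallback prompt; the repository and the prompt wording are assumptions.

from typing import Dict, Optional

_avatar_repository: Dict[str, str] = {"user_a": "avatar_a"}  # user id -> avatar id


def ensure_avatar(user_id: str) -> Optional[str]:
    avatar_id = _avatar_repository.get(user_id)
    if avatar_id is None:
        # Creation prompt unit: guide the user into the avatar creation flow.
        print(f"{user_id}: no avatar has been created yet; create one to join the group photo")
        return None
    return avatar_id


print(ensure_avatar("user_a"))   # existing avatar -> returned directly
print(ensure_avatar("user_b"))   # missing avatar -> creation prompt, returns None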
For details of the implementation processes of the functions and roles of the units in the above apparatus, refer to the implementation processes of the corresponding steps in the above method; details are not described herein again.
Since the apparatus embodiments substantially correspond to the method embodiments, for related parts, reference may be made to the descriptions in the method embodiments. The apparatus embodiments described above are merely illustrative, where the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of this specification. A person of ordinary skill in the art can understand and implement the embodiments without creative efforts.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, a random access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of the computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, a phase-change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technology, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a magnetic cassette, magnetic disk storage, a quantum memory, graphene-based storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
User information (including but not limited to user device information, user personal information, and the like) and data (including but not limited to data used for analysis, stored data, displayed data, and the like) involved in this specification are information and data authorized by the users or fully authorized by all parties. The collection, use, and processing of the relevant data shall comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entrances are provided for users to choose to authorize or refuse.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present specification to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments of the present specification. The word "if" as used herein may be interpreted as "when," "upon," or "in response to determining," depending on the context.
The above descriptions are merely preferred embodiments of one or more embodiments of this specification, and are not intended to limit the one or more embodiments of this specification to the particular embodiments described.

Claims (14)

1. An avatar-based interaction method, applied to a first client corresponding to a first user, the method comprising:
acquiring a target group photo template selected by the first user from at least one preset group photo template; wherein the target group photo template comprises a plurality of group photo positions;
acquiring a first group photo position selected by the first user from the plurality of group photo positions; wherein the plurality of group photo positions comprise the first group photo position and a second group photo position reserved for a second user;
rendering an avatar of the first user to the first group photo position in response to a group photo operation initiated by the first user; and
sharing the rendered target group photo template with a second client corresponding to the second user in response to a sharing operation initiated by the first user, so that the second client renders an avatar of the second user to the second group photo position in response to a group photo confirmation operation initiated by the second user.
2. The method of claim 1, wherein before acquiring the target group photo template selected by the first user from the at least one preset group photo template, the method further comprises:
outputting, in a virtual scene, a group photo interaction option for group photo interaction with the avatar of the second user, in response to a distance between the avatar of the first user output in the virtual scene and a preset group photo position in the virtual scene, or between the avatar of the first user and the avatar of the second user, being less than a preset distance; and
acquiring the target group photo template selected by the first user from the at least one preset group photo template in response to a triggering operation of the first user on the group photo interaction option.
3. The method of claim 2, wherein before outputting, in the virtual scene, the group photo interaction option for group photo interaction with the avatar of the second user in response to the distance between the avatar of the first user output in the virtual scene and the preset group photo position in the virtual scene, or between the avatar of the first user and the avatar of the second user, being less than the preset distance, the method further comprises:
controlling the avatar of the first user to move in the virtual scene in response to a movement operation performed by the first user on the avatar of the first user in the virtual scene.
4. The method of claim 1, wherein the first client interfaces with a plurality of sharing channels; and
sharing the rendered target group photo template with the second client corresponding to the second user in response to the sharing operation initiated by the first user comprises:
acquiring a target sharing channel selected by the first user from the plurality of sharing channels in response to the sharing operation initiated by the first user, and sharing the rendered target group photo template with the second client corresponding to the second user through the target sharing channel.
5. The method of claim 1, wherein sharing the rendered target group photo template with the second client corresponding to the second user in response to the sharing operation initiated by the first user comprises:
generating a graphic code corresponding to the rendered target group photo template in response to the sharing operation initiated by the first user, and sharing the generated graphic code with the second client corresponding to the second user, so that the second client scans the graphic code to obtain the rendered target group photo template corresponding to the graphic code; wherein the graphic code is used for sharing the rendered target group photo template.
6. The method of claim 1, wherein sharing the rendered target group photo template with the second client corresponding to the second user in response to the sharing operation initiated by the first user comprises:
generating a character password corresponding to the rendered target group photo template in response to the sharing operation initiated by the first user, and sharing the generated character password with the second client corresponding to the second user, so that the second client recognizes the character password and obtains the rendered target group photo template corresponding to the character password; wherein the character password is used for sharing the rendered target group photo template.
7. The method of claim 1, wherein the second user is a user having a social relationship with the first user; and
before sharing the rendered target group photo template with the second client corresponding to the second user, the method further comprises:
acquiring the second user selected by the first user from a group of users having a social relationship with the first user.
8. An avatar-based interaction method, applied to a second client corresponding to a second user, the method comprising:
acquiring a target group photo template shared by a first client corresponding to a first user; wherein the target group photo template comprises a plurality of group photo positions, the plurality of group photo positions comprising a first group photo position selected by the first user and a second group photo position reserved for the second user, and an avatar of the first user has been rendered at the first group photo position; and
rendering an avatar of the second user to the second group photo position in response to a group photo confirmation operation initiated by the second user, to generate a group photo picture comprising the avatar of the first user and the avatar of the second user.
9. The method of claim 8, wherein the second group photo position is at least one group photo position reserved for the second user; and
rendering the avatar of the second user to the second group photo position in response to the group photo confirmation operation initiated by the second user comprises:
acquiring a target group photo position selected by the second user from the at least one group photo position in response to the group photo confirmation operation initiated by the second user, and rendering the avatar of the second user to the target group photo position.
10. The method of claim 8, wherein before rendering the avatar of the second user to the second group photo position in response to the group photo confirmation operation initiated by the second user, the method further comprises:
determining whether the avatar of the second user has been created; and
prompting the second user to create an avatar if the avatar of the second user has not been created.
11. An avatar-based interaction apparatus, applied to a first client corresponding to a first user, the apparatus comprising:
a first template obtaining unit, configured to obtain a target group photo template selected by the first user from at least one preset group photo template; wherein the target group photo template comprises a plurality of group photo positions;
a first position obtaining unit, configured to obtain a first group photo position selected by the first user from the plurality of group photo positions; wherein the plurality of group photo positions comprise the first group photo position and a second group photo position reserved for a second user;
a first rendering unit, configured to render an avatar of the first user to the first group photo position in response to a group photo operation initiated by the first user; and
a sharing unit, configured to share the rendered target group photo template with a second client corresponding to the second user in response to a sharing operation initiated by the first user, so that the second client renders an avatar of the second user to the second group photo position in response to a group photo confirmation operation initiated by the second user.
12. An avatar-based interaction apparatus, applied to a second client corresponding to a second user, the apparatus comprising:
a second template obtaining unit, configured to obtain a target group photo template shared by a first client corresponding to a first user; wherein the target group photo template comprises a plurality of group photo positions, the plurality of group photo positions comprising a first group photo position selected by the first user and a second group photo position reserved for the second user, and an avatar of the first user has been rendered at the first group photo position; and
a second rendering unit, configured to render an avatar of the second user to the second group photo position in response to a group photo confirmation operation initiated by the second user, to generate a group photo picture comprising the avatar of the first user and the avatar of the second user.
13. An electronic device, comprising a communication interface, a processor, a memory, and a bus, wherein the communication interface, the processor, and the memory are connected to each other through the bus;
the memory stores machine-readable instructions, and the processor executes the method of any one of claims 1-7 or 8-10 by invoking the machine-readable instructions.
14. A machine-readable storage medium storing machine-readable instructions which, when invoked and executed by a processor, implement the method of any one of claims 1-7 or 8-10.
CN202310029013.0A 2023-01-09 2023-01-09 Virtual image-based interaction method and device, electronic equipment and storage medium Pending CN116309951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310029013.0A CN116309951A (en) 2023-01-09 2023-01-09 Virtual image-based interaction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310029013.0A CN116309951A (en) 2023-01-09 2023-01-09 Virtual image-based interaction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116309951A true CN116309951A (en) 2023-06-23

Family

ID=86784081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310029013.0A Pending CN116309951A (en) 2023-01-09 2023-01-09 Virtual image-based interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116309951A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination