CN113112614A - Interaction method and device based on augmented reality

Info

Publication number: CN113112614A
Application number: CN202110267750.5A
Authority: CN (China)
Prior art keywords: capability, attribute, interactive, virtual, action
Legal status: Granted (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113112614B
Inventors: 吴瑾, 段青龙, 季婧, 吴承军, 程佳慧, 王亚迪
Current assignee: Advanced New Technologies Co Ltd
Original assignee: Advanced New Technologies Co Ltd
Application filed by Advanced New Technologies Co Ltd
Priority: CN202110267750.5A
Publication of CN113112614A
Application granted; publication of CN113112614B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Abstract

This specification proposes an interaction method based on augmented reality, including: displaying a virtual interactive object, in an augmented manner, in a scanned live-action picture; in response to a user-initiated interactive operation with the virtual interactive object, acquiring an action image sequence corresponding to an interactive action of the virtual interactive object; and rendering and fusing the acquired action image sequence with the live-action picture to generate an interactive image sequence, and generating an interaction record with the virtual interactive object based on the interactive image sequence.

Description

Interaction method and device based on augmented reality
Technical Field
The application relates to the field of augmented reality, in particular to an interaction method and device based on augmented reality.
Background
Augmented Reality (AR) is a technology that acquires a live-action picture by scanning the real environment in real time and superimposes corresponding virtual data (such as images, videos, and 3D models) on that picture, thereby fusing the virtual world with the real world. Because the virtual data is superimposed on the live-action picture, the display effect of the live-action picture is enhanced, and introducing AR technology can provide users with a brand-new interactive experience.
Disclosure of Invention
The present specification provides an interaction method based on augmented reality, the method comprising:
displaying a virtual interactive object, in an augmented manner, in a scanned live-action picture;
in response to a user-initiated interactive operation with the virtual interactive object, acquiring an action image sequence corresponding to an interactive action of the virtual interactive object;
rendering and fusing the acquired action image sequence with the live-action picture to generate an interactive image sequence, and generating an interaction record with the virtual interactive object based on the interactive image sequence.
This specification also provides an interaction device based on augmented reality, the device comprising:
a display module, configured to display a virtual interactive object, in an augmented manner, in a scanned live-action picture;
an acquisition module, configured to acquire, in response to a user-initiated interactive operation with the virtual interactive object, an action image sequence corresponding to an interactive action of the virtual interactive object;
a fusion module, configured to render and fuse the acquired action image sequence with the live-action picture to generate an interactive image sequence;
and a generating module, configured to generate an interaction record with the virtual interactive object based on the interactive image sequence.
This specification also proposes an electronic device including:
a processor;
a memory for storing machine executable instructions;
wherein, by reading and executing machine-executable instructions stored in the memory that correspond to augmented reality-based interaction logic, the processor is caused to:
display a virtual interactive object, in an augmented manner, in a scanned live-action picture;
in response to a user-initiated interactive operation with the virtual interactive object, acquire an action image sequence corresponding to an interactive action of the virtual interactive object;
render and fuse the acquired action image sequence with the live-action picture to generate an interactive image sequence, and generate an interaction record with the virtual interactive object based on the interactive image sequence.
In the above technical solution, on the one hand, because the virtual interactive object is displayed in the live-action picture in an augmented manner, the virtual interactive object can be integrated into the live-action picture to interact with the user, which improves the realism of the interaction between the user and the virtual interactive object;
on the other hand, the user-initiated interactive operation with the virtual interactive object triggers the rendering and fusing of the action image sequence corresponding to the object's interactive action with the live-action picture, and an interaction record is generated from the fusion result. For the user, the initiated interactive operation therefore ultimately shapes the content of the generated interaction record: by initiating different forms of interactive operations, the user can trigger the fusion of the object's diversified interactive actions with the live-action picture and generate interaction records with rich content.
Drawings
Fig. 1 is a flowchart of an interaction method based on augmented reality according to an exemplary embodiment.
Fig. 2 is a schematic diagram of displaying a virtual interactive object, in an augmented manner, in a live-action picture, according to an exemplary embodiment.
FIG. 3 is a schematic diagram of a group photo interaction with a virtual interactive object according to an exemplary embodiment.
FIG. 4 is a schematic diagram of a capability-enhancing interaction with a virtual interactive object according to an exemplary embodiment.
FIG. 5 is a schematic diagram of a decorative virtual interactive object, according to an exemplary embodiment.
FIG. 6 is a schematic diagram of viewing the capability level of a virtual interactive object, according to an exemplary embodiment.
FIG. 7 is a schematic diagram of viewing an interaction record with a virtual interactive object, according to an exemplary embodiment.
Fig. 8 is a block diagram of an augmented reality-based interaction device according to an exemplary embodiment.
Fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
The present specification aims to provide an interaction mode in which a user interacts with a virtual interactive object displayed in an augmented manner in a live-action picture, thereby triggering the rendering and fusing of an action image sequence, corresponding to an interactive action of the object, with the live-action picture, and then generating an interaction record.
In implementation, a user can scan the real environment through an AR client; in response to a user-initiated operation for displaying the virtual interactive object in the live-action picture, the AR client can display a preset virtual interactive object, in an augmented manner, in the scanned live-action picture.
For example, in practical applications, the virtual interactive object may be a dynamic avatar (e.g., a dynamic 3D cartoon character) capable of performing various interactive actions to interact with the user; a "call" button corresponding to the virtual interactive object may be provided in the scanned live-action picture, and the user may trigger the "call" button by clicking or the like, causing the AR client to display the virtual interactive object at a relative position in the live-action picture.
After the AR client displays the virtual interactive object in the live-action picture, the user can interact with it by initiating a corresponding interactive operation. Upon detecting such an operation, the AR client can, in response, acquire an action image sequence (e.g., an action animation) corresponding to the interactive action of the virtual interactive object, and render and fuse the acquired action image sequence with the live-action picture to generate an interactive image sequence; an interaction record with the virtual interactive object can then be further generated based on the generated interactive image sequence.
In this technical solution, by interacting with the virtual interactive object displayed in an augmented manner in the live-action picture, the user triggers the rendering and fusing of the action image sequence corresponding to the object's interactive action with the live-action picture, and an interaction record is then generated based on the resulting interactive image sequence.
On the one hand, because the virtual interactive object is displayed in the live-action picture in an augmented manner, it can be integrated into the live-action picture to interact with the user, which improves the realism of the interaction;
on the other hand, because the user-initiated interactive operation triggers the fusion and the interaction record is generated from the fusion result, the initiated operation ultimately shapes the content of the generated record: by initiating different forms of interactive operations, the user can trigger the fusion of diversified interactive actions with the live-action picture and generate interaction records with rich content.
The present application is described below with reference to specific embodiments and specific application scenarios.
Referring to fig. 1, fig. 1 is a flowchart illustrating an interaction method based on augmented reality, applied to an AR client, according to an embodiment of the present application; the method includes the following steps (see the sketch after this list):
Step 102, displaying a virtual interactive object, in an augmented manner, in a scanned live-action picture;
Step 104, in response to a user-initiated interactive operation with the virtual interactive object, acquiring an action image sequence corresponding to an interactive action of the virtual interactive object;
Step 106, rendering and fusing the acquired action image sequence with the live-action picture to generate an interactive image sequence;
Step 108, generating an interaction record with the virtual interactive object based on the interactive image sequence.
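The four steps above amount to a simple client-side pipeline. Below is a minimal Python sketch of that pipeline, assuming a toy string representation for frames; the class and method names (ARClient, fetch_default_object, fetch_action_sequence) are illustrative assumptions, not APIs defined by this specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InteractionRecord:
    """Step 108: an interaction record built from an interactive image sequence."""
    frames: List[str]

class ARClient:
    def __init__(self, ar_server):
        self.ar_server = ar_server  # backend that manages the virtual data

    def interact(self, live_frames: List[str], user_op: str) -> InteractionRecord:
        # Step 102: display the virtual interactive object in the scanned
        # live-action picture in an augmented manner.
        obj = self.ar_server.fetch_default_object()
        # Step 104: fetch the action image sequence for the triggered action.
        action_seq = self.ar_server.fetch_action_sequence(obj, user_op)
        # Step 106: render-fuse each action frame onto a live-action frame.
        interactive_seq = [f"{bg}+{fg}" for bg, fg in zip(live_frames, action_seq)]
        # Step 108: generate the interaction record from the fused sequence.
        return InteractionRecord(frames=interactive_seq)
```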
The AR client refers to client software developed based on AR technology or integrating an AR function; for example, the AR client may be an APP integrating an AR service function. Through its image-scanning capability, the AR client scans the real environment in which the user is located to obtain a live-action picture; through its built-in AR engine, it visually renders the virtual data pushed by the backend AR server (such as image sequences to be displayed in the live-action picture in an augmented manner), and superimposes and fuses that virtual data with the live-action picture to complete the augmented display.
The AR server refers to a server that provides services for the AR client, a server cluster, or a distributed service platform built on a server cluster; for example, the AR server may be a distributed platform that provides a backend service for an APP integrating an AR function. The AR server performs content management on the virtual data to be displayed in the live-action picture scanned by the AR client, and pushes that virtual data to the AR client.
The virtual interactive object refers to a dynamic avatar of any form that can perform diversified interactive actions to interact with a user; for example, a dynamic 3D cartoon character capable of making various interactive actions.
It should be noted that the interactive actions the virtual interactive object can perform may be designed, based on actual interaction requirements, by the operator of the AR client or by an independent software vendor (ISV) that provides services to the operator, with a corresponding action image sequence (i.e., an animation file for the interactive action) generated for each action.
The generated action image sequences corresponding to the interactive actions of the virtual interactive object can be uniformly maintained and managed by the AR server as virtual data to be displayed in the live-action picture in an augmented manner.
The interaction record is used to record the result of the user's interaction with the virtual interactive object; its specific form is not particularly limited in this specification.
For example, in one implementation, taking a 3D cartoon character as the virtual interactive object, the interaction record may be presented to the user in the abstracted form of a diary kept by the 3D cartoon character.
In this specification, an AR scanning entry may be provided in the user interface of the AR client; the user can enter the AR scanning interface by triggering the entry and initiate image scanning of the real environment.
For example, in one implementation, a "scan" entry may be provided on the user homepage of the AR client, and the user may enter the AR scanning interface by triggering the "scan" entry to scan the real environment.
The AR client can output the scanned live-action picture in real time in the AR scanning interface and, by default, display in that picture an operation entry for triggering the augmented display of the virtual interactive object, so that the user can use the entry to initiate the display of the virtual interactive object in the live-action picture output by the AR scanning interface.
The following description takes, as the virtual interactive object, a virtual 3D cartoon character named "star treasure".
Referring to fig. 2, fig. 2 is a schematic diagram, according to this specification, of displaying a virtual interactive object in a live-action picture in an augmented manner.
As shown in fig. 2, the operation entry may specifically be a "call" button corresponding to "star treasure"; the user may trigger the "call" button by clicking or the like to call "star treasure" into the live-action picture for augmented display.
After detecting the user's trigger on the "call" button, the AR client may acquire, from the AR server, an action image sequence (e.g., an animation file) for an interactive action of the virtual interactive object, and display the acquired action image sequence at a relative position in the live-action picture.
For example, in one implementation, among the several types of interactive actions of the virtual interactive object maintained and managed at the AR server, one type may be set as the default for display; after detecting the user's trigger on the "call" button, the AR client may obtain, from the AR server, the action image sequence corresponding to the default interactive action and display it in the live-action picture.
In this specification, after the AR client displays the virtual interactive object in the live-action picture, the user may initiate specific interactive operations in the live-action picture to interact with the object.
In one illustrated embodiment, the user may interact directly with the virtual interactive object;
for example, in implementation, the user can touch the virtual interactive object displayed in the live-action picture to trigger it to perform different interactive actions. In this case, the interactive action performed may be randomly triggered by the user's touch: after detecting the touch operation, the AR client may acquire the action image sequence corresponding to an interactive action randomly assigned by the AR server, and then display the acquired sequence in the live-action picture.
In another illustrated embodiment, the user may also interact with the virtual interactive object in specific interactive scenarios through several interactive entries displayed by the AR client in the live-action picture in an augmented manner.
In this case, the AR client may display several interactive entries in the live-action picture, each corresponding to an interactive scenario, defined by the operator of the AR client, for interaction between the user and the virtual interactive object; the user can trigger these entries to initiate the corresponding interactive operations and interact with the object in different scenarios.
It should be noted that the interaction modes and specific interaction forms between the user and the virtual interactive object are not particularly limited in this specification; in practical applications, the operator of the AR client may flexibly customize a variety of interactive scenarios based on actual interaction requirements.
The following describes these in detail with reference to specific interactive scenarios.
1) Group photo interaction with "star treasure"
In one interactive scenario, the user can initiate a group photo interaction of "star treasure" with the live-action picture, triggering the AR client to render and fuse the action image sequence corresponding to the currently displayed interactive action of "star treasure" with the live-action picture, and then generate an interaction record of the group photo interaction.
Referring to fig. 3, fig. 3 is a schematic diagram, according to this specification, of a group photo interaction with a virtual interactive object.
As shown in fig. 3, the AR client may display a "group photo interaction" button in the live-action picture in an augmented manner; the user can trigger the button by clicking or the like to initiate the interactive operation of taking a group photo with "star treasure" in the live-action picture.
After detecting the user's trigger on the "group photo interaction" button, the AR client can, in response to the user-initiated group photo operation, obtain from the AR server the action image sequence corresponding to the currently displayed interactive action of "star treasure".
Further, after acquiring that action image sequence from the AR server, the AR client renders and fuses it with the live-action picture through its AR engine to generate an interactive image sequence; an interaction record of the group photo interaction with "star treasure" can then be further generated based on the generated interactive image sequence.
For example, while traveling, a user can scan the surrounding real environment through the AR client, call "star treasure" into the scanned live-action picture, and then, by initiating a group photo interaction with "star treasure" in the live-action picture, trigger the AR client to render and fuse the animation file corresponding to the currently displayed interactive action of "star treasure" with the live-action picture to generate a group photo animation file; the AR client may then generate a "travel diary" of "star treasure" based on the group photo animation file, as a record of the interaction between the user and "star treasure".
2) Capability-enhancement interaction with "star treasure"
In another illustrated interactive scenario, capability attributes of multiple categories can be set for "star treasure"; the capability attribute of each category can be divided into several attribute levels, and raising an attribute level can trigger the unlocking of a new interactive action for "star treasure".
Furthermore, interactive operations capable of improving the capability attributes of "star treasure" can be defined; the user can initiate such an operation in the live-action picture to trigger a level-up event for a capability attribute of some category of "star treasure" and thereby unlock a new interactive action.
The AR client can then acquire the newly unlocked interactive action of "star treasure" and, after rendering and fusing the action image sequence corresponding to the unlocked action with the live-action picture, generate an interaction record of improving the capability attributes of "star treasure".
That is, in this interactive scenario, an improvement in a capability attribute of "star treasure" triggers the AR client to generate an interaction record documenting that improvement; moreover, interaction records triggered by improvements to different capability attributes of "star treasure" differ in content.
In an illustrated embodiment, the interactive operation defined for "star treasure" as capable of improving its capability attributes may specifically be an operation of performing image recognition on a target item in the live-action picture. That is, the user may initiate image recognition of a target item in the live-action picture to trigger a level-up event for a capability attribute of some category of "star treasure" and unlock a new interactive action for it.
In this scenario, the capability attribute of each category of "star treasure" can be associated with a set of items designated for improving that attribute. Through the AR client, the user can scan, in the live-action picture, items associated with the capability attributes of the various categories, helping "star treasure" to explore the real world and triggering level-up events for the corresponding capability attributes.
The following description takes, as an example, image recognition of a target item in the live-action picture as the interactive operation defined for improving the capability attributes of "star treasure".
It should be emphasized that this is only an example; in practical applications, the operator of the AR client may define other forms of interactive operations for improving the capability attributes of the virtual interactive object, which are not listed one by one in this specification.
For example, in another scenario, the interactive operation defined as capable of improving the capability attributes of the virtual interactive object may be the operation of "calling" the object at a new LBS location; the capability attribute of each category may then be associated with a set of LBS locations designated for improving that attribute. In that scenario, the user can call the virtual interactive object at a new LBS location through the AR client to improve the corresponding capability attribute.
Referring to fig. 4, fig. 4 is a schematic diagram, according to this specification, of a capability-enhancement interaction with a virtual interactive object.
As shown in fig. 4, the AR client may display an "explore world" button in the live-action picture in an augmented manner. The user can trigger the "explore world" button by clicking or the like to initiate the interactive operation of performing image recognition on a target item in the live-action picture, thereby helping "star treasure" explore the real world and improving its capability attributes.
After detecting the user's trigger on the "explore world" button, the AR client can enter the AR image-scanning interface and, in response to the user-initiated operation, scan the target item to collect its image features and perform image recognition on it based on the collected features.
It should be noted that the image recognition process described above may be completed either on the AR client or on the AR server.
For example, in one case, the AR client may synchronize, in advance, the image feature sample library stored on the AR server to the local device; the library stores image feature samples of a number of predefined items. After collecting the image features of the target item, the AR client can perform similarity matching between those features and the samples in the library; when the extracted features match the feature sample of any predefined item, it can be confirmed that the target item has been successfully recognized in the scanned live-action picture. In another case, the AR client may instead upload the image features of the target item to the AR server, which performs the recognition in the same manner and returns the result to the AR client.
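As a concrete illustration of the similarity-matching step, here is a minimal Python sketch. It assumes image features are fixed-length vectors compared by cosine similarity, with an arbitrary 0.8 acceptance threshold; the feature extractor, library layout, and threshold are all assumptions, since the specification does not fix them.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_item(target_features, sample_library, threshold=0.8):
    """Return the predefined item whose feature sample best matches the
    scanned target's features, or None if nothing clears the threshold."""
    best_item, best_score = None, threshold
    for item_name, sample in sample_library.items():
        score = cosine_similarity(target_features, sample)
        if score >= best_score:
            best_item, best_score = item_name, score
    return best_item  # None means recognition failed
```

The same routine can run on the AR client against a locally synchronized sample library, or on the AR server with the client uploading the target's features.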
After completing the image recognition of the target item, the AR client determines, based on the recognition result, whether the target item matches an item associated with the capability attribute of any category of "star treasure".
On the one hand, if the target item matches an item associated with a capability attribute of some category, the AR client may raise the attribute value of that capability attribute by a first preset increment;
on the other hand, if the target item matches none of the items associated with the capability attributes of the various categories, the attribute values of all the capability attributes may be raised simultaneously by a second preset increment, which may be much smaller than the first.
In this way, when the target item matches an item associated with a capability attribute of some category, the attribute value of that capability attribute is raised by a larger increment; when it matches none, the attribute values of all categories are raised by a smaller one.
This ensures that, whether or not the target item matches an item associated with any category's capability attribute, the attribute values are improved to some extent, which can improve the user's interactive experience.
Of course, in practical applications, if the target item matches none of the associated items, the attribute values may equally be left unchanged; this is not particularly limited in this specification.
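A minimal sketch of this boost rule follows. The increment values (+10 and +1) and the attribute names are illustrative assumptions; the specification only requires that the second increment be much smaller than the first.

```python
FIRST_INCREMENT = 10   # target item matches some capability's item set
SECOND_INCREMENT = 1   # no match: every capability gets a small boost

def boost_attributes(attrs, item_sets, recognized_item):
    """attrs: e.g. {"life": 640, "sport": 638}; item_sets: e.g.
    {"life": {"book", "cup"}, "sport": {"ball"}} (hypothetical data)."""
    matched = [cap for cap, items in item_sets.items() if recognized_item in items]
    if matched:
        for cap in matched:           # larger boost for the matched category
            attrs[cap] += FIRST_INCREMENT
    else:
        for cap in attrs:             # small boost across every category
            attrs[cap] += SECOND_INCREMENT
    return attrs
```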
Referring to fig. 5, in an illustrated embodiment, the AR client may also display a "decorate" button in the live-action picture in an augmented manner; the user may trigger the "decorate" button by clicking or the like to start decorating the avatar of "star treasure" displayed in the live-action picture and customize it with a favorite look.
After detecting the user's trigger on the "decorate" button, the AR client can enter the list of decoration elements configured for "star treasure"; the user can then select favorite decoration elements from the list to decorate "star treasure" and customize its avatar.
The categories of decoration elements, and the specific elements presented in the list, are not particularly limited in this specification;
for example, the decoration element list shown in fig. 5 includes categories of decoration elements such as "headwear", "neckwear", and "clothes".
With continued reference to fig. 5, in an illustrated embodiment, the decoration element list may include a small number of elements the user can select directly, as well as a number of elements that have not yet been unlocked;
each not-yet-unlocked decoration element may be associated with one or more designated items.
In this case, when the image recognition of the target item is completed, the AR client may further determine, based on the recognition result, whether the target item is a designated item associated with any not-yet-unlocked decoration element; if so, the AR client may obtain the decoration element associated with that designated item, add it to the decoration element list, and unlock it there.
In this way, when the user performs image recognition on a target item in the live-action picture to improve the capability attributes of "star treasure", there is a certain probability of also obtaining a decoration element for decorating "star treasure", which can improve the user's interactive experience.
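A minimal sketch of the unlock check, under the assumption that each locked decoration element maps to a set of designated item names:

```python
def unlock_decorations(recognized_item, locked, unlocked):
    """locked: e.g. {"crown": {"toy crown", "tiara"}} (hypothetical data);
    moves any element whose designated items include the recognized item
    into the unlocked set, mirroring the flow described above."""
    for element, designated_items in list(locked.items()):
        if recognized_item in designated_items:
            unlocked.add(element)   # now selectable in the decoration list
            del locked[element]
    return unlocked
```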
In an illustrated embodiment, in addition to dividing attribute levels for the capability attributes of the various categories of "star treasure", the AR client may compute a weighted sum of the attribute values of those capability attributes to obtain a total capability value of "star treasure", calculate the capability level of "star treasure" from that total, and output both the total capability value and the capability level to the user in the live-action picture.
Referring to fig. 6, in an illustrated embodiment, the AR client may display a "capability level" button in the live-action picture in an augmented manner; the user may trigger the "capability level" button, such as by clicking, to view the attribute values and attribute levels of the capability attributes of the various categories of "star treasure", the total capability value computed by the AR client, and the capability level calculated from that total.
After detecting the user's trigger on the "capability level" button, the AR client can enter the capability attribute interface of "star treasure" and show the user, through that interface, the attribute values of the capability attributes of the various categories, the computed total capability value, and the capability level calculated from it.
For example, the capability attributes set for "star treasure" shown in fig. 6 include "life", "sport", "creation", and "social"; the capability attribute interface displays the attribute value of each. The AR client can assign each of the four capability attributes a weight coefficient, multiply each attribute value by its weight, and sum the products to obtain the total capability value of "star treasure"; the capability level of "star treasure" is then calculated from that total, and both the total capability value and the capability level are displayed in the capability attribute interface.
It should be noted that, in this specification, whether grading the capability attribute of each category of "star treasure" or grading the total capability value of "star treasure", the grading can be realized by setting a threshold for each level.
For example, referring to fig. 6, the "life", "sport", "creation", and "social" capability attributes of "star treasure" can each be divided into several attribute levels, with a threshold set for each level; if the attribute value of a capability attribute reaches the threshold of a certain attribute level, the attribute is considered promoted to that level.
Similarly, several capability levels can be defined for the total capability value of "star treasure", with a threshold set for each; if the weighted total reaches the threshold of a certain capability level, the total capability value of "star treasure" is considered promoted to that level. For example, as shown in fig. 6, the current capability level of "star treasure" is level 5, and the threshold of the next level, level 6, is 800; that is, when the total capability value of "star treasure" reaches 800, its capability level will rise to level 6.
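The weighted total and threshold-based grading can be sketched as follows. The weight coefficients and all thresholds except the level-6 value of 800 (the one data point fig. 6 gives) are illustrative assumptions.

```python
WEIGHTS = {"life": 0.3, "sport": 0.3, "creation": 0.2, "social": 0.2}
LEVEL_THRESHOLDS = [0, 50, 150, 300, 500, 800]  # min total for levels 1..6

def total_capability(attrs):
    # Weighted sum of the per-category attribute values.
    return sum(attrs[cap] * weight for cap, weight in WEIGHTS.items())

def capability_level(total):
    # Highest level whose threshold the total has reached.
    level = 0
    for lvl, threshold in enumerate(LEVEL_THRESHOLDS, start=1):
        if total >= threshold:
            level = lvl
    return level
```

For instance, a weighted total of 750 yields level 5 here, and crossing 800 promotes "star treasure" to level 6, matching the example in fig. 6.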
In this specification, when a user scans, through the AR client, items associated with the capability attributes of the various categories of "star treasure" in the live-action picture, the triggered level-up events may include the following types:
first, an event in which the attribute level of a capability attribute of some category of "star treasure" is raised;
second, an event in which the capability level of "star treasure" is raised;
third, an event in which the attribute level of a capability attribute of some category and the capability level of "star treasure" are raised at the same time.
In an illustrated embodiment, take the first type, an event in which the attribute level of a capability attribute of some category of "star treasure" is raised. If the AR client completes image recognition of a target item scanned by the user and raises the attribute values of the capability attributes of one or more categories according to the recognition result, it may further determine whether the raised attribute values trigger a promotion of the attribute level of any capability attribute of "star treasure".
For example, if the attribute value of a capability attribute of some category is raised, the AR client may compare the raised value with the thresholds of that attribute's levels to determine whether a level promotion is triggered. As shown in fig. 6, suppose the attribute value of the "sport" capability attribute of "star treasure" after the raise is 648, and the threshold of its next attribute level is 800; the AR client then compares 648 with 800, and since the raised value is below the threshold, the raise is considered not to trigger a promotion of the attribute level of the "sport" capability attribute.
If the raised attribute values do trigger a promotion of the attribute level of a capability attribute of "star treasure", the AR client can further determine, among the capability attributes of all categories, the target capability attribute whose attribute level was promoted.
On the one hand, if only a single capability attribute had its attribute level promoted, that is, the raised values triggered a promotion for exactly one of the multiple capability attributes of "star treasure", that capability attribute can be determined as the target capability attribute.
For example, referring to fig. 6, suppose the AR client determines from the recognition result that the target item scanned by the user matches an item associated with the "life" capability attribute of "star treasure"; the attribute value of the "life" attribute is then raised by the first preset increment. If the raised value triggers an attribute-level promotion event for the "life" attribute, the "life" capability attribute can be determined as the target capability attribute.
On the other hand, if capability attributes of multiple categories had their attribute levels promoted, that is, the raised values triggered promotions for several capability attributes of "star treasure", the attribute values of those promoted attributes may be compared, and the one with the highest attribute value determined as the target capability attribute.
Referring to fig. 6, suppose the AR client determines that the target item scanned by the user matches none of the items associated with the capability attributes of the various categories; the attribute values of the "life", "sport", "creation", and "social" attributes are then each raised by the second preset increment.
In this case, if the raised values trigger attribute-level promotion events for the "life", "sport", "creation", and "social" attributes simultaneously, the attribute values of those attributes may be compared, and the "life" attribute, having the highest value, determined as the target capability attribute.
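Both branches of this selection rule reduce to a short function; the names and data shapes are illustrative assumptions:

```python
def pick_target_attribute(promoted, attrs):
    """promoted: names of capability attributes whose attribute level rose;
    attrs: current attribute values. Returns the target capability attribute."""
    if not promoted:
        return None                  # no level-up, so no action to unlock
    if len(promoted) == 1:
        return promoted[0]           # the unique promoted attribute wins
    return max(promoted, key=lambda cap: attrs[cap])  # highest value wins
```

In the fig. 6 example above, promoted = ["life", "sport", "creation", "social"] would return "life", the attribute with the highest value.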
Further, after determining the target capability attribute in the manner shown above, the AR client may acquire from the AR server the action image sequence corresponding to the interactive action newly unlocked when the target attribute's level was promoted, render and fuse the acquired sequence with the live-action picture to generate an interactive image sequence, and generate, based on it, an interaction record of the capability-enhancement interaction with "star treasure".
For example, a user can scan the surrounding real environment through the AR client, call "star treasure" into the scanned live-action picture, and then initiate image recognition of items in the live-action picture associated with the capability attributes of the various categories of "star treasure" to improve those attributes and unlock new interactive actions. If a capability attribute of "star treasure" is promoted in level and a new interactive action is unlocked, the AR client can acquire the animation file corresponding to the newly unlocked action and render and fuse it with the live-action picture; the AR client may then generate a "capability promotion diary" of "star treasure", based on the animation file produced by the fusion, as a record of the interaction between the user and "star treasure".
The specific file formats of the action image sequence and the interactive image sequence are not particularly limited in this specification; for example, in one embodiment, both may be picture files in GIF format.
In an illustrated embodiment, take the second type, an event in which the overall capability level of "star treasure" is raised. If the AR client completes image recognition of a target item scanned by the user and raises the attribute values of the capability attributes of one or more categories according to the recognition result, it may further determine whether the raised values trigger a promotion of the capability level of "star treasure".
For example, after the attribute values of one or more capability attributes are raised, the AR client may recompute the weighted total of the attribute values of all categories, recalculate the capability level of "star treasure" from that total, and compare the recalculated level with the current level to determine whether the capability level of "star treasure" has risen.
If the raised attribute values trigger a promotion of the capability level of "star treasure", the AR client can further determine, among the capability attributes of the various categories, the target capability attribute whose current attribute level is the same as the new capability level of "star treasure".
On the one hand, if only a single capability attribute has a current attribute level equal to the capability level of "star treasure", that attribute can be determined as the target capability attribute.
For example, referring to fig. 6, suppose the capability level of "star treasure" rises from level 4 to level 5; the AR client may then check whether any of the "life", "sport", "creation", and "social" capability attributes is also at level 5. If only the "life" attribute is at level 5 and the others are below it, the "life" capability attribute may be determined as the target capability attribute.
On the other hand, if several capability attributes have current attribute levels equal to the capability level of "star treasure", their attribute values may be compared, and the one with the highest value determined as the target capability attribute.
For example, referring to fig. 6, suppose the capability level of "star treasure" rises from level 4 to level 5 and all of the "life", "sport", "creation", and "social" attributes are at level 5; then the "life" attribute, having the highest attribute value among them, may be determined as the target capability attribute.
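This second selection rule differs from the first only in how candidates are found: they are the attributes whose current attribute level equals the new overall capability level. A minimal sketch, with assumed data shapes:

```python
def pick_target_by_capability_level(new_level, attr_levels, attrs):
    """attr_levels: current attribute level per capability attribute;
    new_level: the freshly promoted overall capability level of the object."""
    candidates = [cap for cap, lvl in attr_levels.items() if lvl == new_level]
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0]
    return max(candidates, key=lambda cap: attrs[cap])  # tie-break by value
```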
Further, after determining the target capability attribute in the manner described above, the AR client may acquire from the AR server the action image sequence corresponding to the interactive action unlocked when the target attribute's level was promoted to its current level, render and fuse the acquired sequence with the live-action picture to generate an interactive image sequence, and generate, based on it, an interaction record of the capability-enhancement interaction with "star treasure".
For example, the AR client may again generate a "capability promotion diary" of "star treasure", based on the animation file produced by the fusion, as a record of the interaction between the user and "star treasure".
In an illustrated embodiment, take the third type, an event in which an attribute level and the capability level of "star treasure" are raised at the same time. If the AR client completes image recognition of a target item scanned by the user and raises the attribute values of the capability attributes of one or more categories according to the recognition result, it may further determine whether the raised values simultaneously trigger a promotion of the attribute level of a capability attribute of "star treasure" and a promotion of the capability level of "star treasure".
If so, the AR client can further determine, among the capability attributes of the various categories of "star treasure", the target capability attribute whose current attribute level is the same as the capability level of "star treasure".
On the one hand, if only a single capability attribute has a current attribute level equal to the capability level of "star treasure", that attribute can be determined as the target capability attribute;
on the other hand, if several capability attributes have current attribute levels equal to the capability level of "star treasure", their attribute values may be compared, and the one with the highest value determined as the target capability attribute.
Further, after determining the target capability attribute in the manner shown above, the AR client may still acquire from the AR server the action image sequence corresponding to the interactive action unlocked when the target attribute's level was promoted to its current level, render and fuse the acquired sequence with the live-action picture to generate an interactive image sequence, and generate, based on it, an interaction record of the capability-enhancement interaction with "star treasure"; the specific process is not described again.
The above embodiments are described using the interactive scenarios of group photo interaction and capability-enhancement interaction with "star treasure" as examples; it should be emphasized that these two scenarios are only exemplary, and in practical applications, the operator of the AR client may flexibly customize other types of interactive scenarios based on actual interaction requirements, which are not listed in this specification.
In this specification, regardless of the interactive scenario, the action image sequence acquired by the AR client from the AR server usually does not include the decoration elements custom-set by the user for "star treasure"; for example, the acquired sequence may be only a skeleton animation of "star treasure" performing a certain interactive action. Therefore, if the AR client directly renders and fuses the acquired action image sequence with the live-action picture, the user's custom decoration elements will be absent from the resulting interactive image sequence, and the user-defined avatar of "star treasure" will be lost in the fusion result.
Based on this, in this specification, when rendering and fusing the acquired action image sequence with the live-action picture, the decoration elements customized by the user for "star treasure" may first be displayed in an augmented manner at preset positions in the acquired action image sequence to generate a corresponding dynamic avatar, and the generated dynamic avatar may then be rendered and fused with the live-action picture.
In an illustrated embodiment, after acquiring the action image sequence from the AR server, the AR client may first obtain the decoration elements currently set by the user for "star treasure" and display them, in an augmented manner, at preset positions in the acquired sequence to generate the dynamic avatar of "star treasure" corresponding to that sequence.
The specific position at which a decoration element is displayed in the action image sequence generally depends on where that element can be worn, and is not particularly limited in this specification; for example, a headwear element of "star treasure" may be displayed at the head position in the action animation of "star treasure".
After generating the dynamic avatar corresponding to the action image sequence, the AR client may render and fuse the generated avatar with the live-action picture to generate the interactive image sequence corresponding to the current dynamic avatar of "star treasure".
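The compositing order described here, decorations onto the skeleton animation first, then fusion with the live-action picture, can be sketched as follows. The dictionary-per-frame representation and anchor names are illustrative assumptions:

```python
from typing import Dict, List

Frame = Dict[str, str]  # e.g., {"body": "wave_pose_3", "head": "crown"}

def build_dynamic_avatar(action_frames: List[Frame],
                         decorations: Dict[str, str]) -> List[Frame]:
    """Overlay each user-set decoration element (anchor -> element, e.g.
    {"head": "crown"}) at its preset position in every skeleton frame."""
    return [{**frame, **decorations} for frame in action_frames]

def render_interactive_sequence(avatar_frames: List[Frame],
                                live_frames: List[Frame]) -> List[Frame]:
    # Fuse each avatar frame over the corresponding live-action frame.
    return [{**bg, **fg} for bg, fg in zip(live_frames, avatar_frames)]
```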
Correspondingly, when the user updates the avatar of "star treasure" by re-selecting decoration elements in the decoration element list, the AR client can obtain the updated decoration elements and, in the same manner, display them at the preset positions in the action image sequence, synchronously updating the dynamic avatar of "star treasure".
Further, the AR client may render and fuse the updated dynamic avatar with the live-action picture again to generate the interactive image sequence corresponding to the updated avatar, and then, based on that sequence, synchronously update the interactive image sequence corresponding to the original avatar in the generated interaction record.
In another embodiment shown, the AR client may also arrange and combine the decoration elements in the decoration element set in advance to generate a plurality of kinds of decoration element collocation;
that is, by means of permutation and combination, all the decoration schemes of "star treasure" that can be obtained based on the existing decoration elements are listed.
After the AR client acquires the action image sequence from the AR server, each generated decorative element can be matched, and enhanced display is respectively carried out at a preset position in the dynamic image sequence to generate a plurality of dynamic virtual images of 'star treasure' corresponding to the action image sequence;
further, the AR client may render and fuse each of the generated "star treasure" dynamic avatars with the live-action pictures, respectively, to generate an interactive image sequence of the "star treasure" corresponding to each of the dynamic avatars.
In this case, when the AR client generates an interaction record for "star bao" based on the generated interaction image sequence, the AR client may select an interaction image sequence corresponding to the current dynamic avatar of "star bao" from the generated interaction image sequences corresponding to each dynamic avatar, and then generate an interaction record for "star bao" according to the selected interaction image sequence.
Correspondingly, when the user updates the avatar of the star treasure by setting the decoration element for the star treasure again in the decoration element list, the AR client can obtain the decoration element updated for the star treasure by the user, and performs enhanced display on the decoration element updated by the user at a preset position in the action image sequence according to the same mode, and performs synchronous update on the avatar of the star treasure.
Further, the AR client may further select an interactive image sequence corresponding to the updated dynamic avatar of the "star treasure" from the generated interactive image sequences corresponding to each dynamic avatar, and then synchronously update the interactive image sequence corresponding to the original pre-updated dynamic avatar in the generated interactive record based on the selected interactive image sequence.
In this way, since the AR client has generated in advance an interactive image sequence of "star treasure" for each kind of dynamic avatar, after the user updates the avatar of "star treasure" the AR client no longer needs to render and fuse the updated dynamic avatar with the live-action picture again; instead, it can directly select the interactive image sequence corresponding to the updated dynamic avatar from the pre-generated sequences and immediately, synchronously replace the interactive image sequence corresponding to the original dynamic avatar in the generated interaction record.
That is, after the avatar of "star treasure" changes, the content of the generated interaction record also changes in real time; to the user, the generated interaction record appears to change dynamically in real time, which can significantly improve the interaction experience.
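A minimal sketch of this precompute-then-select strategy follows; it assumes a collocation can be treated as an unordered set of decoration elements (the "arrangement and combination" above is reduced to subset enumeration here) and reuses the hypothetical compose_avatar and render_and_fuse helpers from the previous sketch.

```python
from itertools import combinations

def all_collocations(elements):
    # Every non-empty subset of the available decoration elements.
    for size in range(1, len(elements) + 1):
        for combo in combinations(sorted(elements), size):
            yield frozenset(combo)

def precompute_sequences(elements, compose_avatar, render_and_fuse):
    # Map each collocation to its already-fused interactive image sequence.
    return {combo: render_and_fuse(compose_avatar(combo))
            for combo in all_collocations(elements)}

def on_avatar_updated(record, cache, current_collocation):
    # No re-rendering: select the matching precomputed sequence directly.
    record.image_sequence = cache[frozenset(current_collocation)]
```

The trade-off is explicit: the render-and-fuse work moves into a one-off precomputation pass, so an avatar update becomes a dictionary lookup, which is what lets the interaction record appear to change in real time.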
In this specification, after the AR client generates, through the above implementation process, an interaction record of the capability-enhancement interactions with "star treasure", the generated interaction record may be displayed to the user;
in one embodiment shown, the AR client may display a "view" button in an enhanced manner in the live-action picture, and the user may trigger the "view" button, for example by clicking it, to view the interaction record generated by the AR client.
After detecting the user's triggering operation on the "view" button, the AR client can enter an interaction record display interface and output the generated interaction record to the user through that interface;
for example, referring to fig. 7, the interaction record generated by the AR client may be abstracted into a diary of "star treasure"; in that case, the "view" button may specifically be a "diary" button, and the user may operate this button to trigger the AR client to output a diary viewing interface for "star treasure", through which diaries generated by the AR client, such as a "travel diary" or a "capability promotion diary", can be viewed.
Through the technical solutions of the above embodiments, on the one hand, since the virtual interactive object is displayed in an enhanced manner in the live-action picture, the virtual interactive figure can be integrated into the live-action picture to interact with the user, thereby improving the sense of reality of the interaction between the user and the virtual interactive object;
on the other hand, an interactive operation initiated by the user with the virtual interactive object triggers the rendering and fusion of the action image sequence corresponding to the interactive action with the live-action picture, and an interaction record is generated from the rendering and fusion result; therefore, the interactive operations the user initiates ultimately affect the content of the generated interaction record, so that by initiating interactive operations of different forms, the user can trigger the fusion of diversified interactive actions of the virtual interactive figure with the live-action picture and generate interaction records with rich content.
Corresponding to the method embodiment, the application also provides an embodiment of the device.
Referring to fig. 8, the present application provides an augmented reality-based interaction device 80, which is applied to an AR client. Referring to fig. 9, the hardware architecture of the client carrying the augmented reality-based interaction device 80 generally includes a CPU, a memory, a non-volatile storage, a network interface, an internal bus, and the like. Taking a software implementation as an example, the augmented reality-based interaction device 80 can generally be understood as a computer program loaded in the memory that forms, after being run by the CPU, a logic device combining software and hardware. The device 80 includes:
a display module 801 for displaying the virtual interactive object in the scanned live-action picture in an enhanced manner;
an obtaining module 802, configured to obtain, in response to an interactive operation with the virtual interactive figure initiated by the user, an action image sequence corresponding to the interactive action of the virtual interactive object;
a fusion module 803, configured to render and fuse the obtained action image sequence with the live-action picture to generate an interactive image sequence;
a generating module 804, configured to generate an interaction record with the virtual interaction image based on the interaction image sequence.
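For orientation, the four modules of device 80 can be pictured as methods on a single class. The sketch below is illustrative only — all signatures and data shapes are assumptions, and the fusion step is again reduced to pairing frames.

```python
class AugmentedRealityInteractionDevice:
    def __init__(self, action_library):
        # action_library: maps an interactive action name to its image sequence.
        self.action_library = action_library

    def display(self, live_frame, virtual_object):      # display module 801
        # Enhanced display: attach the virtual object to the live-action frame.
        return (live_frame, virtual_object)

    def obtain(self, unlocked_action):                  # obtaining module 802
        return self.action_library[unlocked_action]

    def fuse(self, action_images, live_frames):         # fusion module 803
        return list(zip(action_images, live_frames))

    def generate(self, interactive_sequence):           # generating module 804
        return {"record": interactive_sequence}
```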
In this embodiment, the obtaining module 802 is configured to:
in response to the interactive operation initiated by the user with the virtual interactive figure, obtain an action image sequence corresponding to the interactive action currently displayed by the virtual interactive object.
In this embodiment, the virtual interactive object includes capability attributes of multiple categories; wherein, the ability attribute of each category is divided into a plurality of attribute grades; when the attribute level is raised, a new interaction action is triggered to be unlocked;
the acquisition module 802:
responding to an interactive operation initiated by the user with the virtual interactive figure, and determining whether the interactive operation triggers a level-up event corresponding to the capability attribute;
and if the interactive operation triggers a level-up event corresponding to the capability attribute, acquiring an action image sequence corresponding to the unlocked interactive action.
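As a hedged illustration of this level-up check, the sketch below assumes per-level value thresholds and an unlock table mapping each newly reached attribute level to an interactive action; the thresholds, attribute handling, and action names are all invented for illustration.

```python
LEVEL_THRESHOLDS = [0, 100, 300, 600]   # attribute value required for each level
UNLOCKED_ACTIONS = {1: "wave", 2: "dance", 3: "backflip"}   # action unlocked per level

def fetch_action_image_sequence(action):
    # Stand-in for the AR client requesting the action's image sequence
    # from the AR server.
    return [f"{action}_frame_{i}" for i in range(3)]

def attribute_level(value):
    # Highest level whose threshold the attribute value has reached.
    return max(lv for lv, threshold in enumerate(LEVEL_THRESHOLDS) if value >= threshold)

def handle_interaction(attr_values, attr_name, gained):
    before = attribute_level(attr_values[attr_name])
    attr_values[attr_name] += gained
    after = attribute_level(attr_values[attr_name])
    if after > before:  # the interactive operation triggered a level-up event
        return fetch_action_image_sequence(UNLOCKED_ACTIONS[after])
    return None  # no level-up: no newly unlocked action to fetch
```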
In this embodiment, the interactive operation includes an operation of performing image recognition on a target object in the live-action picture; the capability attributes of the various categories are respectively associated with items for improving the capability attributes;
the apparatus 80 further comprises:
an identifying module 805 (not shown in fig. 8) for performing image identification on the target object in the live-action picture in response to a user-initiated interaction operation with the virtual interactive figure;
a promotion module 806 (not shown in FIG. 8) that determines whether the target item matches an item associated with a capability attribute of a respective category of the virtual interactive object based on a result of the image recognition; and if the target object matches an object associated with the capability attribute of any category, improving the attribute value corresponding to the capability attribute of the category based on a first preset amplitude.
In this embodiment, the lifting module 806 further:
if the target object is not matched with the object associated with the capability attribute of each category, respectively improving the attribute value corresponding to the capability attribute of each category based on a second preset amplitude;
wherein the second preset amplitude is lower than the first preset amplitude.
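The matching rule just described can be sketched as follows; the item-to-attribute association table and both preset amplitudes are assumed values.

```python
ITEM_TO_ATTRIBUTE = {"book": "intelligence", "dumbbell": "strength"}   # assumed associations
FIRST_PRESET_AMPLITUDE = 10   # matched item: boost only the associated attribute
SECOND_PRESET_AMPLITUDE = 2   # unmatched item: smaller boost to every attribute

def apply_recognition_result(attr_values, recognized_item):
    attribute = ITEM_TO_ATTRIBUTE.get(recognized_item)
    if attribute is not None:
        # The target item matches an item associated with one category's capability attribute.
        attr_values[attribute] += FIRST_PRESET_AMPLITUDE
    else:
        # No match: raise every category's attribute value by the lower amplitude.
        for name in attr_values:
            attr_values[name] += SECOND_PRESET_AMPLITUDE
    return attr_values
```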
In this embodiment, the apparatus 80 further includes:
a decoration module 807 (not shown in fig. 8) that determines whether the target item is a designated item based on a result of the image recognition; wherein the designated item is associated with a decorative element associated with the virtual interactive object; and if the target article is the specified article, obtaining a decoration element associated with the specified article, and adding the decoration element to a decoration element list related to the virtual interaction object, so that a user sets a decoration element for the virtual interaction object based on the decoration element in the decoration element list.
In this embodiment, the apparatus 80 further includes:
a calculating module 808 (not shown in fig. 8), configured to perform weighted calculation on the attribute values corresponding to the capability attributes of the respective categories to obtain the capability value of the virtual interactive figure, and to calculate a capability level corresponding to the virtual interactive figure based on the capability value;
the output module 809 (not shown in fig. 8) outputs the calculated capability level to the user in the live-action picture.
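A small sketch of the computation performed by modules 808 and 809, with invented weights and level cut-offs:

```python
WEIGHTS = {"strength": 0.4, "intelligence": 0.4, "charm": 0.2}   # assumed per-category weights
CAPABILITY_LEVELS = [0, 50, 150, 400]   # capability value required for each capability level

def capability_value(attr_values):
    # Weighted sum over the per-category attribute values.
    return sum(WEIGHTS[name] * value for name, value in attr_values.items())

def capability_level(value):
    return max(lv for lv, cutoff in enumerate(CAPABILITY_LEVELS) if value >= cutoff)

# Example: {"strength": 200, "intelligence": 100, "charm": 50}
# gives a capability value of 130 and therefore capability level 1.
```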
In this embodiment, the level-up event includes:
a promotion event of an attribute level of the capability attribute; and/or, a boost event for the capability level;
the acquisition module further:
determining whether the boosted attribute value triggers a boost of the attribute level of the capability attribute;
and/or determining whether the boosted attribute value triggers a boost of the capability level.
In this embodiment, the obtaining module 802 further:
determining a target capability attribute with an improved attribute level in the capability attributes of each category;
if the capability attributes of each category comprise the unique capability attribute with the attribute level being improved, determining the capability attribute as a target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes with the attribute grades being improved, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the attribute level of the target capability attribute is raised.
In this embodiment, the obtaining module 802 further:
determining a target capability attribute with the current attribute level being the same as the capability level in the capability attributes of all categories;
if the capability attributes of each category comprise the unique capability attribute with the current attribute level being the same as the capability level, determining the capability attribute as the target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes of which the current attribute level is the same as the capability level, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the target capability attribute is promoted to the current attribute level.
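Both selection rules reduce to the same shape: filter the capability attributes to those that qualify (attribute level just raised, or attribute level equal to the capability level), then take the unique candidate or break ties by the highest attribute value. A sketch, where the qualifies predicate stands in for either rule:

```python
def select_target_attribute(attr_values, qualifies):
    # qualifies(name) -> True if this attribute's level was just raised (first rule)
    # or equals the overall capability level (second rule).
    candidates = [name for name in attr_values if qualifies(name)]
    if not candidates:
        return None
    # A unique candidate is returned as-is; ties are broken by highest value.
    return max(candidates, key=lambda name: attr_values[name])
```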
In this embodiment, the fusion module 803:
acquiring decorative elements set for the virtual interactive object by a user;
the decorative elements set by a user are subjected to enhanced display at preset positions in the action image sequence to generate a dynamic virtual image corresponding to the virtual interaction object;
rendering and fusing the dynamic virtual image and the real scene picture to generate an interactive image sequence corresponding to the dynamic virtual image.
In this embodiment, the fusion module 803:
arranging and combining the decorative elements in a preset decorative element set to generate a plurality of decorative element matches;
performing enhanced display of each generated decoration element collocation at a preset position in the action image sequence, so as to generate a plurality of virtual images corresponding to the virtual interactive object;
and rendering and fusing the plurality of virtual images with the real scene picture respectively to generate an interactive image sequence corresponding to the plurality of virtual images.
In this embodiment, the generating module 804:
acquiring an interactive image sequence corresponding to the current virtual image of the virtual interactive object from the interactive image sequences corresponding to the plurality of virtual images;
and generating an interaction record with the virtual interaction image based on the acquired interaction image sequence.
In this embodiment, the obtaining module 802 further:
acquiring a decoration element updated by a user for the virtual interactive object;
the apparatus 80 further comprises:
an updating module 810 (not shown in fig. 8) for displaying the updated decoration element of the user in an enhanced manner at a preset position in the action image sequence to update the avatar of the virtual interactive object; and acquiring an interactive image sequence corresponding to the updated virtual image from the interactive image sequences corresponding to the plurality of virtual images, and synchronously updating the interactive image sequence in the interactive record based on the acquired interactive image sequence.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The systems, devices, modules or modules illustrated in the above embodiments may be implemented by a computer chip or an entity, or by an article of manufacture with certain functionality. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
Corresponding to the method embodiment, the present specification also provides an embodiment of an electronic device. The electronic device includes: a processor and a memory for storing machine executable instructions; wherein the processor and the memory are typically interconnected by an internal bus. In other possible implementations, the device may also include an external interface to enable communication with other devices or components.
In this embodiment, the processor is caused to:
enhancing and displaying a virtual interactive object in the scanned live-action picture;
responding to the interactive operation with the virtual interactive image initiated by the user, and acquiring an action image sequence corresponding to the interactive action of the virtual interactive object;
rendering and fusing the obtained action image sequence and the real scene picture to generate an interactive image sequence, and generating an interactive record of the virtual interactive image based on the interactive image sequence.
In this embodiment, the processor is caused to:
in response to the interactive operation initiated by the user with the virtual interactive figure, acquiring an action image sequence corresponding to the interactive action currently displayed by the virtual interactive object.
In this embodiment, the virtual interactive object includes capability attributes of multiple categories; wherein, the ability attribute of each category is divided into a plurality of attribute grades; when the attribute level is raised, a new interaction action is triggered to be unlocked;
by reading and executing machine-executable instructions stored by the memory that correspond to augmented reality-based interactive control logic, the processor is caused to:
responding to an interactive operation initiated by the user with the virtual interactive figure, and determining whether the interactive operation triggers a level-up event corresponding to the capability attribute;
and if the interactive operation triggers a level-up event corresponding to the capability attribute, acquiring an action image sequence corresponding to the unlocked interactive action.
In this embodiment, the interactive operation includes an operation of performing image recognition on a target object in the live-action picture; the capability attributes of the various categories are respectively associated with items for improving the capability attributes;
by reading and executing machine-executable instructions stored by the memory that correspond to augmented reality-based interactive control logic, the processor is caused to:
responding to the interactive operation initiated by the user with the virtual interactive figure, and performing image recognition on the target object in the live-action picture;
determining whether the target item matches an item associated with a capability attribute of each category of the virtual interactive object based on a result of the image recognition;
and if the target object matches an object associated with the capability attribute of any category, improving the attribute value corresponding to the capability attribute of the category based on a first preset amplitude.
In this embodiment, the processor is caused to:
if the target object is not matched with the object associated with the capability attribute of each category, respectively improving the attribute value corresponding to the capability attribute of each category based on a second preset amplitude;
wherein the second preset amplitude is lower than the first preset amplitude.
In this embodiment, the processor is caused to:
determining whether the target item is a designated item based on a result of the image recognition; wherein the designated item is associated with a decorative element associated with the virtual interactive object;
and if the target article is the specified article, obtaining a decoration element associated with the specified article, and adding the decoration element to a decoration element list related to the virtual interaction object, so that a user sets a decoration element for the virtual interaction object based on the decoration element in the decoration element list.
In this embodiment, the processor is caused to:
performing weighted calculation on the attribute values corresponding to the capability attributes of the respective categories to obtain the capability value of the virtual interactive figure;
calculating a capability level corresponding to the virtual interactive figure based on the capability value; and
outputting the calculated capability level to the user in the live-action picture.
In this embodiment, the level-up event includes: a promotion event of an attribute level of the capability attribute; and/or, a boost event for the capability level;
by reading and executing machine-executable instructions stored by the memory that correspond to augmented reality-based interactive control logic, the processor is caused to:
determining whether the boosted attribute value triggers a boost of the attribute level of the capability attribute;
and/or determining whether the boosted attribute value triggers a boost of the capability level.
In this embodiment, the processor is caused to:
determining a target capability attribute with an improved attribute level in the capability attributes of each category;
if the capability attributes of each category comprise the unique capability attribute with the attribute level being improved, determining the capability attribute as a target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes with the attribute grades being improved, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the attribute level of the target capability attribute is raised.
In this embodiment, the processor is caused to:
determining a target capability attribute with the current attribute level being the same as the capability level in the capability attributes of all categories;
if the capability attributes of each category comprise the unique capability attribute with the current attribute level being the same as the capability level, determining the capability attribute as the target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes of which the current attribute level is the same as the capability level, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the target capability attribute is promoted to the current attribute level.
In this embodiment, the processor is caused to:
acquiring decorative elements set for the virtual interactive object by a user;
the decorative elements set by a user are subjected to enhanced display at preset positions in the action image sequence to generate a dynamic virtual image corresponding to the virtual interaction object;
rendering and fusing the dynamic virtual image and the real scene picture to generate an interactive image sequence corresponding to the dynamic virtual image.
In this embodiment, the processor is caused to:
arranging and combining the decorative elements in a preset decorative element set to generate a plurality of decorative element matches;
performing enhanced display of each generated decoration element collocation at a preset position in the action image sequence, so as to generate a plurality of virtual images corresponding to the virtual interactive object;
and rendering and fusing the plurality of virtual images with the real scene picture respectively to generate an interactive image sequence corresponding to the plurality of virtual images.
In this embodiment, the processor is caused to:
acquiring an interactive image sequence corresponding to the current virtual image of the virtual interactive object from the interactive image sequences corresponding to the plurality of virtual images;
generating an interaction record with the virtual interaction image based on the acquired interaction image sequence.
In this embodiment, the processor is caused to:
acquiring a decoration element updated by a user for the virtual interactive object;
displaying the decoration element updated by the user in an enhanced manner at the preset position in the action image sequence so as to update the virtual image of the virtual interaction object; and
acquiring an interactive image sequence corresponding to the updated virtual image from the interactive image sequences corresponding to the plurality of virtual images, and synchronously updating the interactive image sequence in the interaction record based on the acquired interactive image sequence.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (27)

1. An augmented reality based interaction method, the method comprising:
enhancing and displaying a virtual interactive object in the scanned live-action picture; the virtual interactive object comprises a plurality of categories of capability attributes; wherein, the ability attribute of each category is divided into a plurality of attribute grades; when the attribute level is improved, a new interaction action is triggered to be unlocked, and the capability attribute of each category is respectively associated with an article for improving the capability attribute;
responding to an interactive operation initiated by a user with the virtual interactive figure, and determining whether the interactive operation triggers a level-up event corresponding to the capability attribute;
if the interactive operation triggers a level-up event corresponding to the capability attribute, acquiring an action image sequence corresponding to the unlocked interactive action;
rendering and fusing the acquired action image sequence and the live-action picture to generate an interactive image sequence, and generating an interactive record with the virtual interactive object based on the interactive image sequence.
2. The method of claim 1, further comprising:
and responding to the interactive operation initiated by the user and the virtual interactive object, and acquiring an action image sequence corresponding to the interactive action currently displayed by the virtual interactive object.
3. The method of claim 1, the interactive operation comprising an operation of image recognition for a target item in the live-action scene; the capability attributes of the various categories are respectively associated with items for improving the capability attributes;
the method further comprises the following steps:
responding to the interactive operation initiated by the user with the virtual interactive figure, and performing image recognition on the target object in the live-action picture;
determining whether the target item matches an item associated with a capability attribute of each category of the virtual interactive object based on a result of the image recognition;
and if the target object matches an object associated with the capability attribute of any category, improving the attribute value corresponding to the capability attribute of the category based on a first preset amplitude.
4. The method of claim 3, further comprising:
if the target object is not matched with the object associated with the capability attribute of each category, respectively improving the attribute value corresponding to the capability attribute of each category based on a second preset amplitude;
wherein the second preset amplitude is lower than the first preset amplitude.
5. The method of claim 4, further comprising:
determining whether the target item is a designated item based on a result of the image recognition; wherein the designated item is associated with a decorative element associated with the virtual interactive object;
and if the target article is the specified article, obtaining a decoration element associated with the specified article, and adding the decoration element to a decoration element list related to the virtual interaction object, so that a user sets a decoration element for the virtual interaction object based on the decoration element in the decoration element list.
6. The method of claim 4, further comprising:
performing weighted calculation on the attribute values corresponding to the capability attributes of the various categories to obtain a capability value of the virtual interactive object;
calculating a capability level corresponding to the virtual interactive object based on the capability value; and
outputting the calculated capability level to the user in the live-action picture.
7. The method of claim 6, the level-up event comprising:
a promotion event of an attribute level of the capability attribute; and/or, a boost event for the capability level;
the determining whether the interactive operation triggers a level-up event corresponding to the capability attribute comprises:
determining whether the boosted attribute value triggers a boost of the attribute level of the capability attribute;
and/or determining whether the boosted attribute value triggers a boost of the capability level.
8. The method of claim 7, the obtaining a sequence of motion images corresponding to an unlocked interactive motion, comprising:
determining a target capability attribute with an improved attribute level in the capability attributes of each category;
if the capability attributes of each category comprise the unique capability attribute with the attribute level being improved, determining the capability attribute as a target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes with the attribute grades being improved, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the attribute level of the target capability attribute is raised.
9. The method of claim 7, the obtaining a sequence of motion images corresponding to an unlocked interactive motion, comprising:
determining a target capability attribute with the current attribute level being the same as the capability level in the capability attributes of all categories;
if the capability attributes of each category comprise the unique capability attribute with the current attribute level being the same as the capability level, determining the capability attribute as the target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes of which the current attribute level is the same as the capability level, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the target capability attribute is promoted to the current attribute level.
10. The method according to claim 1, wherein the rendering and fusing the acquired motion image sequence and the live-action picture to generate an interactive image sequence comprises:
acquiring decorative elements set for the virtual interactive object by a user;
the decorative elements set by a user are subjected to enhanced display at preset positions in the action image sequence to generate a dynamic virtual image corresponding to the virtual interaction object;
rendering and fusing the dynamic virtual image and the real scene picture to generate an interactive image sequence corresponding to the dynamic virtual image.
11. The method according to claim 1, wherein the rendering and fusing the acquired motion image sequence and the live-action picture to generate an interactive image sequence comprises:
arranging and combining the decorative elements in a preset decorative element set to generate a plurality of decorative element matches;
performing enhanced display of each generated decoration element collocation at a preset position in the action image sequence, so as to generate a plurality of virtual images corresponding to the virtual interaction objects;
and rendering and fusing the plurality of virtual images with the real scene picture respectively to generate an interactive image sequence corresponding to the plurality of virtual images.
12. The method of claim 11, the generating an interaction record with the virtual interaction object based on the sequence of interaction images, comprising:
acquiring an interactive image sequence corresponding to the current virtual image of the virtual interactive object from the interactive image sequences corresponding to the plurality of virtual images;
and generating an interaction record with the virtual interaction object based on the acquired interaction image sequence.
13. The method of claim 12, further comprising:
acquiring a decoration element updated by a user for the virtual interactive object;
the decoration elements updated by the user are displayed in an enhanced manner at preset positions in the action image sequence so as to update the virtual image of the virtual interaction object; and
acquiring an interactive image sequence corresponding to the updated virtual image from the interactive image sequences corresponding to the plurality of virtual images, and synchronously updating the interactive image sequence in the interaction record based on the acquired interactive image sequence.
14. An augmented reality based interaction device, the device comprising:
the display module is used for enhancing and displaying the virtual interactive object in the scanned live-action picture; the virtual interactive object is a dynamic virtual cartoon image which can make diversified interactive actions to interact with a user; the virtual interactive object comprises a plurality of categories of capability attributes; wherein, the ability attribute of each category is divided into a plurality of attribute grades; when the attribute level is improved, a new interaction action is triggered to be unlocked, and the capability attribute of each category is respectively associated with an article for improving the capability attribute;
the acquisition module is used for, in response to an interactive operation initiated by a user with the virtual interactive figure, determining whether the interactive operation triggers a level-up event corresponding to the capability attribute;
if the interactive operation triggers a level-up event corresponding to the capability attribute, acquiring an action image sequence corresponding to the unlocked interactive action;
the fusion module is used for rendering and fusing the acquired action image sequence and the live-action picture to generate an interactive image sequence;
and the generating module is used for generating an interaction record with the virtual interaction object based on the interaction image sequence.
15. The apparatus of claim 14, the acquisition module further to:
and responding to the interactive operation initiated by the user and the virtual interactive object, and acquiring an action image sequence corresponding to the interactive action currently displayed by the virtual interactive object.
16. The device of claim 14, the interactive operation comprising an operation of image recognition for a target item in the live-action scene; the capability attributes of the various categories are respectively associated with items for improving the capability attributes;
the device further comprises:
the identification module, in response to the interactive operation initiated by the user with the virtual interactive figure, performs image recognition on the target object in the live-action picture;
a promotion module that determines whether the target item matches an item associated with a capability attribute of each category of the virtual interactive object based on a result of the image recognition; and if the target object matches an object associated with the capability attribute of any category, improving the attribute value corresponding to the capability attribute of the category based on a first preset amplitude.
17. The apparatus of claim 16, the lifting module further to:
if the target object is not matched with the object associated with the capability attribute of each category, respectively improving the attribute value corresponding to the capability attribute of each category based on a second preset amplitude;
wherein the second preset amplitude is lower than the first preset amplitude.
18. The apparatus of claim 17, the apparatus further comprising:
a decoration module for determining whether the target item is a designated item based on the result of the image recognition; wherein the designated item is associated with a decorative element associated with the virtual interactive object; and if the target article is the specified article, obtaining a decoration element associated with the specified article, and adding the decoration element to a decoration element list related to the virtual interaction object, so that a user sets a decoration element for the virtual interaction object based on the decoration element in the decoration element list.
19. The apparatus of claim 17, the apparatus further comprising:
the computing module is used for carrying out weighted computation on the attribute values corresponding to the capability attributes of all the categories to obtain the capability values of the virtual interactive objects; calculating a capability level corresponding to the virtual interactive object based on the capability value;
and the output module is used for outputting the calculated capability level to a user in the live-action picture.
20. The apparatus of claim 19, the level-up event comprising:
a promotion event of an attribute level of the capability attribute; and/or, a boost event for the capability level;
the acquisition module further:
determining whether the boosted attribute value triggers a boost of the attribute level of the capability attribute;
and/or determining whether the boosted attribute value triggers a boost of the capability level.
21. The apparatus of claim 20, the acquisition module further to:
determining a target capability attribute with an improved attribute level in the capability attributes of each category;
if the capability attributes of each category comprise the unique capability attribute with the attribute level being improved, determining the capability attribute as a target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes with the attribute grades being improved, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the attribute level of the target capability attribute is raised.
22. The apparatus of claim 21, the acquisition module further to:
determining a target capability attribute with the current attribute level being the same as the capability level in the capability attributes of all categories;
if the capability attributes of each category comprise the unique capability attribute with the current attribute level being the same as the capability level, determining the capability attribute as the target capability attribute;
if the capability attributes of each category comprise a plurality of capability attributes of which the current attribute level is the same as the capability level, determining the capability attribute with the highest attribute value in the plurality of capability attributes as the target capability attribute;
and acquiring the action image sequence corresponding to the interactive action unlocked when the target capability attribute is promoted to the current attribute level.
23. The apparatus of claim 14, the fusion module to:
acquiring decorative elements set for the virtual interactive object by a user;
the decorative elements set by a user are subjected to enhanced display at preset positions in the action image sequence to generate a dynamic virtual image corresponding to the virtual interaction object;
rendering and fusing the dynamic virtual image and the real scene picture to generate an interactive image sequence corresponding to the dynamic virtual image.
24. The apparatus of claim 14, the fusion module to:
arranging and combining the decorative elements in a preset decorative element set to generate a plurality of decorative element matches;
performing enhanced display of each generated decoration element collocation at a preset position in the action image sequence, so as to generate a plurality of virtual images corresponding to the virtual interaction objects;
and rendering and fusing the plurality of virtual images with the real scene picture respectively to generate an interactive image sequence corresponding to the plurality of virtual images.
25. The apparatus of claim 23, the generation module to:
acquiring an interactive image sequence corresponding to the current virtual image of the virtual interactive object from the interactive image sequences corresponding to the plurality of virtual images;
and generating an interaction record with the virtual interaction object based on the acquired interaction image sequence.
26. The apparatus of claim 25, the acquisition module further to:
acquiring a decoration element updated by a user for the virtual interactive object;
the device further comprises:
the updating module is used for enhancing and displaying the decorative elements updated by the user at preset positions in the action image sequence so as to update the virtual image of the virtual interactive object; and acquiring an interactive image sequence corresponding to the updated virtual image from the interactive image sequences corresponding to the plurality of virtual images, and synchronously updating the interactive image sequence in the interactive record based on the acquired interactive image sequence.
27. An electronic device, the electronic device comprising:
a processor;
a memory for storing machine executable instructions;
wherein, by reading and executing machine-executable instructions stored by the memory that correspond to augmented reality-based interaction logic, the processor is caused to:
enhancing and displaying a virtual interactive object in the scanned live-action picture; the virtual interactive object is a dynamic virtual cartoon image which can make diversified interactive actions to interact with a user; the virtual interactive object comprises a plurality of categories of capability attributes; wherein, the ability attribute of each category is divided into a plurality of attribute grades; when the attribute level is improved, a new interaction action is triggered to be unlocked, and the capability attribute of each category is respectively associated with an article for improving the capability attribute;
responding to an interactive operation initiated by a user with the virtual interactive figure, and determining whether the interactive operation triggers a level-up event corresponding to the capability attribute;
if the interactive operation triggers a level-up event corresponding to the capability attribute, acquiring an action image sequence corresponding to the unlocked interactive action;
rendering and fusing the acquired action image sequence and the live-action picture to generate an interactive image sequence, and generating an interactive record with the virtual interactive object based on the interactive image sequence.
CN202110267750.5A 2018-08-27 2018-08-27 Interaction method and device based on augmented reality Active CN113112614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110267750.5A CN113112614B (en) 2018-08-27 2018-08-27 Interaction method and device based on augmented reality

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810982275.8A CN109345637B (en) 2018-08-27 2018-08-27 Interaction method and device based on augmented reality
CN202110267750.5A CN113112614B (en) 2018-08-27 2018-08-27 Interaction method and device based on augmented reality

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201810982275.8A Division CN109345637B (en) 2018-08-27 2018-08-27 Interaction method and device based on augmented reality

Publications (2)

Publication Number Publication Date
CN113112614A true CN113112614A (en) 2021-07-13
CN113112614B CN113112614B (en) 2024-03-19

Family

ID=65291641

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110267750.5A Active CN113112614B (en) 2018-08-27 2018-08-27 Interaction method and device based on augmented reality
CN201810982275.8A Active CN109345637B (en) 2018-08-27 2018-08-27 Interaction method and device based on augmented reality

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201810982275.8A Active CN109345637B (en) 2018-08-27 2018-08-27 Interaction method and device based on augmented reality

Country Status (3)

Country Link
CN (2) CN113112614B (en)
TW (1) TWI721466B (en)
WO (1) WO2020042786A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112614B (en) * 2018-08-27 2024-03-19 创新先进技术有限公司 Interaction method and device based on augmented reality
CN110430553B (en) * 2019-07-31 2022-08-16 广州小鹏汽车科技有限公司 Interaction method and device between vehicles, storage medium and control terminal
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN110941341B (en) * 2019-11-29 2022-02-01 维沃移动通信有限公司 Image control method and electronic equipment
CN111083509B (en) * 2019-12-16 2021-02-09 腾讯科技(深圳)有限公司 Interactive task execution method and device, storage medium and computer equipment
CN113041615A (en) * 2019-12-27 2021-06-29 阿里巴巴集团控股有限公司 Scene presenting method, device, client, server, equipment and storage medium
CN113657891A (en) * 2020-01-11 2021-11-16 支付宝(杭州)信息技术有限公司 Interaction method and device based on electronic certificate and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110104676A (en) * 2010-03-17 2011-09-23 에스케이텔레콤 주식회사 Augmented reality system and method for realizing interaction between virtual object using the plural marker
US20110316845A1 (en) * 2010-06-25 2011-12-29 Palo Alto Research Center Incorporated Spatial association between virtual and augmented reality
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
CN105976417A (en) * 2016-05-27 2016-09-28 腾讯科技(深圳)有限公司 Animation generating method and apparatus
US20160284134A1 (en) * 2015-03-24 2016-09-29 Intel Corporation Augmentation modification based on user interaction with augmented reality scene
CN106502671A (en) * 2016-10-21 2017-03-15 苏州天平先进数字科技有限公司 A kind of lock screen system interactive based on virtual portrait
CN106888203A (en) * 2016-12-13 2017-06-23 阿里巴巴集团控股有限公司 Virtual objects distribution method and device based on augmented reality
CN106991723A (en) * 2015-10-12 2017-07-28 莲嚮科技有限公司 Interactive house browsing method and system of three-dimensional virtual reality
CN107783648A (en) * 2016-08-31 2018-03-09 宅妆股份有限公司 interaction method and system

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930284B (en) * 2009-06-23 2014-04-09 腾讯科技(深圳)有限公司 Method, device and system for implementing interaction between video and virtual network scene
US8743244B2 (en) * 2011-03-21 2014-06-03 HJ Laboratories, LLC Providing augmented reality based on third party information
JP6192264B2 (en) * 2012-07-18 2017-09-06 株式会社バンダイ Portable terminal device, terminal program, augmented reality system, and clothing
US20160012136A1 (en) * 2013-03-07 2016-01-14 Eyeducation A.Y. LTD Simultaneous Local and Cloud Searching System and Method
US10165199B2 (en) * 2015-09-01 2018-12-25 Samsung Electronics Co., Ltd. Image capturing apparatus for photographing object according to 3D virtual object
CN105959718A (en) * 2016-06-24 2016-09-21 乐视控股(北京)有限公司 Real-time interaction method and device in video live broadcasting
CN106131536A (en) * 2016-08-15 2016-11-16 万象三维视觉科技(北京)有限公司 A kind of bore hole 3D augmented reality interactive exhibition system and methods of exhibiting thereof
CN106920079B (en) * 2016-12-13 2020-06-30 阿里巴巴集团控股有限公司 Virtual object distribution method and device based on augmented reality
CN107741809B (en) * 2016-12-21 2020-05-12 腾讯科技(深圳)有限公司 Interaction method, terminal, server and system between virtual images
CN107204031B (en) * 2017-04-27 2021-08-24 腾讯科技(深圳)有限公司 Information display method and device
CN107274465A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 A kind of main broadcaster methods, devices and systems of virtual reality
CN108021896B (en) * 2017-12-08 2019-05-10 北京百度网讯科技有限公司 Image pickup method, device, equipment and computer-readable medium based on augmented reality
CN108229937A (en) * 2017-12-20 2018-06-29 阿里巴巴集团控股有限公司 Virtual objects distribution method and device based on augmented reality
CN113112614B (en) * 2018-08-27 2024-03-19 创新先进技术有限公司 Interaction method and device based on augmented reality

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110104676A (en) * 2010-03-17 2011-09-23 에스케이텔레콤 주식회사 Augmented reality system and method for realizing interaction between virtual object using the plural marker
US20110316845A1 (en) * 2010-06-25 2011-12-29 Palo Alto Research Center Incorporated Spatial association between virtual and augmented reality
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US20160284134A1 (en) * 2015-03-24 2016-09-29 Intel Corporation Augmentation modification based on user interaction with augmented reality scene
CN106991723A (en) * 2015-10-12 2017-07-28 莲嚮科技有限公司 Interactive house browsing method and system of three-dimensional virtual reality
CN105976417A (en) * 2016-05-27 2016-09-28 腾讯科技(深圳)有限公司 Animation generating method and apparatus
CN107783648A (en) * 2016-08-31 2018-03-09 宅妆股份有限公司 interaction method and system
CN106502671A (en) * 2016-10-21 2017-03-15 苏州天平先进数字科技有限公司 A kind of lock screen system interactive based on virtual portrait
CN106888203A (en) * 2016-12-13 2017-06-23 阿里巴巴集团控股有限公司 Virtual objects distribution method and device based on augmented reality

Also Published As

Publication number Publication date
CN113112614B (en) 2024-03-19
CN109345637B (en) 2021-01-26
TW202009682A (en) 2020-03-01
CN109345637A (en) 2019-02-15
WO2020042786A1 (en) 2020-03-05
TWI721466B (en) 2021-03-11

Similar Documents

Publication Publication Date Title
CN109345637B (en) Interaction method and device based on augmented reality
CN111260545B (en) Method and device for generating image
CN109688451B (en) Method and system for providing camera effect
US10970843B1 (en) Generating interactive content using a media universe database
CN111767554B (en) Screen sharing method and device, storage medium and electronic equipment
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
US11513658B1 (en) Custom query of a media universe database
US11914836B2 (en) Hand presence over keyboard inclusiveness
WO2024077909A1 (en) Video-based interaction method and apparatus, computer device, and storage medium
CN113487709A (en) Special effect display method and device, computer equipment and storage medium
KR101977893B1 (en) Digital actor managing method for image contents
CN113727039B (en) Video generation method and device, electronic equipment and storage medium
CN113271486B (en) Interactive video processing method, device, computer equipment and storage medium
CN114610998A (en) Meta-universe virtual character behavior personalized information recommendation method and system
CN112866577B (en) Image processing method and device, computer readable medium and electronic equipment
CN110089076B (en) Method and device for realizing information interaction
CN114222995A (en) Image processing method and device and electronic equipment
CN115202481A (en) Object interaction method, intelligent terminal, electronic device and storage medium
CN116843802A (en) Virtual image processing method and related product
CN113779293A (en) Image downloading method, device, electronic equipment and medium
CN113763568A (en) Augmented reality display processing method, device, equipment and storage medium
CN113707179A (en) Audio identification method, device, equipment and medium
JP6839771B2 (en) Video correction method and system by correction pattern analysis
CN111510582A (en) Apparatus for providing image having virtual character
CN115311400A (en) Social interaction method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant