WO2020042786A1 - Augmented reality-based interaction method and apparatus - Google Patents

Augmented reality-based interaction method and apparatus

Info

Publication number
WO2020042786A1
Authority
WO
WIPO (PCT)
Prior art keywords
interactive
capability
attribute
virtual
image
Prior art date
Application number
PCT/CN2019/096094
Other languages
English (en)
French (fr)
Inventor
吴瑾
段青龙
季婧
吴承军
程佳慧
王亚迪
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited
Publication of WO2020042786A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Definitions

  • the present application relates to the field of augmented reality, and in particular, to an interactive method and device based on augmented reality.
  • Augmented Reality (AR) is a technology that superimposes virtual data, such as images, videos, and 3D models, onto pictures of the real scene. Because AR can enhance the display effect of the real-scene picture, introducing AR technology can provide users with a completely new interactive experience.
  • This specification proposes an interactive method based on augmented reality, which includes:
  • This specification also proposes an interactive device based on augmented reality, which includes:
  • an obtaining module that, in response to a user-initiated interactive operation with the virtual interactive object, obtains an action image sequence corresponding to an interactive action of the virtual interactive object;
  • a fusion module that renders and fuses the obtained action image sequence with the real-scene picture to generate an interactive image sequence; and
  • a generating module that generates an interaction record with the virtual interactive object based on the interactive image sequence.
  • This specification also proposes an electronic device, which includes:
  • a memory for storing machine-executable instructions; and
  • a processor that, by reading and executing the machine-executable instructions, is caused to:
  • Because the virtual interactive object is enhanced and displayed in the real-scene picture, it can be incorporated into the real scene to interact with the user, which enhances the user's sense of realism when interacting with the virtual interactive object.
  • Moreover, a user-initiated interactive operation with the virtual interactive object triggers the rendering and fusion of the action image sequence corresponding to the object's interactive action with the real-scene picture, and an interaction record is generated from the fusion result. The interactive operation the user initiates therefore ultimately affects the content of the generated interaction record, so the user can trigger different interactive actions of the virtual interactive object by initiating different forms of interactive operations, which are fused with real-scene pictures to generate rich interaction records.
  • FIG. 1 is a flowchart of an augmented reality-based interaction method according to an exemplary embodiment.
  • FIG. 2 is a schematic diagram of an enhanced display of a virtual interaction object in a real scene picture according to an exemplary embodiment.
  • FIG. 3 is a schematic diagram of a group photo interaction with a virtual interactive object according to an exemplary embodiment.
  • FIG. 4 is a schematic diagram of a capability enhancement interaction with a virtual interactive object according to an exemplary embodiment.
  • FIG. 5 is a schematic diagram of decorating a virtual interactive object according to an exemplary embodiment.
  • FIG. 6 is a schematic diagram of a capability level of a virtual interactive object according to an exemplary embodiment.
  • FIG. 7 is a schematic diagram of viewing an interaction record with a virtual interactive object according to an exemplary embodiment.
  • FIG. 8 is a block diagram of an augmented reality-based interactive device according to an exemplary embodiment.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
  • The purpose of this specification is to propose an interactive mode in which the user interacts with a virtual interactive object that is enhanced and displayed in the real scene, triggering the rendering and fusion of the action image sequence corresponding to the object's interactive action with the real-scene picture, thereby generating interaction records.
  • During implementation, the user can scan images of the real environment through the AR client, and the AR client can respond to a user-initiated operation by enhancing the display of the virtual interactive object in the scanned real-scene picture.
  • The above virtual interactive object may specifically be a dynamic virtual image (for example, a dynamic 3D cartoon image) capable of making various interactive actions to interact with the user, and a "call" button corresponding to the virtual interactive object may be provided in the scanned real-scene picture.
  • The user can operate the "call" button, for example by clicking it, to trigger the AR client to display the virtual interactive object at a relative position in the real-scene picture.
  • Further, the user can interact with the virtual interactive object by initiating a corresponding interactive operation. After detecting the interactive operation, the AR client can respond by obtaining an action image sequence (for example, an action animation) corresponding to the interactive action of the virtual interactive object, and by rendering and fusing the obtained action image sequence with the real-scene picture to generate an interactive image sequence; then, based on the generated interactive image sequence, an interaction record with the virtual interactive object may be further generated.
  • In this way, by interacting with the virtual interactive object, the user can trigger the rendering and fusion of the action image sequence corresponding to the object's interactive action with the real-scene picture, and the interactive image sequence generated from the fusion is used to further generate interaction records.
  • On the one hand, because the virtual interactive object is enhanced and displayed in the real-scene picture, it can be integrated into the real scene to interact with the user, which enhances the user's sense of realism when interacting with the virtual interactive object.
  • On the other hand, the user-initiated interactive operation triggers the rendering and fusion of the action image sequence corresponding to the object's interactive action with the real-scene picture, and an interaction record is generated from the fusion result; the interactive operation therefore ultimately affects the content of the generated interaction record, so the user can trigger different interactive actions by initiating different forms of interactive operations, which are fused with real-scene pictures to generate rich interaction records.
  • FIG. 1 shows an augmented reality-based interaction method according to an embodiment of this application, applied to an AR client. The method performs the following steps:
  • Step 102: enhance and display a virtual interactive object in the scanned real-scene picture;
  • Step 104: in response to a user-initiated interactive operation with the virtual interactive object, obtain an action image sequence corresponding to an interactive action of the virtual interactive object;
  • Step 106: render and fuse the obtained action image sequence with the real-scene picture to generate an interactive image sequence;
  • Step 108: generate an interaction record with the virtual interactive object based on the interactive image sequence.
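The four steps above can be sketched as a minimal client-side flow. All names here (`ARServer`, `fetch_action_sequence`, the frame values) are hypothetical illustrations rather than APIs from the patent, and the "rendering and fusion" step is reduced to pairing frames:

```python
class ARServer:
    """Hypothetical stand-in for the background AR server (not a real API)."""
    ACTIONS = {"group_photo": ["wave1", "wave2"], "default": ["idle"]}

    def fetch_action_sequence(self, op):
        # Return the action image sequence (animation frames) for the
        # triggered interactive action, falling back to a default action.
        return self.ACTIONS.get(op, self.ACTIONS["default"])


def handle_interaction(scene_frames, op, server):
    # Step 104: obtain the action image sequence for the triggered action.
    action_frames = server.fetch_action_sequence(op)
    # Step 106: "render and fuse" each action frame with a real-scene frame
    # (reduced here to pairing; the real fusion is done by the AR engine).
    interactive_seq = list(zip(scene_frames, action_frames))
    # Step 108: generate an interaction record from the interactive sequence.
    return {"operation": op, "frames": interactive_seq}
```

In this sketch the interaction record is a plain dictionary; the specification leaves the record's concrete form open (for example, a diary presentation).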
  • The AR client includes client software developed based on AR technology or integrated with an AR function; for example, the AR client may be an APP integrated with an AR service function. The AR client can scan the real environment in which the user is located through a built-in image-scanning function to obtain the real-scene picture, and, through the AR engine it provides, visually render the virtual data pushed by the AR server in the background (such as the action image sequences that need to be enhanced and displayed in the real-scene picture) and superimpose and fuse it with the real-scene picture, thereby completing the enhanced display of the virtual data in the real-scene picture.
  • The AR server includes a server, a server cluster, or a distributed service platform built on a server cluster that provides services to the AR client; for example, the AR server may be a distribution platform that provides docking services for APPs that integrate AR functions. The AR server can perform content management on the virtual data that needs to be displayed in the real-scene picture scanned by the AR client, and push that virtual data to the AR client.
  • The above virtual interactive object includes a dynamic virtual image of any form capable of making various interactive actions to interact with the user, for example, a dynamic 3D cartoon image capable of making various interactive actions.
  • The interactive actions that the virtual interactive object can make can be designed, based on actual interaction needs, by the operator of the AR client or by a third-party ISV (Independent Software Vendor) that provides services to that operator, which then generates the corresponding action image sequences (that is, animation files related to the interactive actions).
  • The generated action image sequences corresponding to the interactive actions of the virtual interactive object can be uniformly maintained and managed by the AR server as virtual data that needs to be displayed in the real-scene picture.
  • The above interaction record is used to record the result of the interaction between the user and the virtual interactive object; the specific form of the interaction record is not limited in this specification. For example, when the virtual interactive object is a dynamic 3D cartoon image, the interaction record may be abstracted into a diary of the 3D cartoon image and presented to the user in that form.
  • During implementation, an AR scan portal can be provided in the user interface of the AR client; the user can trigger the portal to enter the AR scanning interface and initiate an image scan of the real environment in which the user is located. For example, a "scan" entry may be provided on the user homepage of the AR client, and the user can enter the AR scanning interface by triggering this entry to perform image scanning of the real environment.
  • The AR client can output the scanned real-scene picture in real time in the AR scanning interface, and display by default, in the output real-scene picture, an operation portal for triggering the enhanced display of the virtual interactive object, so that the user can use this portal to initiate the display of the virtual interactive object in the real-scene picture output by the AR scanning interface.
  • FIG. 2 is a schematic diagram of an enhanced display of a virtual interactive object in a real scene picture shown in the present specification.
  • As shown in FIG. 2, assuming the virtual interactive object is a 3D cartoon image named "Xingbao", the above operation portal may specifically be a "call" button corresponding to "Xingbao". The user can trigger the "call" button, for example by clicking it, to summon "Xingbao" into the real-scene picture for enhanced display.
  • After the AR client detects the user's trigger operation on the "call" button, it can obtain from the AR server an action image sequence (such as an animation file) related to an interactive action of the virtual interactive object, and enhance the display of the obtained action image sequence at a relative position in the real-scene picture.
  • During implementation, an interactive action may be set for default display; after the AR client detects the user's trigger operation on the "call" button, it can obtain from the AR server the action image sequence corresponding to the default interactive action, and then enhance the display of the obtained action image sequence in the real-scene picture.
  • After the virtual interactive object is enhanced and displayed in the real-scene picture, the user can interact with it by initiating a specific interactive operation in the real-scene picture.
  • In one case, the user can complete the interaction by interacting directly with the virtual interactive object; for example, the user can trigger the virtual interactive object to make different interactive actions by touching it in the real-scene picture. In this case, the interactive action the object makes can be triggered randomly by the user's touch operation: after the AR client detects a touch operation on the virtual interactive object, it can obtain the action image sequence corresponding to an interactive action randomly allocated by the AR server, and then enhance the display of the obtained action image sequence in the real-scene picture.
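The random allocation of a touch-triggered action can be sketched as follows; the action names are illustrative assumptions, not from the patent:

```python
import random

# Hypothetical library of interactive actions the virtual object can make.
ACTION_LIBRARY = ["wave", "jump", "spin", "bow"]

def allocate_action(rng=None):
    # The AR server randomly allocates one interactive action in response
    # to a touch; an optional rng makes the choice reproducible for testing.
    rng = rng or random
    return rng.choice(ACTION_LIBRARY)
```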
  • In another case, the user may also interact with the virtual interactive object in specific interaction scenarios through several interactive portals that the AR client enhances and displays in the real-scene picture. Each interactive portal can correspond to an interaction scenario, defined by the operator of the AR client, in which the user interacts with the virtual interactive object; users can therefore trigger these portals to initiate the corresponding interactive operations in the real-scene picture and interact with the virtual interactive object in different scenarios.
  • The interaction methods and specific interaction forms between users and virtual interactive objects are not limited in this specification; during implementation, the operator of the AR client can flexibly customize a variety of interaction scenarios based on actual interaction needs.
  • For example, the user can initiate a group-photo interaction with "Xingbao" against the real scene, triggering the AR client to render and fuse the action image sequence corresponding to the interactive action currently displayed by "Xingbao" with the real-scene picture, and an interaction record of the group-photo interaction is generated.
  • FIG. 3 is a schematic diagram of a group photo interaction with a virtual interactive object shown in the specification.
  • As shown in FIG. 3, the AR client can enhance the display of a "group photo interaction" button in the real-scene picture; the user can trigger this button, for example by clicking it, to initiate a group-photo interactive operation with "Xingbao" in the real-scene picture.
  • After detecting the user's trigger operation on the "group photo interaction" button, the AR client can respond to the user's interactive operation of taking a photo with "Xingbao" by obtaining from the AR server the action image sequence corresponding to the interactive action currently displayed by "Xingbao". The AR engine provided by the AR client can then render and fuse the obtained action image sequence with the real-scene picture to generate an interactive image sequence; based on the generated interactive image sequence, an interaction record of the group-photo interaction with "Xingbao" can be further generated.
  • In this example, the user can scan the real environment through the AR client, summon "Xingbao" into the scanned real-scene picture, and then initiate a group-photo interaction with "Xingbao" in the real-scene picture. This triggers the AR client to render and fuse the animation file corresponding to the interactive action currently displayed by "Xingbao" with the real-scene picture to generate a group-photo animation file. The AR client can then generate a "Xingbao travel diary" based on the group-photo animation file, as a record of the interaction between the user and "Xingbao".
  • In another example, multiple categories of capability attributes can be set for "Xingbao", several attribute levels can be divided for the capability attributes of each category, and interaction scenarios that unlock new interactive actions can be set for "Xingbao" when its attribute levels are raised.
  • In this scenario, an interactive operation that can improve "Xingbao's" capability attributes can be defined; the user can initiate that interactive operation in the real scene to trigger level-promotion events for the capability attributes of each category of "Xingbao" and unlock new interactive actions for "Xingbao". The AR client can then obtain the new interactive action unlocked for "Xingbao", and, after rendering and fusing the action image sequence corresponding to the unlocked action with the real-scene picture, generate an interaction record of improving "Xingbao's" capability attributes.
  • That is, an improvement of a "Xingbao" capability attribute triggers the AR client to generate an interaction record documenting that improvement, and improvements of different capability attributes will trigger the AR client to generate interaction records with different content.
  • The interactive operation defined for improving "Xingbao's" capability attributes may specifically be an interactive operation of performing image recognition on a target item in the real-scene picture. That is, the user can initiate image recognition of a target item in the real scene to trigger level-promotion events for the capability attributes of each category of "Xingbao" and unlock new interactive actions for "Xingbao".
  • In this case, the capability attributes of each category of "Xingbao" can each be associated with a set of items used to improve that capability attribute; the user can scan, through the AR client, the items in the real scene associated with the capability attributes of each category, helping "Xingbao" explore the real world and triggering level-promotion events corresponding to those capability attributes.
  • In another implementation, the interactive operation defined for improving the virtual interactive object's capability attributes may instead be an interactive operation of "calling" the virtual interactive object at a new LBS location, with the capability attributes of each category each associated with a set of LBS locations used to improve that attribute; users can then use the AR client to summon the virtual interactive object at new LBS locations to improve its capability attributes.
  • The following takes as an example the case where the interactive operation defined for improving "Xingbao's" capability attributes is image recognition of a target item in the real scene.
  • FIG. 4 is a schematic diagram of capability enhancement interaction with a virtual interactive object shown in the present specification.
  • As shown in FIG. 4, the AR client can enhance and display an "Explore the World" button in the real-scene picture. The user can trigger this button, for example by clicking it, to initiate the interactive operation of image recognition on target items in the real scene, helping "Xingbao" explore the real world and improving "Xingbao's" capability attributes.
  • After detecting the user's trigger operation on the "Explore the World" button, the AR client can enter the AR image-scanning interface in response, scan the target item to collect its image features, and perform image recognition on the target item based on the collected image features.
  • During implementation, the AR client may synchronize the image-feature sample library stored on the AR server to the local device in advance; the sample library stores a large number of image-feature samples of predefined items. After extracting the image features of the target item, the AR client can match them for similarity against the image-feature samples in the library; when the extracted image features match the image-feature samples of any predefined item in the library, it can be confirmed that the target item has been successfully identified from the scanned real-scene picture.
  • Of course, the AR client can also upload the image features of the target item to the AR server, which completes the image recognition in the same manner as above and returns the recognition result to the AR client.
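A minimal sketch of the local matching step, assuming items are represented by feature vectors and similarity is cosine similarity; the patent does not specify the feature type or metric, and the library contents and threshold below are invented for illustration:

```python
import math

# Hypothetical local copy of the server's image-feature sample library:
# predefined item name -> illustrative feature vector.
SAMPLE_LIBRARY = {
    "basketball": [0.9, 0.1, 0.3],
    "paintbrush": [0.2, 0.8, 0.5],
}

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize(features, threshold=0.95):
    # Return the best-matching predefined item, or None when no sample
    # in the library reaches the similarity threshold.
    best_item, best_sim = None, threshold
    for item, sample in SAMPLE_LIBRARY.items():
        sim = cosine(features, sample)
        if sim >= best_sim:
            best_item, best_sim = item, sim
    return best_item
```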
  • The AR client then confirms, based on the image recognition result, whether the target item matches an item associated with the capability attribute of some category of "Xingbao".
  • If the target item matches an item associated with a capability attribute, the AR client may increase the attribute value of that capability attribute by a first preset amplitude; if it does not, the attribute values of the capability attributes of each category may instead be simultaneously increased by a second preset amplitude, where the second preset amplitude may be much smaller than the first. That is, when the scanned item matches an item associated with a capability attribute, the value of that attribute is increased by a large margin; when it does not, the values of all categories are simultaneously increased by a small margin. Alternatively, when the item matches no associated item, the attribute values may not be improved in any form; this is not specifically limited in this specification.
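The first/second-amplitude rule just described can be sketched as follows; the amplitude values and category names are illustrative assumptions:

```python
# Hypothetical preset amplitudes; the patent only requires that the second
# amplitude be much smaller than the first.
FIRST_AMPLITUDE = 50
SECOND_AMPLITUDE = 5

def update_attributes(attrs, matched_category):
    # attrs: capability-attribute name -> current attribute value.
    attrs = dict(attrs)  # do not mutate the caller's mapping
    if matched_category in attrs:
        # Large increase for the capability attribute whose associated
        # item the scanned target item matched.
        attrs[matched_category] += FIRST_AMPLITUDE
    else:
        # Small simultaneous increase for every category when no
        # associated item matched.
        for cat in attrs:
            attrs[cat] += SECOND_AMPLITUDE
    return attrs
```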
  • During implementation, the AR client may further display a "decorate" button in the real-scene picture; the user may trigger the "decorate" button, for example by clicking it, to decorate the avatar of "Xingbao" enhanced and displayed in the real-scene picture and customize a favorite avatar for "Xingbao".
  • After detecting the user's trigger operation on the "decorate" button, the AR client can enter the list of decoration elements configured for "Xingbao"; the user can then select favorite decoration elements from the list to decorate "Xingbao" and customize a favorite avatar.
  • The decoration element list shown in FIG. 5 includes decoration-element types such as "head ornaments", "neck ornaments", and "apparel".
  • During implementation, the decoration elements provided in the above list may include a small number of elements that the user can select directly, as well as several elements that must first be unlocked; each locked decoration element may be associated with one or more designated items.
  • In this case, the AR client may also determine, based on the image recognition result, whether the target item is a designated item associated with a decoration element that has not yet been unlocked; if it is, the AR client may obtain the decoration element associated with that designated item, add it to the list of decoration elements, and unlock it in the list.
  • In this way, when the user improves "Xingbao's" capability attributes by performing image recognition on target items in the real-scene picture, the user also has a certain probability of obtaining decoration elements for decorating "Xingbao", which can enhance the user's interactive experience.
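The unlock check described above might look like the following sketch, where the mapping from designated items to decoration elements is invented for illustration:

```python
# Hypothetical mapping: designated item -> locked decoration element.
DESIGNATED_ITEMS = {"red_scarf_box": "red_scarf", "party_hat_box": "party_hat"}

def try_unlock(decoration_list, recognized_item):
    # If the recognized target item is a designated item associated with a
    # not-yet-unlocked decoration element, add and unlock that element.
    element = DESIGNATED_ITEMS.get(recognized_item)
    if element is not None and element not in decoration_list:
        decoration_list.append(element)
    return decoration_list
```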
  • In addition to assigning attribute levels to the capability attributes of each category of "Xingbao", the AR client can also perform a weighted calculation on the attribute values of those capability attributes to obtain the total capability value of "Xingbao", calculate "Xingbao's" capability level from the total capability value, and then output the total capability value and the calculated capability level to the user in the real-scene picture.
  • During implementation, the AR client may enhance and display a "capability level" button in the real-scene picture; the user may trigger this button to view the attribute values and attribute levels of the capability attributes of each category of "Xingbao", as well as the total capability value calculated by the AR client and the capability level calculated from it.
  • After detecting the user's trigger operation on the "capability level" button, the AR client can enter the "Xingbao" capability-attribute interface and display to the user, through this interface, the attribute values of "Xingbao's" capability attributes, the calculated total capability value, and the capability level calculated from the total capability value.
  • The capability attributes set for "Xingbao" shown in FIG. 6 include "life", "sports", "creation", and "social", and the above capability-attribute interface displays the attribute values of these four attributes. The AR client can set a weight coefficient for each of the four capability attributes, multiply the attribute value of each by its weight coefficient, and sum the results to obtain the total capability value of "Xingbao"; "Xingbao's" capability level is then calculated from the total capability value. The calculated total capability value and capability level of "Xingbao" are displayed in the capability-attribute interface.
  • the total capability value of "Xingbao” it is also possible to divide several capability levels and set a threshold for each capability level. If the total capability value obtained by weighting reaches the threshold corresponding to a certain capability level After that, it can be considered that the total capability value of "Xingbao” has been increased to this level; for example, as shown in Fig. 6, the current capability level of "Xingbao” is level5, and the threshold of the next level of level6 is 800. That is, when the current total power value of "Xingbao" reaches 800, the ability level of "Xingbao" will be increased to level6.
  • The level-promotion events corresponding to the capability attributes of each category of "Xingbao", triggered when the user scans through the AR client the items in the real-scene picture associated with those capability attributes, can specifically include the following types of events:
  • In one case, after improving an attribute value, the AR client can compare the improved attribute value with the thresholds corresponding to the attribute levels of that capability attribute to determine whether the improved value triggers a promotion of the attribute level. For example, as shown in FIG. 6, suppose the improved value of the "sports" capability attribute of "Xingbao" is 648, and the threshold of the next attribute level of the "sports" attribute is 800; the AR client compares 648 with 800, finds that the improved value is smaller than the threshold, and concludes that the improved value does not trigger a promotion of the attribute level of "Xingbao's" "sports" capability attribute.
  • If a promotion is triggered, the AR client can further determine, among the capability attributes of each category of "Xingbao", the target capability attribute whose attribute level has been raised; in one case, the capability attribute whose level was raised can be determined directly as the target capability attribute.
  • For example, suppose the AR client determines, based on the image recognition result, that the target item scanned by the user matches an item associated with the "life" capability attribute of "Xingbao"; the attribute value of the "life" capability attribute can then be increased by the first preset amplitude. In this case, if raising the attribute value of the "life" capability attribute triggers a promotion of its attribute level, the "life" capability attribute of "Xingbao" can be determined as the target capability attribute.
  • If the capability attributes include multiple categories whose attribute levels were raised, that is, the improved attribute values trigger promotions of the attribute levels of multiple capability attributes of "Xingbao", the attribute values of those promoted capability attributes can be further compared, and the capability attribute with the highest attribute value among them determined as the target capability attribute.
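The target-attribute selection just described can be sketched as follows; the input shape is a hypothetical illustration:

```python
# Hypothetical sketch: among the capability attributes whose attribute level
# was raised, the one with the highest attribute value becomes the target.
def pick_target_attribute(promotions):
    # `promotions` maps attribute name -> (new_value, level_was_raised).
    raised = {name: value for name, (value, up) in promotions.items() if up}
    if not raised:
        return None  # no attribute level was raised
    return max(raised, key=raised.get)
```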
  • For example, suppose the AR client determines, based on the image recognition result, that the target item scanned by the user matches none of the items associated with the capability attributes of each category of "Xingbao"; the attribute values of "life", "sports", "creation", and "social" can then be simultaneously increased by the second preset amplitude.
  • the AR client determines the target capability attribute whose attribute level has been improved in the manner shown above, it can obtain the target capability attribute's attribute level promotion from the AR server, and the unlocked latest interactive action corresponds to The action image sequence, and then the obtained action image sequence is rendered and fused with the real scene image to generate an interactive image sequence, and based on the generated interactive image sequence, an interactive record for enhancing the interaction with "Xingbao" is generated.
  • the user can scan the real environment through the AR client, and summon "Xingbao” in the scanned real-life picture, and then initiate the ability attributes associated with each category of "Xingbao” in the real-life picture.
  • the items are image-recognized to improve the ability attributes of "Xingbao” and unlock new interactive actions. If the level of the ability attributes of "Xingbao” is increased to unlock new interactive actions, AR clients can obtain new unlocked interactive actions.
  • the specific file formats of the motion image sequence and the interactive image sequence are not specifically limited in this specification; for example, in one embodiment, the motion image sequence and the interactive image sequence may both be pictures in the GIF format. file.
  • Taking the level promotion event being a promotion event of the overall capability level of "Xingbao" as an example: after the AR client completes the image recognition of the target item scanned by the user and, according to the image recognition result, raises the attribute values of the capability attributes of one or more categories of "Xingbao", it can further determine whether the raised attribute values trigger a promotion of the capability level of "Xingbao";
  • specifically, the AR client can perform a weighted calculation on the attribute values of the capability attributes of each category to obtain a total capability value, and recalculate the capability level of "Xingbao" based on the calculated total capability value; it can then compare the recalculated capability level of "Xingbao" with the current capability level of "Xingbao" to determine whether the capability level of "Xingbao" has been promoted.
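The weighted calculation described above can be sketched as below; the weights and level cut-offs are assumed values for illustration, since the specification does not fix them:

```python
# Sketch of deriving the overall capability level from a weighted total of
# per-category attribute values. Weights and cut-offs are assumed values.

def total_capability_value(attributes: dict, weights: dict) -> float:
    # weighted sum over the capability attributes of each category
    return sum(attributes[name] * weights[name] for name in attributes)

def capability_level(total: float, cutoffs: list) -> int:
    # the capability level is the number of cut-offs the total has reached
    return sum(1 for c in cutoffs if total >= c)

attributes = {"living": 648, "sports": 520, "creation": 300, "social": 410}
weights = {"living": 0.3, "sports": 0.3, "creation": 0.2, "social": 0.2}
cutoffs = [100, 250, 400, 480, 700]  # totals needed for levels 1..5

total = total_capability_value(attributes, weights)  # 492.4
old_level = 3
new_level = capability_level(total, cutoffs)         # 492.4 reaches 4 cut-offs
assert new_level > old_level                         # capability level promoted
```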
  • If the capability level of "Xingbao" has been promoted, the AR client can further determine, among the capability attributes of each category of "Xingbao", the target capability attribute whose current attribute level is the same as the capability level of "Xingbao";
  • if the capability attributes of each category include only a single capability attribute whose current attribute level is the same as the capability level of "Xingbao", that capability attribute can be determined as the target capability attribute;
  • for example, assuming the capability level of "Xingbao" is promoted to level 5, the AR client can check whether there is a level 5 capability attribute among the "living", "sports", "creation", and "social" capability attributes of "Xingbao"; assuming that only the "living" capability attribute is level 5 and all other capability attributes are below level 5, the "living" capability attribute can be determined as the target capability attribute.
  • If the capability attributes of each category include multiple capability attributes whose current attribute level is the same as the capability level of "Xingbao", the attribute values of those capability attributes can be further compared, and the capability attribute with the highest attribute value among them can be determined as the target capability attribute;
  • for example, assuming the capability level of "Xingbao" is promoted to level 5, the AR client can check whether there is a level 5 capability attribute among the "living", "sports", "creation", and "social" capability attributes of "Xingbao"; assuming that the "living", "sports", "creation", and "social" capability attributes of "Xingbao" are all level 5, then, among these capability attributes, the "living" capability attribute, which has the highest attribute value, is determined as the target capability attribute.
  • After the target capability attribute is determined, the action image sequence corresponding to the interactive action that was unlocked when the attribute level of the target capability attribute was promoted to its current level can be obtained from the AR server; the obtained action image sequence is then rendered and fused with the real scene picture to generate an interactive image sequence, and, based on the generated interactive image sequence, an interaction record of the capability improvement interaction with "Xingbao" is generated.
  • For example, the AR client can still generate a "capability improvement diary" for "Xingbao" based on the animation file produced by the fusion, as a record of the interaction between the user and "Xingbao".
  • Taking the case where the level promotion event is a simultaneous promotion of the attribute level of a capability attribute of some category of "Xingbao" and of the capability level of "Xingbao" as an example: after the AR client completes the image recognition of the target item scanned by the user and, according to the image recognition result, raises the attribute values of the capability attributes of one or more categories of "Xingbao", it can further determine whether the raised attribute values simultaneously trigger a promotion of the attribute level of a capability attribute of "Xingbao" and a promotion of the capability level of "Xingbao";
  • if both are triggered simultaneously, the AR client can further determine, among the capability attributes of each category, the target capability attribute whose current attribute level is the same as the capability level of "Xingbao";
  • if the capability attributes of each category include only a single capability attribute whose current attribute level is the same as the capability level of "Xingbao", that capability attribute can be determined as the target capability attribute;
  • if the capability attributes of each category include multiple capability attributes whose current attribute level is the same as the capability level of "Xingbao", the attribute values of those capability attributes can be further compared, and the capability attribute with the highest attribute value among them can be determined as the target capability attribute.
  • In this case, the action image sequence corresponding to the interactive action that was unlocked when the target capability attribute was promoted to its current attribute level can still be obtained from the AR server; the obtained action image sequence is then rendered and fused with the real scene picture to generate an interactive image sequence, and, based on the generated interactive image sequence, an interaction record of the capability improvement interaction with "Xingbao" is generated; the specific process will not be repeated.
  • The above description takes the group photo interaction scene with "Xingbao" and the capability improvement interaction with "Xingbao" as examples; it should be emphasized that these two scenes are used only for illustration.
  • In practical applications, the operator of the AR client can also flexibly customize other forms of interaction scenes based on actual interaction needs, which will not be enumerated one by one in this specification.
  • It should be noted that the action image sequence obtained by the AR client from the AR server usually does not include the decorative elements customized by the user for "Xingbao"; for example, the obtained action image sequence may be just the skeletal animation of "Xingbao" performing an interactive action; therefore, if the AR client directly renders and fuses the obtained action image sequence with the real scene picture, the final interactive image sequence will be missing the decorative elements customized by the user for "Xingbao", causing the avatar the user defined for "Xingbao" to be lost in the fusion result.
  • To avoid this, the decorative elements customized by the user for "Xingbao" can first be applied to the obtained action image sequence to generate the corresponding dynamic avatar, and the generated dynamic avatar can then be rendered and fused with the real scene picture.
  • In one embodiment, after the AR client obtains the action image sequence from the AR server, it can first obtain the decorative elements currently set by the user for "Xingbao" and perform enhanced display of the obtained decorative elements at preset positions in the action image sequence, to generate the dynamic avatar of "Xingbao" corresponding to the action image sequence;
  • the specific position of a decorative element in the action image sequence is usually determined by the body position that the decorative element modifies, and is not specifically limited in this specification; for example, a headwear item of "Xingbao" is enhanced at the head position in the action animation of "Xingbao".
  • The AR client can then render and fuse the generated dynamic avatar with the real scene picture to generate an interactive image sequence corresponding to the current dynamic avatar of "Xingbao".
  • Subsequently, if the user updates the decorative elements of "Xingbao", the AR client can obtain the decorative elements updated by the user for "Xingbao",
  • perform enhanced display of the updated decorative elements at the preset positions in the action image sequence, and synchronously update the dynamic avatar of "Xingbao".
  • After that, the AR client can re-render and fuse the updated dynamic avatar of "Xingbao" with the real scene picture to generate an interactive image sequence corresponding to the updated dynamic avatar of "Xingbao", and then, based on the interactive image sequence corresponding to the updated dynamic avatar of "Xingbao", synchronously update the interactive image sequence in the generated interaction record that corresponds to the original dynamic avatar of "Xingbao" before the update.
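One way to realize the overlay described above is to attach each decorative element at a preset anchor position in every frame of the action image sequence, and only then fuse the resulting avatar frames with the real scene frames. The frame representation and anchor names here are illustrative assumptions:

```python
# Sketch: attach user-set decorative elements at preset anchor positions in
# each frame of the action image sequence to build the dynamic avatar, then
# fuse the avatar frames with the real scene frames. Names are illustrative.

def apply_decorations(action_frames, decorations):
    """decorations maps an anchor name (e.g. 'head') to an element id."""
    avatar_frames = []
    for frame in action_frames:
        # each frame carries its own anchor coordinates, e.g. {'head': (12, 3)}
        overlays = {anchor: (decorations[anchor], frame["anchors"][anchor])
                    for anchor in decorations if anchor in frame["anchors"]}
        avatar_frames.append({**frame, "overlays": overlays})
    return avatar_frames

def fuse_with_scene(avatar_frames, scene_frames):
    # pair each avatar frame with the corresponding real scene frame
    return [{"scene": s, "avatar": a} for s, a in zip(scene_frames, avatar_frames)]

action = [{"pose": "wave", "anchors": {"head": (12, 3)}},
          {"pose": "jump", "anchors": {"head": (10, 1)}}]
avatar = apply_decorations(action, {"head": "red_hat"})
interactive = fuse_with_scene(avatar, ["scene_0", "scene_1"])
assert avatar[0]["overlays"]["head"] == ("red_hat", (12, 3))
```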
  • In another embodiment, the AR client may also arrange and combine the decorative elements in the above decorative element set in advance to generate several decorative element combinations;
  • after the AR client obtains the action image sequence from the AR server, it can perform enhanced display of each of the decorative element combinations generated above at the preset positions in the action image sequence, respectively, to generate several dynamic avatars of "Xingbao" corresponding to the action image sequence;
  • the AR client can then render and fuse each dynamic avatar of "Xingbao" with the real scene picture separately, to generate an interactive image sequence of "Xingbao" corresponding to each dynamic avatar.
  • In this case, when the AR client generates an interaction record for "Xingbao" based on the generated interactive image sequences, it can select, from the generated interactive image sequences of "Xingbao" corresponding to the respective dynamic avatars, the interactive image sequence corresponding to the current dynamic avatar of "Xingbao", and then generate the interaction record for "Xingbao" according to the selected interactive image sequence.
  • Subsequently, if the user updates the decorative elements of "Xingbao", the AR client can obtain the decorative elements updated by the user for "Xingbao",
  • perform enhanced display of the updated decorative elements at the preset positions in the action image sequence, and synchronously update the dynamic avatar of "Xingbao".
  • In this implementation, the AR client can also select, from the generated interactive image sequences of "Xingbao" corresponding to the respective dynamic avatars, the interactive image sequence corresponding to the updated dynamic avatar of "Xingbao", and then, based on the selected interactive image sequence, synchronously update the interactive image sequence in the generated interaction record that corresponds to the original dynamic avatar before the update.
  • Since the AR client has previously generated an interactive image sequence of "Xingbao" for each dynamic avatar, when the user updates the avatar of "Xingbao", the AR client no longer needs to re-render and fuse the updated dynamic avatar with the real scene picture; instead, it can directly select the interactive image sequence corresponding to the updated dynamic avatar of "Xingbao" from the generated interactive image sequences of "Xingbao" corresponding to the respective dynamic avatars, and immediately and synchronously update the interactive image sequence in the generated interaction record that corresponds to the dynamic avatar before the update.
  • In this way, whenever the user updates the avatar of "Xingbao", the content of the generated interaction record also changes in real time; for the user, the generated interaction record changes dynamically and perceptibly in real time, which can significantly improve the user's interaction experience.
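The pre-rendering strategy above trades storage for latency: every combination of decorative elements is fused with the real scene in advance, so an avatar update becomes a cache lookup rather than a re-render. A minimal sketch with illustrative names:

```python
# Sketch: pre-render an interactive image sequence for every combination of
# decorative elements so that an avatar update is a lookup, not a re-render.
import itertools

def render_and_fuse(combo, scene):
    # stand-in for the real render/fuse step, which is the expensive part
    return f"sequence[{'+'.join(combo)}|{scene}]"

def prerender_all(element_sets, scene):
    """element_sets: one list of alternatives per anchor, e.g. hats, scarves."""
    cache = {}
    for combo in itertools.product(*element_sets):
        cache[combo] = render_and_fuse(combo, scene)
    return cache

cache = prerender_all([["red_hat", "blue_hat"], ["scarf", "no_scarf"]], "park")
# when the user updates the avatar, just select from the cache
assert cache[("blue_hat", "scarf")] == "sequence[blue_hat+scarf|park]"
assert len(cache) == 4  # 2 hats x 2 scarves
```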
  • In this specification, the generated interaction record can also be displayed to the user;
  • in one embodiment, the AR client can perform enhanced display of a "view" button in the real scene picture, and the user can trigger the "view" button, for example by clicking it, to view the interaction record generated by the AR client.
  • After detecting the user's trigger operation on the "view" button, the AR client can enter the interaction record display interface and output the generated interaction record to the user through this interaction record display interface;
  • for example, the above "view" button may be a "diary" button, which the user can trigger to cause the AR client to output the diary viewing interface of "Xingbao", in order to view the "travel diary", "capability improvement diary", and so on generated by the AR client.
  • In the above technical solution, on the one hand, since the virtual interactive object is enhanced for display in the real scene picture, the virtual interactive avatar can be incorporated into the real scene picture to interact with the user; therefore, the realism of the user's interaction with the virtual interactive object can be improved.
  • On the other hand, since a user-initiated interactive operation with the virtual interactive object triggers the rendering and fusion of the action image sequence corresponding to an interactive action of the virtual interactive object with the real scene picture, and an interaction record is generated according to the rendering and fusion result, the interactive operations initiated by the user ultimately affect the content recorded in the generated interaction record; the user can therefore, by initiating different forms of interactive operations, trigger the fusion of diverse interactive actions of the virtual interactive avatar with the real scene picture, so as to generate interaction records with rich content.
  • In correspondence with the above method embodiments, this application also provides device embodiments.
  • This application proposes an augmented reality-based interactive device 80 applied to an AR client.
  • Referring to FIG. 9, FIG. 9 shows the hardware architecture involved in the electronic device that carries the augmented reality-based interactive device 80.
  • At the level of the hardware architecture, the augmented reality-based interactive device 80 can generally be understood as a computer program loaded into the memory and run by the CPU.
  • The device 80 includes:
  • a display module, which performs enhanced display of a virtual interactive object in the scanned real scene picture;
  • an obtaining module 802, which, in response to an interactive operation with the virtual interactive avatar initiated by a user, obtains an action image sequence corresponding to an interactive action of the virtual interactive object;
  • a fusion module 803, which renders and fuses the obtained action image sequence with the real scene picture to generate an interactive image sequence;
  • a generating module 804, which generates an interaction record with the virtual interactive avatar based on the interactive image sequence.
  • Optionally, the obtaining module 802:
  • obtains an action image sequence corresponding to the interactive action currently displayed by the virtual interactive object.
  • Optionally, the virtual interactive object includes capability attributes of multiple categories; the capability attributes of each category are divided into several attribute levels, and a promotion of an attribute level triggers the unlocking of a new interactive action;
  • the obtaining module 802:
  • obtains an action image sequence corresponding to the unlocked interactive action.
  • the interactive operation includes an operation of performing image recognition on a target item in the real scene picture; the capability attributes of each category are associated with items for improving the capability attributes, respectively;
  • the device 80 further includes:
  • a recognition module 805 (not shown in FIG. 8), which, in response to a user-initiated interactive operation with the virtual interactive avatar, performs image recognition on a target item in the real scene picture;
  • a lifting module 806 (not shown in FIG. 8), which determines, based on the result of the image recognition, whether the target item matches an item associated with the capability attribute of any category of the virtual interactive object, and, if the target item matches an item associated with the capability attribute of some category, raises the attribute value corresponding to the capability attribute of that category by the first preset amplitude.
  • The lifting module 806 further:
  • if the target item does not match any item associated with the capability attributes of the categories, raises the attribute value corresponding to the capability attribute of each category by the second preset amplitude;
  • wherein the second preset amplitude is lower than the first preset amplitude.
  • the device 80 further includes:
  • a decoration module 807 (not shown in FIG. 8), which determines, based on the result of the image recognition, whether the target item is a designated item, wherein the designated item is associated with a decorative element related to the virtual interactive object;
  • and which, if the target item is the designated item, obtains the decorative element associated with the designated item and adds it to a decorative element set related to the virtual interactive object, so that
  • the user can set decorative elements for the virtual interactive object based on the decorative elements in the set.
  • the device 80 further includes:
  • a calculation module 808 (not shown in FIG. 8), which performs a weighted calculation on the attribute values corresponding to the capability attributes of each category to obtain the capability value of the virtual interactive avatar, and calculates the capability level of the virtual interactive avatar based on the capability value;
  • an output module 809 (not shown in FIG. 8), which outputs the calculated capability level to the user in the real scene picture.
  • Optionally, the level promotion event includes:
  • a promotion event of the attribute level of a capability attribute, and/or a promotion event of the capability level;
  • If the level promotion event is a promotion event of the attribute level of a capability attribute, the obtaining module 802 further determines, among the capability attributes of each category, the target capability attribute whose attribute level has been promoted:
  • if the capability attributes of each category include only a single capability attribute whose attribute level has been promoted, the obtaining module determines that capability attribute as the target capability attribute;
  • if the capability attributes of each category include multiple capability attributes whose attribute levels have been promoted, it determines the capability attribute with the highest attribute value among them as the target capability attribute.
  • If the level promotion event is a promotion event of the capability level, the obtaining module 802 further determines the target capability attribute whose current attribute level is the same as the capability level:
  • if the capability attributes of each category include a unique capability attribute whose current attribute level is the same as the capability level, it determines that capability attribute as the target capability attribute;
  • if the capability attributes of each category include multiple capability attributes whose current attribute level is the same as the capability level, it determines the capability attribute with the highest attribute value among them as the target capability attribute;
  • and it obtains an action image sequence corresponding to the interactive action that is unlocked when the target capability attribute is promoted to the current attribute level.
  • the fusion module 803:
  • renders and fuses the obtained action image sequence with the real scene picture to generate an interactive image sequence;
  • the generating module 804:
  • generates an interaction record with the virtual interactive avatar based on the acquired interactive image sequence.
  • Optionally, the obtaining module 802 further: obtains the decorative elements currently set by the user for the virtual interactive object, and performs enhanced display of the decorative elements at preset positions in the action image sequence to generate the avatar of the virtual interactive object;
  • the device 80 further includes:
  • an update module 810 (not shown in FIG. 8), which performs enhanced display of decorative elements updated by the user at the preset positions in the action image sequence, to update the avatar of the virtual interactive object;
  • and which acquires an interactive image sequence corresponding to the updated avatar, and synchronously updates the interactive image sequence in the interaction record based on the acquired interactive image sequence.
  • Since the device embodiments substantially correspond to the method embodiments, for relevant parts reference may be made to the description of the method embodiments.
  • The device embodiments described above are only schematic. The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in this specification, and those of ordinary skill in the art can understand and implement it without creative effort.
  • The system, device, module, or unit described in the foregoing embodiments may be implemented by a computer chip or entity, or by a product having a certain function.
  • A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email sending and receiving device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • the electronic device includes a processor and a memory for storing machine-executable instructions; wherein the processor and the memory are usually connected to each other through an internal bus.
  • the device may further include an external interface to enable communication with other devices or components.
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is caused to:
  • an action image sequence corresponding to the interactive action currently displayed by the virtual interactive object is obtained.
  • the virtual interaction object includes multiple types of capability attributes; among which, the capability attributes of each category are divided into several attribute levels; when the attribute level is raised, a new interaction action is triggered to unlock;
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is further caused to:
  • an action image sequence corresponding to the unlocked interactive action is obtained.
  • the interactive operation includes an operation of performing image recognition on a target item in the real scene picture; the capability attributes of each category are associated with items for improving the capability attributes, respectively;
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is further caused to:
  • if the target item matches an item associated with the capability attribute of some category, the attribute value corresponding to the capability attribute of that category is raised by the first preset amplitude.
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is further caused to:
  • if the target item does not match any item associated with the capability attributes of the categories, the attribute value corresponding to the capability attribute of each category is raised by the second preset amplitude;
  • wherein the second preset amplitude is lower than the first preset amplitude.
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is further caused to:
  • if the target item is the designated item, a decorative element associated with the designated item is obtained and added to a decorative element set related to the virtual interactive object, so that the user may set decorative elements for the virtual interactive object based on the decorative elements in the set.
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is further caused to:
  • the calculated ability level is output to the user in the real scene picture.
  • the level promotion event includes: a promotion event of the attribute level of a capability attribute, and/or a promotion event of the capability level;
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is further caused to:
  • if the capability attributes of each category include only a single capability attribute whose attribute level has been promoted, that capability attribute is determined as the target capability attribute;
  • if the capability attributes of each category include multiple capability attributes whose attribute levels have been promoted, the capability attribute with the highest attribute value among them is determined as the target capability attribute;
  • By reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, the processor is further caused to:
  • if the capability attributes of each category include a unique capability attribute whose current attribute level is the same as the capability level, that capability attribute is determined as the target capability attribute;
  • if the capability attributes of each category include multiple capability attributes whose current attribute level is the same as the capability level, the capability attribute with the highest attribute value among them is determined as the target capability attribute;
  • and an action image sequence corresponding to the interactive action that is unlocked when the target capability attribute is promoted to the current attribute level is obtained.
  • the processor, by reading and executing the machine-executable instructions corresponding to the augmented reality-based interaction logic stored in the memory, is further caused to perform the remaining operations described in the method embodiments above.

Abstract

This specification proposes an augmented reality-based interaction method, including: performing enhanced display of a virtual interactive object in a scanned real scene picture; in response to a user-initiated interactive operation with the virtual interactive avatar, obtaining an action image sequence corresponding to an interactive action of the virtual interactive object; and rendering and fusing the obtained action image sequence with the real scene picture to generate an interactive image sequence, and generating an interaction record with the virtual interactive avatar based on the interactive image sequence.

Description

Augmented Reality-Based Interaction Method and Device
Technical Field
This application relates to the field of augmented reality, and in particular to an augmented reality-based interaction method and device.
Background
AR (Augmented Reality) technology is a technology that obtains a real scene picture by scanning the real environment in real time, superimposes corresponding virtual data (such as images, videos, and 3D models) on the real scene picture, and thereby fuses the virtual world with the real world. Since superimposing virtual data on the real scene picture can enhance the display effect of the real scene picture, introducing AR technology can provide users with a brand-new interactive experience.
Summary
This specification proposes an augmented reality-based interaction method, the method including:
performing enhanced display of a virtual interactive object in a scanned real scene picture;
in response to a user-initiated interactive operation with the virtual interactive avatar, obtaining an action image sequence corresponding to an interactive action of the virtual interactive object;
rendering and fusing the obtained action image sequence with the real scene picture to generate an interactive image sequence, and generating an interaction record with the virtual interactive avatar based on the interactive image sequence.
This specification also proposes an augmented reality-based interactive device, the device including:
a display module, which performs enhanced display of a virtual interactive object in a scanned real scene picture;
an obtaining module, which, in response to a user-initiated interactive operation with the virtual interactive avatar, obtains an action image sequence corresponding to an interactive action of the virtual interactive object;
a fusion module, which renders and fuses the obtained action image sequence with the real scene picture to generate an interactive image sequence;
a generating module, which generates an interaction record with the virtual interactive avatar based on the interactive image sequence.
This specification also proposes an electronic device, the electronic device including:
a processor;
a memory for storing machine-executable instructions;
wherein, by reading and executing machine-executable instructions stored in the memory that correspond to augmented reality-based interaction logic, the processor is caused to:
perform enhanced display of a virtual interactive object in a scanned real scene picture;
in response to a user-initiated interactive operation with the virtual interactive avatar, obtain an action image sequence corresponding to an interactive action of the virtual interactive object;
render and fuse the obtained action image sequence with the real scene picture to generate an interactive image sequence, and generate an interaction record with the virtual interactive avatar based on the interactive image sequence.
In the above technical solutions, on the one hand, since the virtual interactive object is enhanced for display in the real scene picture, the virtual interactive avatar can be incorporated into the real scene picture to interact with the user; therefore, the realism of the user's interaction with the virtual interactive object can be improved;
on the other hand, since a user-initiated interactive operation with the virtual interactive object triggers the rendering and fusion of the action image sequence corresponding to an interactive action of the virtual interactive object with the real scene picture, and an interaction record is generated according to the rendering and fusion result, the interactive operations initiated by the user ultimately affect the content recorded in the generated interaction record; the user can therefore, by initiating different forms of interactive operations, trigger the fusion of diverse interactive actions of the virtual interactive avatar with the real scene picture, so as to generate interaction records with rich content.
Brief Description of the Drawings
FIG. 1 is a flowchart of an augmented reality-based interaction method provided by an exemplary embodiment.
FIG. 2 is a schematic diagram, provided by an exemplary embodiment, of performing enhanced display of a virtual interactive object in a real scene picture.
FIG. 3 is a schematic diagram, provided by an exemplary embodiment, of a group photo interaction with a virtual interactive object.
FIG. 4 is a schematic diagram, provided by an exemplary embodiment, of a capability improvement interaction with a virtual interactive object.
FIG. 5 is a schematic diagram, provided by an exemplary embodiment, of decorating a virtual interactive object.
FIG. 6 is a schematic diagram, provided by an exemplary embodiment, of viewing the capability level of a virtual interactive object.
FIG. 7 is a schematic diagram, provided by an exemplary embodiment, of viewing interaction records with a virtual interactive object.
FIG. 8 is a block diagram of an augmented reality-based interactive device provided by an exemplary embodiment.
FIG. 9 is a schematic structural diagram of an electronic device provided by an exemplary embodiment.
Detailed Description
This specification aims to propose an interaction mode in which, by interacting with a virtual interactive object enhanced for display in a real scene picture, the user triggers the rendering and fusion of the action image sequence corresponding to an interactive action of the virtual interactive object with the real scene picture, after which an interaction record is generated.
In implementation, the user can perform image scanning of the real environment through an AR client; the AR client can respond to a user-initiated operation of displaying a virtual interactive object in the real scene picture, and perform enhanced display of a preset virtual interactive object in the scanned real scene picture;
for example, in practical applications, the virtual interactive object may specifically be a dynamic avatar capable of performing various interactive actions to interact with the user (for example, a dynamic 3D cartoon character); and in the scanned real scene picture, a "summon" button corresponding to the virtual interactive object can be provided, which the user can operate, for example by clicking, to trigger the AR client to perform enhanced display of the virtual interactive object at a relative position in the real scene picture.
After the AR client has enhanced the display of the virtual interactive object in the real scene picture, the user can interact with the virtual interactive object by initiating corresponding interactive operations; after detecting a user-initiated interactive operation, the AR client can respond to the user-initiated interactive operation with the virtual interactive object, obtain the action image sequence (for example, an action animation) corresponding to an interactive action of the virtual interactive object, and render and fuse the obtained action image sequence with the real scene picture to generate an interactive image sequence; then, based on the generated interactive image sequence, it can further generate an interaction record with the virtual interactive avatar.
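The summon-interact-fuse-record flow just described can be condensed into a small sketch. All names are illustrative assumptions (the AR server is modeled as a plain dictionary of action image sequences), not the actual client implementation:

```python
# Sketch of the obtain-fuse-record flow: fetch the action image sequence for
# the triggered interactive action, fuse it frame by frame with the real scene
# picture, and append the result to the interaction record. Names are assumed.

AR_SERVER_ACTIONS = {"default": ["wave_0", "wave_1"], "dance": ["dance_0", "dance_1"]}

def handle_interaction(action_name, scene_frames, interaction_log):
    frames = AR_SERVER_ACTIONS[action_name]                      # obtain sequence
    fused = [f"{s}+{a}" for s, a in zip(scene_frames, frames)]   # render and fuse
    interaction_log.append({"action": action_name, "sequence": fused})  # record
    return fused

log = []
handle_interaction("dance", ["scene_0", "scene_1"], log)
assert log[0]["sequence"] == ["scene_0+dance_0", "scene_1+dance_1"]
```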
In the above technical solution, by interacting with the virtual interactive object enhanced for display in the real scene picture, the user can trigger the rendering and fusion of the action image sequence corresponding to an interactive action of the virtual interactive object with the real scene picture, and further generate an interaction record based on the interactive image sequence produced by the rendering and fusion;
on the one hand, since the virtual interactive object is enhanced for display in the real scene picture, the virtual interactive avatar can be incorporated into the real scene picture to interact with the user; therefore, the realism of the user's interaction with the virtual interactive object can be improved;
on the other hand, since a user-initiated interactive operation with the virtual interactive object triggers the rendering and fusion of the action image sequence corresponding to an interactive action of the virtual interactive object with the real scene picture, and an interaction record is generated according to the rendering and fusion result, the interactive operations initiated by the user ultimately affect the content recorded in the generated interaction record; the user can therefore, by initiating different forms of interactive operations, trigger the fusion of diverse interactive actions of the virtual interactive avatar with the real scene picture, so as to generate interaction records with rich content.
The following describes this application through specific embodiments in combination with specific application scenarios.
Referring to FIG. 1, FIG. 1 shows an augmented reality-based interaction method provided by an embodiment of this application, applied to an AR client; the method performs the following steps:
Step 102: performing enhanced display of a virtual interactive object in a scanned real scene picture;
Step 104: in response to a user-initiated interactive operation with the virtual interactive avatar, obtaining an action image sequence corresponding to an interactive action of the virtual interactive object;
Step 106: rendering and fusing the obtained action image sequence with the real scene picture to generate an interactive image sequence;
Step 108: generating an interaction record with the virtual interactive avatar based on the interactive image sequence.
The AR client includes client software developed on the basis of AR technology or integrating AR functions; for example, the AR client may be an APP integrating AR service functions. The AR client can perform image scanning of the user's real environment through its built-in image scanning function to obtain a real scene picture; and, through the AR engine carried by the AR client, it can perform visual rendering of the virtual data pushed by the back-end AR server (for example, image sequences to be enhanced for display in the real scene picture) and superimpose and fuse the virtual data with the real scene picture, so as to complete the enhanced display of the virtual data in the real scene picture.
The AR server includes a server, a server cluster, or a distributed service platform built on a server cluster that provides services for the AR client; for example, the AR server may be a distributed platform providing docking services for an APP integrating AR functions. The AR server can perform content management on the virtual data that needs to be enhanced for display in the real scene picture scanned by the AR client, and push to the AR client the virtual data that needs to be enhanced for display in the real scene picture.
The virtual interactive object includes a dynamic avatar of any form capable of performing diverse interactive actions to interact with the user, for example, a dynamic 3D cartoon character capable of performing various interactive actions.
It should be noted that the interactive actions the virtual interactive object can perform may be designed, based on actual interaction needs, by the operator of the AR client or by a third-party ISV (Independent Software Vendor) providing services to the operator, with the corresponding action image sequences (that is, animation files related to the interactive actions) generated accordingly.
The generated action image sequences corresponding to the interactive actions of the virtual interactive object can be uniformly maintained and managed by the AR server as virtual data that needs to be enhanced for display in the real scene picture.
The interaction record is specifically used to record the results of the user's interaction with the virtual interactive object; the specific form of the interaction record is not particularly limited in this specification;
for example, in one implementation, taking the virtual interactive object being a 3D cartoon character as an example, the interaction record may specifically be abstracted into the form of a diary of the 3D cartoon character and presented to the user.
In this specification, an AR scanning entry can be provided in the user interface of the AR client; the user can trigger the scanning entry to enter the AR scanning interface and initiate image scanning of the real environment;
for example, in one implementation, a "Scan" entry can be provided on the user homepage of the AR client, and the user can trigger the "Scan" entry to enter the AR scanning interface and perform image scanning of the real environment.
The AR client can output the scanned real scene picture in real time in the AR scanning interface and, by default, perform enhanced display in the output real scene picture of an operation entry for triggering the enhanced display of the virtual interactive object in the real scene picture, so that the user can use this operation entry to initiate the display of the virtual interactive object in the real scene picture output in the AR scanning interface.
The following description takes the virtual interactive object being a virtual 3D cartoon character named "Xingbao" as an example.
Referring to FIG. 2, FIG. 2 is a schematic diagram of performing enhanced display of a virtual interactive object in a real scene picture as shown in this specification.
As shown in FIG. 2, the operation entry may specifically be a "summon" button corresponding to "Xingbao"; the user can trigger the "summon" button, for example by clicking it, to summon "Xingbao" into the real scene picture for enhanced display.
After detecting the user's trigger operation on the "summon" button, the AR client can obtain from the AR server the action image sequence (for example, an animation file) related to an interactive action of the virtual interactive object, and perform enhanced display of the obtained action image sequence at a relative position in the real scene picture.
For example, in one implementation, among the several interactive actions of the virtual interactive object maintained and managed on the AR server, one interactive action can be set for default display; after detecting the user's trigger operation on the "summon" button, the AR client can obtain from the AR server the action image sequence corresponding to the default-display interactive action, and then perform enhanced display of the obtained action image sequence in the real scene picture.
In this specification, after the AR client has enhanced the display of the virtual interactive object in the real scene picture, the user can interact with the virtual interactive object by initiating specific interactive operations in the real scene picture.
In one illustrated embodiment, the user can complete the interaction by interacting directly with the virtual interactive object;
for example, in implementation, the user can touch the virtual interactive object enhanced for display in the real scene picture to trigger the virtual interactive object to perform different interactive actions; in this case, the interactive action that the virtual interactive object performs can be randomly triggered by the user's touch operation on the virtual interactive object. After detecting the user's touch operation on the virtual interactive object, the AR client can obtain the action image sequence corresponding to an interactive action randomly assigned by the AR server, and then perform enhanced display of the obtained action image sequence in the real scene picture.
In another illustrated embodiment, the user can also interact with the virtual interactive object in specific interaction scenes through several interaction entries enhanced for display in the real scene picture by the AR client.
In this case, the AR client can perform enhanced display of several interaction entries in the real scene picture; each interaction entry may correspond to one interaction scene, defined by the operator of the AR client, in which the user interacts with the virtual interactive object; thus, by triggering these interaction entries, the user can initiate corresponding interactive operations in the real scene picture and interact with the virtual interactive object in different interaction scenes.
It should be noted that the interaction modes and specific interaction forms between the user and the virtual interactive object are not particularly limited in this specification; in practical applications, the operator of the AR client can flexibly customize diverse interaction scenes based on actual interaction needs.
The following provides a detailed description in combination with specific interaction scenes.
1)与“星宝”进行合影互动
在示出的一种互动场景下,用户可以发起“星宝”与实景画面进行合影互动,来触发AR客户端将“星宝”当前展示的互动动作对应的动作图像序列,与实景画面进行渲染融合后,生成合影互动的互动记录。
请参见图3,图3为本说明书示出的一种与虚拟互动对象进行合影互动的示意图。
如图3所示,AR客户端可以在实景画面中增强显示一个“合影互动”按钮;用户可以通过诸如点击等方式触发该“合影互动”按钮,在实景画面中发起与“星宝”进行合影的互动操作。
而AR客户端在检测到用户针对“合影互动”按钮的触发操作后,可以对用户发起的与“星宝”进行合影的互动操作进行响应,从AR服务端获取“星宝”当前展示的互动动作对应的动作图像序列。
进一步的，当AR客户端从AR服务端获取“星宝”当前展示的互动动作对应的动作图像序列之后，可以通过AR客户端搭载的AR引擎将获取到的动作图像序列，与实景画面进行渲染融合，生成互动图像序列；然后，可以基于生成的互动图像序列来进一步生成与“星宝”进行合影互动的互动记录。
例如,用户在旅行中,可以通过AR客户端扫描所处的真实环境,并在扫描到的实景画面中召唤“星宝”,然后通过在实景画面中发起与“星宝”进行合影互动,来触发AR客户端将“星宝”当前展示的互动动作对应的动画文件,与实景画面进行渲染融合,生成合影动画文件;然后,AR客户端可以基于该合影动画文件来生成“星宝”的“旅行日记”,作为用户与“星宝”之间的互动记录。
2)与“星宝”进行能力提升互动
在示出的另一种互动场景下,可以为“星宝”设置多个类别的能力属性;还可以为各个类别的能力属性划分出若干属性等级,并为“星宝”设置属性等级提升时会触发解锁新的互动动作的互动情景。
进一步的,还可以为“星宝”定义一种能够提升其能力属性的互动操作,用户可以通过在实景画面中发起该互动操作,来触发对应于“星宝”各个类别的能力属性的等级提升事件,为“星宝”解锁新的互动动作;
进而，AR客户端可以获取为“星宝”解锁的新的互动动作，将该解锁的新的互动动作对应的动作图像序列，与实景画面进行渲染融合后，生成提升“星宝”的能力属性的互动记录。
也即,在这种互动场景下,“星宝”的能力属性的提升,会触发AR客户端生成记录“星宝”的能力属性的提升的互动记录;并且,“星宝”不同的能力属性的提升,触发AR客户端生成的互动记录中的内容,也会各不相同。
在示出的一种实施方式中,为“星宝”定义的能够提升其能力属性的互动操作,具体可以是针对实景画面中的目标物品进行图像识别的互动操作。也即,用户可以发起针对实景画面的目标物品进行图像识别的互动操作,来触发对应于“星宝”各个类别的能力属性的等级提升事件,为“星宝”解锁新的互动动作;
在这种互动情景下，“星宝”的各个类别的能力属性，可以分别关联用于提升能力属性的物品集合。用户可以通过AR客户端扫描实景画面中与“星宝”的各个类别的能力属性关联的物品，帮助“星宝”探索真实世界，来触发对应于“星宝”各个类别的能力属性的等级提升事件。
以下以为“星宝”定义的能够提升其能力属性的互动操作，为针对实景画面中的目标物品进行图像识别的互动操作为例进行说明；
其中,需要强调的是,以为“星宝”定义的能够提升其能力属性的互动操作,为针对实景画面中的目标物品进行图像识别的互动操作为例进行说明,仅为示例性的;在实际应用中,AR客户端的运营方,也可以为虚拟互动对象定义其它形式的互动操作,来提升上述虚拟互动形象的能力属性,在本说明书中不再进行一一列举;
例如,在另一种情景下,为虚拟互动对象定义的能够提升其能力属性的互动操作,具体也可以是在新的LBS位置“召唤”虚拟互动形象的互动操作;而虚拟互动对象的各个类别的能力属性,可以分别关联用于提升能力属性的LBS位置集合。在这种互动情景下,用户可以通过AR客户端在一个新的LBS位置上召唤虚拟互动形象,来提升该虚拟互动对象对应的能力属性。
请参见图4,图4为本说明书示出的一种与虚拟互动对象进行能力提升互动的示意图。
如图4所示,AR客户端可以在实景画面中增强显示一个“探索世界”按钮。用户可以通过诸如点击等方式触发该“探索世界”按钮,发起对实景画面中的目标物品进行图像识别的互动操作,来帮助“星宝”探索真实世界,以提升“星宝”的能力属性。
AR客户端在检测到用户针对“探索世界”按钮的触发操作后,可以进入AR图像扫描界面,对用户发起的对实景画面中的目标物品进行图像识别的互动操作进行响应,对上述目标物品进行图像扫描,来采集上述目标物品的图像特征,并基于采集到的图像特征对上述目标物品进行图像识别;
其中,需要说明的是,以上描述的图像识别过程,可以在AR客户端上完成,也可以在AR服务端上完成;
例如，在一种情况下，AR客户端可以预先将AR服务端上存储的图像特征样本库同步到本地；其中，在该图像特征样本库中存储了大量预先定义的物品的图像特征样本。而AR客户端在采集到上述目标物品的图像特征后，可以将该图像特征与上述图像特征样本库中的图像特征样本分别进行相似度匹配；当提取到的图像特征与上述图像特征样本库中存储的任一预先定义的物品的图像特征样本匹配时，此时可以确认从扫描的实景图像中成功识别到了该目标物品。在另一种情况下，AR客户端也可以将采集到的上述目标物品的图像特征上传至AR服务端，由AR服务端基于以上示出的相同方式来完成图像识别，然后将图像识别结果返回给AR客户端。
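上述“将采集到的图像特征与图像特征样本库逐一进行相似度匹配”的过程，可以用下面的 Python 片段进行示意；其中，余弦相似度的度量方式、0.9 的匹配阈值以及 recognize 等函数名，均为本文为便于说明而作的假设，并非本说明书限定的具体实现：

```python
import math

def cosine_similarity(a, b):
    # 计算两个图像特征向量的余弦相似度
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recognize(feature, sample_library, threshold=0.9):
    """将采集到的图像特征与样本库中各物品的特征样本逐一比对，
    相似度达到阈值即认为识别成功，返回匹配到的物品；否则返回 None。"""
    best_item, best_score = None, 0.0
    for item, sample in sample_library.items():
        score = cosine_similarity(feature, sample)
        if score > best_score:
            best_item, best_score = item, score
    return best_item if best_score >= threshold else None
```

该片段同样适用于在AR服务端完成识别的情况：客户端上传特征，服务端调用同样的匹配逻辑后返回识别结果。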
当完成针对上述目标物品的图像识别后,AR客户端基于图像识别结果,来确认上述目标物品是否匹配与“星宝”的各个类别的能力属性关联的物品;
一方面,如果上述目标物品匹配与任一类别的能力属性关联的物品,此时AR客户端可以基于第一预设幅度提升该能力属性对应的属性值;
另一方面，如果上述目标物品与各个类别的能力属性关联的物品均不匹配，此时可以基于第二预设幅度同时提升各个类别的能力属性对应的属性值；其中，上述第二预设幅度可以远小于上述第一预设幅度。
通过这种方式，可以在上述目标物品匹配与任一类别的能力属性关联的物品时，按照一个较大的幅度来提升该能力属性对应的属性值；以及，在上述目标物品与各个类别的能力属性关联的物品均不匹配时，按照一个较小的幅度同时提升各个类别的能力属性对应的属性值。
通过这种方式,可以保证无论上述目标物品是否匹配与各个类别的能力属性关联的物品,各个类别的能力属性对应的属性值均能够保持一定大小的提升,从而可以提升用户的互动体验。
当然,在实际应用中,如果上述目标物品匹配与各个类别的能力属性关联的物品均不匹配,此时也可以不对各个类别的能力属性对应的属性值进行任何形式的提升,在本说明书中不进行特别限定。
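上述两种幅度的属性值提升规则，可以概括为如下 Python 片段；其中 first_delta、second_delta 的具体取值仅为示例性假设，实际的第一、第二预设幅度由运营方配置：

```python
def boost_attributes(attrs, matched_category, first_delta=10, second_delta=1):
    """根据图像识别结果提升能力属性值：
    目标物品匹配某一类别关联的物品时，按第一预设幅度提升该类别的属性值；
    与各个类别均不匹配时，按远小于第一预设幅度的第二预设幅度同时提升各个类别。"""
    if matched_category is not None:
        attrs[matched_category] += first_delta
    else:
        for category in attrs:
            attrs[category] += second_delta
    return attrs
```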
请参见图5,在示出的一种实施方式中,AR客户端还可以在实景画面中增强显示一个“装饰”按钮,用户可以通过诸如点击等方式触发该“装饰”按钮,发起对在实景画面中增强显示的“星宝”的虚拟形象进行装饰,为“星宝”自定义设置喜欢的虚拟形象。
而AR客户端在检测到用户针对“装饰”按钮的触发操作后,可以进入为“星宝”配置的装饰元素列表,此时用户可以在装饰元素列表中选择喜欢的装饰元素,对“星宝”进行装饰,来为“星宝”自定义设置喜欢的虚拟形象。
其中,上述装饰元素列表中呈现的装饰元素的种类以及具体的装饰元素,在本说明书不进行特别限定;
例如,图5中示出的装饰元素列表中,具体包括“头饰”、“颈饰”、“服饰”等种类的装饰元素。
请继续参见图5，在示出的一种实施方式中，上述装饰元素列表中提供的装饰元素中，可以包括少量的可供用户直接选择的装饰元素，还可以包括若干种尚未解除锁定的装饰元素；
其中,对于每种尚未解除锁定的装饰元素,可以分别关联一种或者多种指定物品。
在这种情况下，当完成针对上述目标物品的图像识别后，AR客户端还可以基于图像识别结果，确定该目标物品是否为与尚未解除锁定的各种装饰元素关联的指定物品；如果上述目标物品为上述指定物品，AR客户端可以获取与该指定物品关联的装饰元素，将该装饰元素添加至上述装饰元素列表，并将该装饰元素在上述装饰元素列表中进行解除锁定。
通过这种方式，使得用户在通过对实景画面中的目标物品进行图像识别，来提升“星宝”的能力属性时，还可以有一定概率获得用于对“星宝”进行装饰的装饰元素，从而可以提升用户的互动体验。
在示出的一种实施方式中,AR客户端除了可以为“星宝”的各个类别的能力属性划分属性等级以外,还可以针对“星宝”的各个类别的能力属性对应的属性值进行加权计算,得到“星宝”的总能力值,并根据总能力值来计算出“星宝”的能力等级,然后将计算得到的“星宝”的总能力值,和计算出的“星宝”的能力等级,在实景画面中向用户进行输出;
请参见图6，在示出的一种实施方式中，AR客户端可以在实景画面中增强显示一个“能力等级”按钮，用户可以通过诸如点击等方式触发该“能力等级”按钮，来查看“星宝”各个类别的能力属性的属性值，以及AR客户端计算出的“星宝”的总能力值和基于总能力值计算出的“星宝”的能力等级。
而AR客户端在检测到用户针对“能力等级”按钮的触发操作后,可以进入“星宝”的能力属性界面,通过能力属性界面向用户展示“星宝”的各个类别的能力属性的属性值,以及AR客户端计算出的“星宝”的总能力值和基于总能力值计算出的“星宝”能力等级。
例如，图6中示出的为“星宝”设置的能力属性，包括“生活”、“运动”、“创造”、以及“社交”等能力属性；在上述能力属性界面中，将展示“生活”、“运动”、“创造”、以及“社交”等能力属性对应的属性值；其中，AR客户端可以为以上四种能力属性分别设置一个权重系数，并将以上四种能力属性对应的属性值分别乘以设置的权重系数之后，再进行求和运算，得到“星宝”的总能力值；然后，再基于计算出的“星宝”的总能力值，计算出“星宝”的能力等级，将计算出的“星宝”的总能力值和能力等级，在上述能力属性界面中进行展示。
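上述“属性值分别乘以权重系数后求和”的总能力值计算，可以示意如下；各类别的权重系数取值仅为本文举例，并非本说明书限定：

```python
def total_ability(attr_values, weights):
    """将各个类别的能力属性对应的属性值分别乘以设置的权重系数后求和，
    得到虚拟互动对象的总能力值。"""
    return sum(value * weights[category] for category, value in attr_values.items())
```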
其中,需要说明的是,在本说明书中,无论是为“星宝”的各个类别的能力属性划分等级,还是为“星宝”的总能力值划分等级,均可以通过为每一个等级设置相应的阈值来实现;
例如,请参见图6,可以对“星宝”的“生活”、“运动”、“创造”、以及“社交”等能力属性划分出若干属性等级,并为每一个属性等级设置一个阈值,如果某一能力属性的属性值达到了与某一个属性等级对应的阈值后,即可以认为该能力属性提升至了该属性等级;
相似的,对于“星宝”的总能力值而言,也可以划分出若干能力等级,并为每一个能力等级设置一个阈值,如果通过加权计算得到的总能力值达到某一个能力等级对应的阈值后,即可以认为“星宝”的总能力值提升至了该能力等级;比如,如图6所示,“星宝”当前的能力等级为level5,下一能力等级level6的阈值为800,也即当“星宝”当前的总能力值达到800时,“星宝”的能力等级将会提升至level6。
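上述“为每一个等级设置相应阈值”的划分方式，无论用于属性等级还是能力等级，都可以用同一个查表逻辑示意；阈值列表中的具体数值为本文假设的示例：

```python
def level_of(value, thresholds):
    """按阈值表确定等级：当属性值（或总能力值）达到某一等级对应的阈值时，
    即认为提升至了该等级。thresholds 按等级从低到高给出各等级的阈值。"""
    level = 0
    for i, threshold in enumerate(thresholds, start=1):
        if value >= threshold:
            level = i
    return level
```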
在本说明书中,用户通过AR客户端扫描实景画面中与“星宝”的各个类别的能力属性关联的物品,触发的对应于“星宝”各个类别的能力属性的等级提升事件,具体可以包括以下示出的几类事件:
第一,“星宝”的各个类别的能力属性的属性等级提升的事件;
第二,“星宝”的能力等级的提升的事件;
第三,“星宝”的各个类别的能力属性的属性等级,和“星宝”的能力等级同时提升的事件。
在示出的一种实施方式中,以上述等级提升事件,为“星宝”的各个类别的能力属性的属性等级提升的事件为例,如果AR客户端完成对用户扫描的目标物品的图像识别,并根据图像识别结果对“星宝”的某一个或者多个类别的能力属性的属性值进行提升后,可以进一步确定提升后的属性值是否触发了“星宝”的能力属性的属性等级的提升;
例如，如果某一个类别的能力属性的属性值提升后，AR客户端可以将提升后的属性值，与该能力属性的各个属性等级对应的阈值进行比较，来确定提升后的属性值是否触发了该能力属性的属性等级的提升；比如，如图6所示，假设“星宝”的“运动”能力属性提升后的属性值为648，“运动”能力属性下一个属性等级的阈值为800；那么AR客户端可以将该属性值648，与阈值800进行比较；此时提升后的属性值小于阈值，可以认为提升后的属性值并未触发“星宝”的“运动”能力属性的属性等级的提升。
如果提升后的属性值触发了“星宝”的能力属性的属性等级的提升,AR客户端可以进一步确定出“星宝”的各个类别的能力属性中,属性等级发生提升的目标能力属性;
一方面，如果各个类别的能力属性中，仅包括属性等级提升的唯一能力属性，也即提升后的属性值仅触发了“星宝”的多个类别的能力属性中的单个能力属性的属性等级的提升，则可以将属性等级提升的该能力属性确定为目标能力属性；
例如,请参见图6,假设AR客户端根据图像识别结果,确定用户扫描的目标物品匹配与“星宝”的“生活”能力属性关联的物品,此时可以基于第一预设幅度提升“星宝”的“生活”能力属性对应的属性值;在这种情况下,如果“星宝”的“生活”能力属性提升后的属性值,触发了“生活”能力属性的属性等级提升事件,则可以将“星宝”的“生活”能力属性确定为目标能力属性。
另一方面,如果各个类别的能力属性中,包括属性等级提升的多个类别的能力属性,也即提升后的属性值触发了“星宝”的多个能力属性的属性等级的提升,此时可以进一步比较属性等级提升的该多个能力属性的属性值,将该多个能力属性中属性值最高的能力属性确定为目标能力属性。
请参见图6,假设AR客户端根据图像识别结果,确定用户扫描的目标物品与“星宝”的各个类别的能力属性关联的物品均不匹配,可以基于第二预设幅度分别提升“星宝”的“生活”、“运动”、“创造”、以及“社交”等能力属性对应的属性值;
在这种情况下，如果“星宝”的“生活”、“运动”、“创造”、以及“社交”等能力属性提升后的属性值，分别触发了“生活”、“运动”、“创造”、以及“社交”等能力属性的属性等级提升事件，则可以进一步比较“星宝”的“生活”、“运动”、“创造”、以及“社交”等能力属性的属性值，将属性值最高的“生活”能力属性，确定为目标能力属性。
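确定属性等级发生提升的目标能力属性的上述两种情形（唯一提升时直接取之；多个同时提升时取属性值最高者），可以概括为如下片段，函数名与数据结构均为本文假设：

```python
def target_attribute(leveled_up, attr_values):
    """从属性等级发生提升的能力属性中确定目标能力属性：
    仅有一个能力属性提升时直接取该属性；
    多个能力属性同时提升时，取其中属性值最高的能力属性。"""
    if not leveled_up:
        return None
    if len(leveled_up) == 1:
        return leveled_up[0]
    return max(leveled_up, key=lambda category: attr_values[category])
```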
进一步的，当AR客户端通过以上示出的方式，确定出属性等级发生提升的目标能力属性之后，可以从AR服务端获取该目标能力属性的属性等级提升时，解锁的最新的互动动作对应的动作图像序列，然后将获取到的动作图像序列，与实景画面进行渲染融合，生成互动图像序列，并基于生成的互动图像序列，来生成与“星宝”进行能力提升互动的互动记录。
例如,用户可以通过AR客户端扫描所处的真实环境,并在扫描到的实景画面中召唤“星宝”,然后通过发起针对实景画面中的与“星宝”的各个类别的能力属性关联的物品进行图像识别,来提升“星宝”的能力属性,解锁新的交互动作;如果“星宝”的能力属性的等级提升解锁了新的互动动作,AR客户端可以获取解锁的新的交互动作对应的动画文件,将该动画文件与实景画面进行渲染融合;然后,AR客户端可以基于渲染融合后生成的动画文件来生成“星宝”的“能力提升日记”,作为用户与“星宝”之间的互动记录。
其中,上述动作图像序列以及上述交互图像序列具体的文件格式,在本说明书中不进行特别限定;例如,在一种实施方式中,上述动作图像序列以及上述交互图像序列均可以为GIF格式的图片文件。
在示出的一种实施方式中,以上述等级提升事件,为“星宝”的总能力等级的提升事件为例,如果AR客户端完成对用户扫描的目标物品的图像识别,并根据图像识别结果对“星宝”的某一个或者多个类别的能力属性的属性值进行提升后,可以进一步确定提升后的属性值是否触发了“星宝”的能力等级的提升;
例如,当某一个或者多个类别的能力属性的属性值提升后,AR客户端可以对各个类别的能力属性的属性值,重新进行加权计算得到总能力值,并基于计算出的总能力值重新计算“星宝”的能力等级;然后,可以将重新计算出的“星宝”的能力等级,与“星宝”当前的能力等级进行比较,来确定“星宝”的能力等级是否提升。
如果提升后的属性值触发了“星宝”的能力等级的提升,AR客户端可以进一步确定出“星宝”的各个类别的能力属性中,当前的属性等级与“星宝”的能力等级相同的目标能力属性;
一方面,如果各个类别的能力属性中,仅包括当前的属性等级与“星宝”的能力等级相同的唯一能力属性,则可以将该能力属性确定为目标能力属性;
例如,请参见图6,假设“星宝”的能力等级,由level4提升到level5,此时AR客户端可以查找“星宝”的“生活”、“运动”、“创造”、以及“社交”等能力属性中是否也存在level5的能力属性;假设仅有“生活”能力属性为level5,其它能力属性均低于level5,则可以将“生活”能力属性确定为目标能力属性。
另一方面，如果各个类别的能力属性中，包括当前的属性等级与“星宝”的能力等级相同的多个能力属性，可以进一步比较该多个能力属性的属性值，将该多个能力属性中属性值最高的能力属性确定为目标能力属性。
例如,请参见图6,假设“星宝”的能力等级,由level4提升到level5,此时AR客户端可以查找“星宝”的“生活”、“运动”、“创造”、以及“社交”等能力属性中是否也存在level5的能力属性;假设“星宝”的“生活”、“运动”、“创造”、以及“社交”等能力属性均为level5,则可以将“生活”、“运动”、“创造”、以及“社交”等能力属性中,属性值取值最高的“生活”能力属性,确定为目标能力属性。
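总能力等级提升时确定目标能力属性的逻辑与前一情形类似，只是候选范围变为“当前属性等级与总能力等级相同”的能力属性，可以示意如下（函数名与数据结构为本文假设）：

```python
def target_on_ability_level_up(attr_levels, attr_values, ability_level):
    """总能力等级提升后，在当前属性等级与总能力等级相同的能力属性中
    确定目标能力属性：唯一满足条件的直接取之；多个满足条件的取属性值最高者。"""
    candidates = [c for c, level in attr_levels.items() if level == ability_level]
    if not candidates:
        return None
    return max(candidates, key=lambda category: attr_values[category])
```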
进一步的,当AR客户端通过以上示出的方式,确定出属性等级发生提升的目标能力属性之后,可以从AR服务端获取该目标能力属性的属性等级在提升至当前的属性等级时,解锁的互动动作对应的动作图像序列,然后将获取到的动作图像序列,与实景画面进行渲染融合,生成互动图像序列,并基于生成的互动图像序列,来生成与“星宝”进行能力提升互动的互动记录。
例如,AR客户端仍然可以基于渲染融合后生成的动画文件来生成“星宝”的“能力提升日记”,作为用户与“星宝”之间的互动记录。
在示出的一种实施方式中，以上述等级提升事件，为“星宝”的各个类别的能力属性的属性等级，和“星宝”的能力等级同时提升的事件为例，如果AR客户端完成对用户扫描的目标物品的图像识别，并根据图像识别结果对“星宝”的某一个或者多个类别的能力属性的属性值进行提升后，可以进一步确定提升后的属性值是否同时触发了“星宝”的能力属性的属性等级的提升，以及“星宝”的能力等级的提升；
如果提升后的属性值同时触发了“星宝”的能力属性的属性等级的提升，以及“星宝”的能力等级的提升，AR客户端可以进一步确定出“星宝”的各个类别的能力属性中，当前的属性等级与“星宝”的能力等级相同的目标能力属性；
一方面,如果各个类别的能力属性中,仅包括当前的属性等级与“星宝”的能力等级相同的唯一能力属性,则可以将该能力属性确定为目标能力属性;
另一方面，如果各个类别的能力属性中，包括当前的属性等级与“星宝”的能力等级相同的多个能力属性，可以进一步比较该多个能力属性的属性值，将该多个能力属性中属性值最高的能力属性确定为目标能力属性。
进一步的，当AR客户端通过以上示出的方式，确定出属性等级发生提升的目标能力属性之后，仍然可以从AR服务端获取该目标能力属性的属性等级在提升至当前的属性等级时，解锁的互动动作对应的动作图像序列，然后将获取到的动作图像序列，与实景画面进行渲染融合，生成互动图像序列，并基于生成的互动图像序列，来生成与“星宝”进行能力提升互动的互动记录，具体过程不再赘述。
在以上实施例中,以与“星宝”进行合影互动,以及与“星宝”进行能力提升互动的互动场景为例进行了说明;需要强调的是,以以上两种场景为例进行说明仅为示例性的,在实际应用中,AR客户端的运营方也可以基于实际的互动需求,来灵活定制其它形式的互动场景,在本说明书中不再进行一一列举。
在本说明书中，无论是在哪种互动场景下，AR客户端从AR服务端获取到的动作图像序列，通常并不包含用户为“星宝”自定义设置的装饰元素；比如，获取到的动作图像序列可能仅仅是“星宝”在执行某一个互动动作时的骨骼动画；因此，如果AR客户端直接将获取到的动作图像序列与实景画面进行渲染融合，则最终得到的互动图像序列中会缺失用户为“星宝”自定义设置的装饰元素，导致用户为“星宝”定义的虚拟形象在融合结果中丢失。
基于此,在本说明书中,在将获取到的动作图像序列,与实景画面进行渲染融合时,可以先将用户为“星宝”自定义设置的装饰元素在获取到的该动作图像序列中预设的位置进行增强显示,生成相应的动态虚拟形象之后,再将生成的动态虚拟形象与实景画面进行渲染融合。
在示出的一种实施方式中,AR客户端在从AR服务端获取到动作图像序列之后,首先可以获取用户当前为“星宝”设置的装饰元素,并将用户设置的装饰元素在获取到的动作图像序列中预设的位置上进行增强显示,生成“星宝”对应于该动作图像序列的动态虚拟形象;
其中,装饰元素在动作图像序列中增强显示的具体位置,通常取决于装饰元素能够修改的部位,在本说明书中不进行特别限定;例如,对于“星宝”的头饰而言,可以在“星宝”的动作动画中头部位置进行增强显示。
当生成了对应于该动作图像序列的动态虚拟形象之后,AR客户端可以再将生成的动态虚拟形象,与实景画面进行渲染融合,生成“星宝”当前的动态虚拟形象对应的互动图像序列。
相应的，当用户通过在上述装饰元素列表中重新为“星宝”设置装饰元素，对“星宝”的虚拟形象进行更新，此时AR客户端可以获取用户为“星宝”更新的装饰元素，并按照相同的方式，将用户更新的装饰元素，在动作图像序列中预设的位置上进行增强显示，对“星宝”的动态虚拟形象进行同步更新。
进一步的,AR客户端可以将“星宝”更新后的动态虚拟形象,与实景画面重新进行渲染融合,生成“星宝”更新后的动态虚拟形象对应的互动图像序列,然后再基于“星宝”更新后的动态虚拟形象对应的互动图像序列,对已经生成的互动记录中原有的“星宝”更新前的动态虚拟形象对应的互动图像序列进行同步更新。
在示出的另一种实施方式中,AR客户端也可以预先对上述装饰元素集合中的装饰元素进行排列组合生成若干种装饰元素搭配;
也即,通过排列组合的方式,列举出基于现有的装饰元素,能够得出的对“星宝”的所有装饰方案。
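对现有装饰元素进行排列组合、列举出全部装饰搭配方案的过程，可以用标准库 itertools 示意；其中装饰部位与元素名称均为假设的示例数据：

```python
from itertools import product

def decoration_schemes(elements_by_slot):
    """对各装饰部位（如头饰、颈饰、服饰）可选的装饰元素做笛卡尔积，
    列举出基于现有装饰元素能够得出的所有装饰搭配方案。"""
    slots = sorted(elements_by_slot)
    return [dict(zip(slots, combo))
            for combo in product(*(elements_by_slot[slot] for slot in slots))]
```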
当AR客户端从AR服务端获取到动作图像序列之后，可以将以上生成的每一种装饰元素搭配，分别在该动作图像序列中预设的位置上进行增强显示，生成“星宝”的对应于该动作图像序列的若干种动态虚拟形象；
进一步的,AR客户端可以将生成的“星宝”的每一种动态虚拟形象,分别与实景画面进行渲染融合,生成“星宝”对应于每一种动态虚拟形象的互动图像序列。
在这种情况下,AR客户端在基于生成的互动图像序列,为“星宝”生成互动记录时,可以从生成的“星宝”对应于每一种动态虚拟形象的互动图像序列中,挑选出与“星宝”当前的动态虚拟形象对应的互动图像序列,然后再根据挑选出的互动图像序列,来为“星宝”生成互动记录。
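从预先生成的各动态虚拟形象对应的互动图像序列中，挑选与当前形象匹配的序列，本质上是一次按形象（即装饰搭配）的查表操作，可示意如下；以装饰搭配的不可变集合作为键仅为本文假设的一种实现方式：

```python
def pick_sequence(sequences_by_avatar, current_avatar):
    """以当前动态虚拟形象（即其装饰搭配）为键，
    从预先生成的互动图像序列中挑选对应的序列；未命中时返回 None。"""
    return sequences_by_avatar.get(frozenset(current_avatar.items()))
```

当用户更新装饰元素后，只需用更新后的搭配重新查表，即可对互动记录中的互动图像序列进行同步更新，而无需重新渲染融合。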
相应的,当用户通过在上述装饰元素列表中重新为“星宝”设置装饰元素,对“星宝”的虚拟形象进行更新,此时AR客户端可以获取用户为“星宝”更新的装饰元素,并按照相同的方式,将用户更新的装饰元素,在动作图像序列中预设的位置上进行增强显示,对“星宝”的动态虚拟形象进行同步更新。
进一步的,AR客户端还可以从生成的“星宝”对应于每一种动态虚拟形象的互动图像序列中,挑选出与“星宝”更新后的动态虚拟形象对应的互动图像序列,然后基于挑选出的互动图像序列,对已经生成的互动记录中的原有的更新前的动态虚拟形象对应的互动图像序列进行同步更新。
通过这种方式，由于AR客户端已经预先生成了“星宝”对应于每一种动态虚拟形象的互动图像序列，因此当用户对“星宝”的虚拟形象进行更新后，对于AR客户端而言，不再需要将更新后的动态虚拟形象与实景画面重新进行渲染融合，而是可以直接从已经生成的“星宝”对应于每一种动态虚拟形象的互动图像序列中，挑选出与“星宝”更新后的动态虚拟形象对应的互动图像序列，立即对已经生成的互动记录中原有的“星宝”更新前的动态虚拟形象对应的互动图像序列进行同步更新即可。
也即，当“星宝”的虚拟形象发生改变后，生成的互动记录的内容也会实时地发生改变；对于用户来说，生成的互动记录在感知上是动态实时变化的，可以显著提升用户的互动体验。
在本说明书中，当AR客户端通过以上示出的实施过程，生成与“星宝”进行能力提升互动的互动记录之后，还可以将生成的互动记录向用户进行展示；
在示出的一种实施方式中,AR客户端可以在实景画面中增强显示一个“查看”的按钮,用户可以通过诸如点击等方式触发该“查看”按钮,来查看AR客户端生成的互动记录。
AR客户端在检测到用户针对“查看”按钮的触发操作后,可以进入互动记录展示界面,并通过该互动记录展示界面,向用户输出展示已经生成的互动记录;
例如,请参见图7,以将AR客户端生成的互动记录,抽象成“星宝”的日记的形式为例,上述“查看”按钮,具体可以是一个“日记”按钮,用户可以通过操作该按钮,来触发AR客户端输出星宝日记查看界面,来查看由AR客户端生成的诸如“星宝”的“旅行日记”、“能力提升日记”等等。
通过以上各实施例的技术方案,一方面,由于在实景画面中增强显示虚拟互动对象,可以将虚拟互动形象融入到实景画面中与用户进行互动;因此,可以提升用户与虚拟互动对象进行互动的真实感;
另一方面,由于用户发起的与虚拟互动对象的互动操作,会触发将虚拟互动对象的互动动作对应的动作图像序列与实景画面进行渲染融合,并根据渲染融合结果来生成互动记录;因此,对于用户而言,发起的与虚拟互动对象的互动操作,最终会影响生成的互动记录所记录的内容,使得用户可以通过发起不同形式的互动操作,来触发将虚拟互动形象的多样化的互动动作与实景画面进行融合,来生成内容丰富的互动记录。
与上述方法实施例相对应,本申请还提供了装置的实施例。
请参见图8，本申请提出一种基于增强现实的互动装置80，应用于AR客户端；请参见图9，承载所述基于增强现实的互动装置80的客户端所涉及的硬件架构中，通常包括CPU、内存、非易失性存储器、网络接口以及内部总线等；以软件实现为例，所述基于增强现实的互动装置80通常可以理解为加载在内存中的计算机程序，通过CPU运行之后形成的软硬件相结合的逻辑装置，所述装置80包括：
显示模块801,在扫描到的实景画面中增强显示虚拟互动对象;
获取模块802,响应于用户发起的与所述虚拟互动形象的互动操作,获取与所述虚拟互动对象的互动动作对应的动作图像序列;
融合模块803,将获取到的动作图像序列与所述实景画面进行渲染融合,生成互动图像序列;
生成模块804,基于所述互动图像序列生成与所述虚拟互动形象的互动记录。
在本实施例中,所述获取模块802:
响应于用户发起的与所述虚拟互动形象的互动操作,获取所述虚拟互动对象当前展示的互动动作对应的动作图像序列。
在本实施例中,所述虚拟互动对象包括多个类别的能力属性;其中,各个类别的能力属性被划分了若干属性等级;属性等级提升时会触发解锁新的互动动作;
所述获取模块802:
响应于用户发起的与所述虚拟互动形象的互动操作,确定所述互动操作是否触发了对应于所述能力属性的等级提升事件;
如果所述互动操作触发了对应于所述能力属性的等级提升事件,获取与解锁的互动动作对应的动作图像序列。
在本实施例中,所述互动操作包括针对所述实景画面中的目标物品进行图像识别的操作;各个类别的能力属性分别关联了用于提升能力属性的物品;
所述装置80还包括:
识别模块805(图8中未示出),响应于用户发起的与所述虚拟互动形象的互动操作,针对所述实景画面中的目标物品进行图像识别;
提升模块806(图8中未示出),基于所述图像识别的结果确定所述目标物品是否匹配与所述虚拟互动对象的各个类别的能力属性关联的物品;如果所述目标物品匹配 与任一类别的能力属性关联的物品,基于第一预设幅度提升该类别的能力属性对应的属性值。
在本实施例中,所述提升模块806进一步:
如果所述目标物品与各个类别的能力属性关联的物品均不匹配,基于第二预设幅度分别提升各个类别的能力属性对应的属性值;
其中,所述第二预设幅度低于所述第一预设幅度。
在本实施例中,所述装置80还包括:
装饰模块807(图8中未示出),基于所述图像识别的结果确定所述目标物品是否为指定物品;其中,所述指定物品关联了与所述虚拟互动对象相关的装饰元素;如果所述目标物品为所述指定物品,获取所述指定物品关联的装饰元素,并将该装饰元素添加至与所述虚拟互动对象相关的装饰元素列表,以由用户基于所述装饰元素列表中的装饰元素为所述虚拟互动对象设置装饰元素。
在本实施例中,所述装置80还包括:
计算模块808(图8中未示出),针对各个类别的能力属性对应的属性值进行加权计算,得到所述虚拟互动形象的能力值;基于所述能力值计算对应于所述虚拟互动形象的能力等级;
输出模块809(图8中未示出),将计算出的能力等级在所述实景画面中向用户输出。
在本实施例中,所述等级提升事件包括:
所述能力属性的属性等级的提升事件;和/或,所述能力等级的提升事件;
所述获取模块进一步:
确定提升后的属性值是否触发了所述能力属性的属性等级的提升;
和/或,确定提升后的属性值是否触发了所述能力等级的提升。
在本实施例中,所述获取模块802进一步:
确定各个类别的能力属性中,属性等级提升的目标能力属性;
如果各个类别的能力属性中,包括属性等级提升的唯一能力属性,将该能力属性确定为目标能力属性;
如果各个类别的能力属性中,包括属性等级提升的多个能力属性,将该多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
获取所述目标能力属性的属性等级提升时解锁的互动动作对应的动作序列图像。
在本实施例中,所述获取模块802进一步:
确定各个类别的能力属性中,当前的属性等级与所述能力等级相同的目标能力属性;
如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的唯一能力属性,将该能力属性确定为所述目标能力属性;
如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的多个能力属性时,将所述多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
获取所述目标能力属性在提升至当前的属性等级时解锁的互动动作对应的动作序列图像。
在本实施例中,所述融合模块803:
获取用户为所述虚拟互动对象设置的装饰元素;
将用户设置的装饰元素在所述动作图像序列中预设的位置上进行增强显示,以生成对应于所述虚拟互动对象的动态虚拟形象;
将所述动态虚拟形象与所述实景画面进行渲染融合,生成对应于所述动态虚拟形象的互动图像序列。
在本实施例中,所述融合模块803:
对预设的装饰元素集合中的装饰元素进行排列组合生成若干种装饰元素搭配;
将生成的所述若干种装饰元素搭配分别在所述动态图像序列中预设的位置上进行增强显示,以生成对应于所述虚拟互动对象的若干种虚拟形象;
将所述若干种虚拟形象分别与所述实景画面进行渲染融合,生成对应于所述若干种虚拟形象的互动图像序列。
在本实施例中,所述生成模块804:
从对应于所述若干种虚拟形象的互动图像序列中,获取与所述虚拟互动对象当前的虚拟形象对应的互动图像序列;
基于获取到的互动图像序列生成与所述虚拟互动形象的互动记录。
在本实施例中,所述获取模块802进一步:
获取用户为所述虚拟互动对象更新的装饰元素;
所述装置80还包括:
更新模块810(图8中未示出),将用户更新的装饰元素在所述动作图像序列中预设的位置上增强显示,以更新所述虚拟互动对象的虚拟形象;从对应于所述若干种虚拟形象的互动图像序列中,获取与所述更新后的虚拟形象对应的互动图像序列,并基于获取到的所述互动图像序列对所述互动记录中的互动图像序列进行同步更新。
上述装置中各个模块的功能和作用的实现过程具体详见上述方法中对应步骤的实现过程,在此不再赘述。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本说明书方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
上述实施例阐明的系统、装置、模块或模块,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机,计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
与上述方法实施例相对应,本说明书还提供了一种电子设备的实施例。该电子设备包括:处理器以及用于存储机器可执行指令的存储器;其中,处理器和存储器通常通过内部总线相互连接。在其他可能的实现方式中,所述设备还可能包括外部接口,以能够与其他设备或者部件进行通信。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
在扫描到的实景画面中增强显示虚拟互动对象;
响应于用户发起的与所述虚拟互动形象的互动操作,获取与所述虚拟互动对象的互动动作对应的动作图像序列;
将获取到的动作图像序列与所述实景画面进行渲染融合,生成互动图像序列,并基于所述互动图像序列生成与所述虚拟互动形象的互动记录。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
响应于用户发起的与所述虚拟互动形象的互动操作,获取所述虚拟互动对象当前展示的互动动作对应的动作图像序列。
在本实施例中,所述虚拟互动对象包括多个类别的能力属性;其中,各个类别的能力属性被划分了若干属性等级;属性等级提升时会触发解锁新的互动动作;
通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
响应于用户发起的与所述虚拟互动形象的互动操作,确定所述互动操作是否触发了对应于所述能力属性的等级提升事件;
如果所述互动操作触发了对应于所述能力属性的等级提升事件,获取与解锁的互动动作对应的动作图像序列。
在本实施例中,所述互动操作包括针对所述实景画面中的目标物品进行图像识别的操作;各个类别的能力属性分别关联了用于提升能力属性的物品;
通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
响应于用户发起的与所述虚拟互动形象的互动操作,针对所述实景画面中的目标物品进行图像识别;
基于所述图像识别的结果确定所述目标物品是否匹配与所述虚拟互动对象的各个类别的能力属性关联的物品;
如果所述目标物品匹配与任一类别的能力属性关联的物品,基于第一预设幅度提升该类别的能力属性对应的属性值。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
如果所述目标物品与各个类别的能力属性关联的物品均不匹配,基于第二预设幅度分别提升各个类别的能力属性对应的属性值;
其中,所述第二预设幅度低于所述第一预设幅度。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
基于所述图像识别的结果确定所述目标物品是否为指定物品;其中,所述指定物品关联了与所述虚拟互动对象相关的装饰元素;
如果所述目标物品为所述指定物品,获取所述指定物品关联的装饰元素,并将该装饰元素添加至与所述虚拟互动对象相关的装饰元素列表,以由用户基于所述装饰元素列表中的装饰元素为所述虚拟互动对象设置装饰元素。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
针对各个类别的能力属性对应的属性值进行加权计算,得到所述虚拟互动形象的能力值;
基于所述能力值计算对应于所述虚拟互动形象的能力等级;以及,
将计算出的能力等级在所述实景画面中向用户输出。
在本实施例中,所述等级提升事件包括:所述能力属性的属性等级的提升事件;和/或,所述能力等级的提升事件;
通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
确定提升后的属性值是否触发了所述能力属性的属性等级的提升;
和/或,确定提升后的属性值是否触发了所述能力等级的提升。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
确定各个类别的能力属性中,属性等级提升的目标能力属性;
如果各个类别的能力属性中,包括属性等级提升的唯一能力属性,将该能力属性确定为目标能力属性;
如果各个类别的能力属性中,包括属性等级提升的多个能力属性,将该多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
获取所述目标能力属性的属性等级提升时解锁的互动动作对应的动作序列图像。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
确定各个类别的能力属性中,当前的属性等级与所述能力等级相同的目标能力属性;
如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的唯一能力属性,将该能力属性确定为所述目标能力属性;
如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的多个能力属性时,将所述多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
获取所述目标能力属性在提升至当前的属性等级时解锁的互动动作对应的动作序列图像。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
获取用户为所述虚拟互动对象设置的装饰元素;
将用户设置的装饰元素在所述动作图像序列中预设的位置上进行增强显示,以生成对应于所述虚拟互动对象的动态虚拟形象;
将所述动态虚拟形象与所述实景画面进行渲染融合,生成对应于所述动态虚拟形象的互动图像序列。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
对预设的装饰元素集合中的装饰元素进行排列组合生成若干种装饰元素搭配;
将生成的所述若干种装饰元素搭配分别在所述动态图像序列中预设的位置上进行增强显示,以生成对应于所述虚拟互动对象的若干种虚拟形象;
将所述若干种虚拟形象分别与所述实景画面进行渲染融合,生成对应于所述若干种虚拟形象的互动图像序列。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
从对应于所述若干种虚拟形象的互动图像序列中,获取与所述虚拟互动对象当前的虚拟形象对应的互动图像序列;
基于获取到的互动图像序列生成与所述虚拟互动形象的互动记录。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的互动控制逻辑对应的机器可执行指令,所述处理器被促使:
获取用户为所述虚拟互动对象更新的装饰元素;
将用户更新的装饰元素在所述动作图像序列中预设的位置上增强显示,以更新所述虚拟互动对象的虚拟形象;以及,
从对应于所述若干种虚拟形象的互动图像序列中,获取与所述更新后的虚拟形象对应的互动图像序列,并基于获取到的所述互动图像序列对所述互动记录中的互动图像序列进行同步更新。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (29)

  1. 一种基于增强现实的互动方法,所述方法包括:
    在扫描到的实景画面中增强显示虚拟互动对象;
    响应于用户发起的与所述虚拟互动形象的互动操作,获取与所述虚拟互动对象的互动动作对应的动作图像序列;
    将获取到的动作图像序列与所述实景画面进行渲染融合,生成互动图像序列,并基于所述互动图像序列生成与所述虚拟互动形象的互动记录。
  2. 根据权利要求1所述的方法,所述响应于用户发起的与所述虚拟互动形象的互动操作,获取与所述虚拟互动对象的互动动作对应的动作图像序列,包括:
    响应于用户发起的与所述虚拟互动形象的互动操作,获取所述虚拟互动对象当前展示的互动动作对应的动作图像序列。
  3. 根据权利要求1所述的方法,所述虚拟互动对象包括多个类别的能力属性;其中,各个类别的能力属性被划分了若干属性等级;属性等级提升时会触发解锁新的互动动作;
    所述响应于用户发起的与所述虚拟互动形象的互动操作,获取与所述虚拟互动对象的互动动作对应的动作图像序列,包括:
    响应于用户发起的与所述虚拟互动形象的互动操作,确定所述互动操作是否触发了对应于所述能力属性的等级提升事件;
    如果所述互动操作触发了对应于所述能力属性的等级提升事件,获取与解锁的互动动作对应的动作图像序列。
  4. 根据权利要求3所述的方法,所述互动操作包括针对所述实景画面中的目标物品进行图像识别的操作;各个类别的能力属性分别关联了用于提升能力属性的物品;
    所述方法还包括:
    响应于用户发起的与所述虚拟互动形象的互动操作,针对所述实景画面中的目标物品进行图像识别;
    基于所述图像识别的结果确定所述目标物品是否匹配与所述虚拟互动对象的各个类别的能力属性关联的物品;
    如果所述目标物品匹配与任一类别的能力属性关联的物品,基于第一预设幅度提升该类别的能力属性对应的属性值。
  5. 根据权利要求4所述的方法,所述方法还包括:
    如果所述目标物品与各个类别的能力属性关联的物品均不匹配,基于第二预设幅度 分别提升各个类别的能力属性对应的属性值;
    其中,所述第二预设幅度低于所述第一预设幅度。
  6. 根据权利要求5所述的方法,还包括:
    基于所述图像识别的结果确定所述目标物品是否为指定物品;其中,所述指定物品关联了与所述虚拟互动对象相关的装饰元素;
    如果所述目标物品为所述指定物品,获取所述指定物品关联的装饰元素,并将该装饰元素添加至与所述虚拟互动对象相关的装饰元素列表,以由用户基于所述装饰元素列表中的装饰元素为所述虚拟互动对象设置装饰元素。
  7. 根据权利要求5所述的方法,所述方法还包括:
    针对各个类别的能力属性对应的属性值进行加权计算,得到所述虚拟互动形象的能力值;
    基于所述能力值计算对应于所述虚拟互动形象的能力等级;以及,
    将计算出的能力等级在所述实景画面中向用户输出。
  8. 根据权利要求7所述的方法,所述等级提升事件包括:
    所述能力属性的属性等级的提升事件;和/或,所述能力等级的提升事件;
    所述确定所述互动操作是否触发了对应于所述能力属性的等级提升事件,包括:
    确定提升后的属性值是否触发了所述能力属性的属性等级的提升;
    和/或,确定提升后的属性值是否触发了所述能力等级的提升。
  9. 根据权利要求8所述的方法,所述获取解锁的互动动作对应的动作图像序列,包括:
    确定各个类别的能力属性中,属性等级提升的目标能力属性;
    如果各个类别的能力属性中,包括属性等级提升的唯一能力属性,将该能力属性确定为目标能力属性;
    如果各个类别的能力属性中,包括属性等级提升的多个能力属性,将该多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
    获取所述目标能力属性的属性等级提升时解锁的互动动作对应的动作序列图像。
  10. 根据权利要求8所述的方法,所述获取解锁的互动动作对应的动作图像序列,包括:
    确定各个类别的能力属性中,当前的属性等级与所述能力等级相同的目标能力属性;
    如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的唯一能力属性,将该能力属性确定为所述目标能力属性;
    如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的多个能力属性时,将所述多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
    获取所述目标能力属性在提升至当前的属性等级时解锁的互动动作对应的动作序列图像。
  11. 根据权利要求1所述的方法,所述将获取到的动作图像序列与所述实景画面进行渲染融合,生成互动图像序列,包括:
    获取用户为所述虚拟互动对象设置的装饰元素;
    将用户设置的装饰元素在所述动作图像序列中预设的位置上进行增强显示,以生成对应于所述虚拟互动对象的动态虚拟形象;
    将所述动态虚拟形象与所述实景画面进行渲染融合,生成对应于所述动态虚拟形象的互动图像序列。
  12. 根据权利要求1所述的方法,所述将获取到的动作图像序列与所述实景画面进行渲染融合,生成互动图像序列,包括:
    对预设的装饰元素集合中的装饰元素进行排列组合生成若干种装饰元素搭配;
    将生成的所述若干种装饰元素搭配分别在所述动态图像序列中预设的位置上进行增强显示,以生成对应于所述虚拟互动对象的若干种虚拟形象;
    将所述若干种虚拟形象分别与所述实景画面进行渲染融合,生成对应于所述若干种虚拟形象的互动图像序列。
  13. 根据权利要求12所述的方法,所述基于所述互动图像序列生成与所述虚拟互动形象的互动记录,包括:
    从对应于所述若干种虚拟形象的互动图像序列中,获取与所述虚拟互动对象当前的虚拟形象对应的互动图像序列;
    基于获取到的互动图像序列生成与所述虚拟互动形象的互动记录。
  14. 根据权利要求13所述的方法,还包括:
    获取用户为所述虚拟互动对象更新的装饰元素;
    将用户更新的装饰元素在所述动作图像序列中预设的位置上增强显示,以更新所述虚拟互动对象的虚拟形象;以及,
    从对应于所述若干种虚拟形象的互动图像序列中,获取与所述更新后的虚拟形象对应的互动图像序列,并基于获取到的所述互动图像序列对所述互动记录中的互动图像序列进行同步更新。
  15. 一种基于增强现实的互动装置,所述装置包括:
    显示模块,在扫描到的实景画面中增强显示虚拟互动对象;
    获取模块,响应于用户发起的与所述虚拟互动形象的互动操作,获取与所述虚拟互动对象的互动动作对应的动作图像序列;
    融合模块,将获取到的动作图像序列与所述实景画面进行渲染融合,生成互动图像序列;
    生成模块,基于所述互动图像序列生成与所述虚拟互动形象的互动记录。
  16. 根据权利要求15所述的装置,所述获取模块:
    响应于用户发起的与所述虚拟互动形象的互动操作,获取所述虚拟互动对象当前展示的互动动作对应的动作图像序列。
  17. 根据权利要求15所述的装置,所述虚拟互动对象包括多个类别的能力属性;其中,各个类别的能力属性被划分了若干属性等级;属性等级提升时会触发解锁新的互动动作;
    所述获取模块:
    响应于用户发起的与所述虚拟互动形象的互动操作,确定所述互动操作是否触发了对应于所述能力属性的等级提升事件;
    如果所述互动操作触发了对应于所述能力属性的等级提升事件,获取与解锁的互动动作对应的动作图像序列。
  18. 根据权利要求17所述的装置,所述互动操作包括针对所述实景画面中的目标物品进行图像识别的操作;各个类别的能力属性分别关联了用于提升能力属性的物品;
    所述装置还包括:
    识别模块,响应于用户发起的与所述虚拟互动形象的互动操作,针对所述实景画面中的目标物品进行图像识别;
    提升模块,基于所述图像识别的结果确定所述目标物品是否匹配与所述虚拟互动对象的各个类别的能力属性关联的物品;如果所述目标物品匹配与任一类别的能力属性关联的物品,基于第一预设幅度提升该类别的能力属性对应的属性值。
  19. 根据权利要求18所述的装置,所述提升模块进一步:
    如果所述目标物品与各个类别的能力属性关联的物品均不匹配,基于第二预设幅度分别提升各个类别的能力属性对应的属性值;
    其中,所述第二预设幅度低于所述第一预设幅度。
  20. 根据权利要求19所述的装置,所述装置还包括:
    装饰模块,基于所述图像识别的结果确定所述目标物品是否为指定物品;其中,所 述指定物品关联了与所述虚拟互动对象相关的装饰元素;如果所述目标物品为所述指定物品,获取所述指定物品关联的装饰元素,并将该装饰元素添加至与所述虚拟互动对象相关的装饰元素列表,以由用户基于所述装饰元素列表中的装饰元素为所述虚拟互动对象设置装饰元素。
  21. 根据权利要求19所述的装置,所述装置还包括:
    计算模块,针对各个类别的能力属性对应的属性值进行加权计算,得到所述虚拟互动形象的能力值;基于所述能力值计算对应于所述虚拟互动形象的能力等级;
    输出模块,将计算出的能力等级在所述实景画面中向用户输出。
  22. 根据权利要求21所述的装置,所述等级提升事件包括:
    所述能力属性的属性等级的提升事件;和/或,所述能力等级的提升事件;
    所述获取模块进一步:
    确定提升后的属性值是否触发了所述能力属性的属性等级的提升;
    和/或,确定提升后的属性值是否触发了所述能力等级的提升。
  23. 根据权利要求22所述的装置,所述获取模块进一步:
    确定各个类别的能力属性中,属性等级提升的目标能力属性;
    如果各个类别的能力属性中,包括属性等级提升的唯一能力属性,将该能力属性确定为目标能力属性;
    如果各个类别的能力属性中,包括属性等级提升的多个能力属性,将该多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
    获取所述目标能力属性的属性等级提升时解锁的互动动作对应的动作序列图像。
  24. 根据权利要求23所述的装置,所述获取模块进一步:
    确定各个类别的能力属性中,当前的属性等级与所述能力等级相同的目标能力属性;
    如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的唯一能力属性,将该能力属性确定为所述目标能力属性;
    如果各个类别的能力属性中,包括当前的属性等级与所述能力等级相同的多个能力属性时,将所述多个能力属性中属性值最高的能力属性确定为所述目标能力属性;
    获取所述目标能力属性在提升至当前的属性等级时解锁的互动动作对应的动作序列图像。
  25. 根据权利要求15所述的装置,所述融合模块:
    获取用户为所述虚拟互动对象设置的装饰元素;
    将用户设置的装饰元素在所述动作图像序列中预设的位置上进行增强显示,以生成 对应于所述虚拟互动对象的动态虚拟形象;
    将所述动态虚拟形象与所述实景画面进行渲染融合,生成对应于所述动态虚拟形象的互动图像序列。
  26. 根据权利要求15所述的装置,所述融合模块:
    对预设的装饰元素集合中的装饰元素进行排列组合生成若干种装饰元素搭配;
    将生成的所述若干种装饰元素搭配分别在所述动态图像序列中预设的位置上进行增强显示,以生成对应于所述虚拟互动对象的若干种虚拟形象;
    将所述若干种虚拟形象分别与所述实景画面进行渲染融合,生成对应于所述若干种虚拟形象的互动图像序列。
  27. 根据权利要求26所述的装置,所述生成模块:
    从对应于所述若干种虚拟形象的互动图像序列中,获取与所述虚拟互动对象当前的虚拟形象对应的互动图像序列;
    基于获取到的互动图像序列生成与所述虚拟互动形象的互动记录。
  28. 根据权利要求27所述的装置,所述获取模块进一步:
    获取用户为所述虚拟互动对象更新的装饰元素;
    所述装置还包括:
    更新模块,将用户更新的装饰元素在所述动作图像序列中预设的位置上增强显示,以更新所述虚拟互动对象的虚拟形象;从对应于所述若干种虚拟形象的互动图像序列中,获取与所述更新后的虚拟形象对应的互动图像序列,并基于获取到的所述互动图像序列对所述互动记录中的互动图像序列进行同步更新。
  29. 一种电子设备,所述电子设备包括:
    处理器;
    用于存储机器可执行指令的存储器;
    其中,通过读取并执行所述存储器存储的与基于增强现实的互动逻辑对应的机器可执行指令,所述处理器被促使:
    在扫描到的实景画面中增强显示虚拟互动对象;
    响应于用户发起的与所述虚拟互动形象的互动操作,获取与所述虚拟互动对象的互动动作对应的动作图像序列;
    将获取到的动作图像序列与所述实景画面进行渲染融合,生成互动图像序列,并基于所述互动图像序列生成与所述虚拟互动形象的互动记录。
PCT/CN2019/096094 2018-08-27 2019-07-16 基于增强现实的互动方法及装置 WO2020042786A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810982275.8A CN109345637B (zh) 2018-08-27 2018-08-27 基于增强现实的互动方法及装置
CN201810982275.8 2018-08-27

Publications (1)

Publication Number Publication Date
WO2020042786A1 true WO2020042786A1 (zh) 2020-03-05

Family

ID=65291641

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096094 WO2020042786A1 (zh) 2018-08-27 2019-07-16 基于增强现实的互动方法及装置

Country Status (3)

Country Link
CN (2) CN113112614B (zh)
TW (1) TWI721466B (zh)
WO (1) WO2020042786A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112614B (zh) * 2018-08-27 2024-03-19 创新先进技术有限公司 基于增强现实的互动方法及装置
CN110430553B (zh) * 2019-07-31 2022-08-16 广州小鹏汽车科技有限公司 车辆间的互动方法、装置、存储介质及控制终端
CN110716645A (zh) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 一种增强现实数据呈现方法、装置、电子设备及存储介质
CN110941341B (zh) * 2019-11-29 2022-02-01 维沃移动通信有限公司 图像控制方法及电子设备
CN111083509B (zh) * 2019-12-16 2021-02-09 腾讯科技(深圳)有限公司 交互任务执行方法、装置、存储介质和计算机设备
CN113041615A (zh) * 2019-12-27 2021-06-29 阿里巴巴集团控股有限公司 场景呈现方法、装置、客户端、服务器、设备及存储介质
CN111192053B (zh) * 2020-01-11 2021-07-13 支付宝(杭州)信息技术有限公司 基于电子凭证的互动方法及装置、电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120092475A1 (en) * 2009-06-23 2012-04-19 Tencent Technology (Shenzhen) Company Limited Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene
CN105959718A (zh) * 2016-06-24 2016-09-21 乐视控股(北京)有限公司 一种视频直播中实时互动的方法及装置
CN107204031A (zh) * 2017-04-27 2017-09-26 腾讯科技(深圳)有限公司 信息展示方法及装置
CN107274465A (zh) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 一种虚拟现实的主播方法、装置和系统
CN108021896A (zh) * 2017-12-08 2018-05-11 北京百度网讯科技有限公司 基于增强现实的拍摄方法、装置、设备及计算机可读介质
CN109345637A (zh) * 2018-08-27 2019-02-15 阿里巴巴集团控股有限公司 基于增强现实的互动方法及装置

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101227237B1 (ko) * 2010-03-17 2013-01-28 에스케이플래닛 주식회사 복수의 마커를 이용하여 가상 객체간 인터렉션을 구현하는 증강현실 시스템 및 방법
US20110316845A1 (en) * 2010-06-25 2011-12-29 Palo Alto Research Center Incorporated Spatial association between virtual and augmented reality
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US8743244B2 (en) * 2011-03-21 2014-06-03 HJ Laboratories, LLC Providing augmented reality based on third party information
JP6192264B2 (ja) * 2012-07-18 2017-09-06 株式会社バンダイ 携帯端末装置、端末プログラム、拡張現実感システム、および衣類
US20160012136A1 (en) * 2013-03-07 2016-01-14 Eyeducation A.Y. LTD Simultaneous Local and Cloud Searching System and Method
US9791917B2 (en) * 2015-03-24 2017-10-17 Intel Corporation Augmentation modification based on user interaction with augmented reality scene
US10165199B2 (en) * 2015-09-01 2018-12-25 Samsung Electronics Co., Ltd. Image capturing apparatus for photographing object according to 3D virtual object
TWI628614B (zh) * 2015-10-12 2018-07-01 李曉真 立體虛擬實境的互動房屋瀏覽方法及其系統
CN105976417B (zh) * 2016-05-27 2020-06-12 腾讯科技(深圳)有限公司 动画生成方法和装置
CN106131536A (zh) * 2016-08-15 2016-11-16 万象三维视觉科技(北京)有限公司 一种裸眼3d增强现实互动展示系统及其展示方法
TWI585617B (zh) * 2016-08-31 2017-06-01 宅妝股份有限公司 互動方法及系統
CN106502671A (zh) * 2016-10-21 2017-03-15 苏州天平先进数字科技有限公司 一种基于虚拟人物互动的锁屏系统
CN111899003A (zh) * 2016-12-13 2020-11-06 创新先进技术有限公司 基于增强现实的虚拟对象分配方法及装置
CN111654473B (zh) * 2016-12-13 2022-07-19 创新先进技术有限公司 基于增强现实的虚拟对象分配方法及装置
CN107741809B (zh) * 2016-12-21 2020-05-12 腾讯科技(深圳)有限公司 一种虚拟形象之间的互动方法、终端、服务器及系统
CN108229937A (zh) * 2017-12-20 2018-06-29 阿里巴巴集团控股有限公司 基于增强现实的虚拟对象分配方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120092475A1 (en) * 2009-06-23 2012-04-19 Tencent Technology (Shenzhen) Company Limited Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene
CN105959718A (zh) * 2016-06-24 2016-09-21 乐视控股(北京)有限公司 一种视频直播中实时互动的方法及装置
CN107204031A (zh) * 2017-04-27 2017-09-26 腾讯科技(深圳)有限公司 信息展示方法及装置
CN107274465A (zh) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 一种虚拟现实的主播方法、装置和系统
CN108021896A (zh) * 2017-12-08 2018-05-11 北京百度网讯科技有限公司 基于增强现实的拍摄方法、装置、设备及计算机可读介质
CN109345637A (zh) * 2018-08-27 2019-02-15 阿里巴巴集团控股有限公司 基于增强现实的互动方法及装置

Also Published As

Publication number Publication date
CN113112614B (zh) 2024-03-19
TW202009682A (zh) 2020-03-01
CN113112614A (zh) 2021-07-13
CN109345637B (zh) 2021-01-26
CN109345637A (zh) 2019-02-15
TWI721466B (zh) 2021-03-11

Similar Documents

Publication Publication Date Title
WO2020042786A1 (zh) 基于增强现实的互动方法及装置
US10726637B2 (en) Virtual reality and cross-device experiences
CN107294838B (zh) 社交应用的动画生成方法、装置、系统以及终端
JP6349031B2 (ja) 画像に表されたオブジェクトの認識及び照合のための方法及び装置
CN109688451B (zh) 摄像机效应的提供方法及系统
WO2022237129A1 (zh) 视频录制方法、装置、设备、介质及程序
JP2021534473A (ja) 拡張現実環境におけるマルチデバイスマッピングおよび共同
CN109087376B (zh) 图像处理方法、装置、存储介质及电子设备
CN111314759B (zh) 视频处理方法、装置、电子设备及存储介质
CN115735229A (zh) 在消息收发系统中更新化身服装
TW202304212A (zh) 直播方法、系統、電腦設備及電腦可讀儲存媒體
CN115803723A (zh) 在消息收发系统中更新化身状态
CN112148404B (zh) 头像生成方法、装置、设备以及存储介质
US11899719B2 (en) Systems and methods for determining whether to modify content
WO2022048373A1 (zh) 图像处理方法、移动终端及存储介质
CN112261481A (zh) 互动视频的创建方法、装置、设备及可读存储介质
CN113760161A (zh) 数据生成、图像处理方法、装置、设备及存储介质
US11876634B2 (en) Group contact lists generation
WO2019134501A1 (zh) 模拟用户试装的方法、装置、存储介质及移动终端
WO2019100234A1 (zh) 实现信息互动的方法和装置
CN114047979A (zh) 展示项目配置及显示方法、装置、设备、存储介质
CN114067084A (zh) 图像展示方法及装置
US20230344953A1 (en) Camera settings and effects shortcuts
US20230252733A1 (en) Displaying blockchain data associated with a three-dimensional digital object
JP2020510936A (ja) 補正パターン分析による映像補正方法およびシステム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19854641

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19854641

Country of ref document: EP

Kind code of ref document: A1