CN113282179A - Interaction method, interaction device, computer equipment and storage medium - Google Patents

Interaction method, interaction device, computer equipment and storage medium

Info

Publication number
CN113282179A
CN113282179A
Authority
CN
China
Prior art keywords
navigation
target
special effect
task
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110681015.9A
Other languages
Chinese (zh)
Inventor
田真
李斌
欧华富
刘旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110681015.9A priority Critical patent/CN113282179A/en
Publication of CN113282179A publication Critical patent/CN113282179A/en
Priority to PCT/CN2022/085944 priority patent/WO2022262389A1/en
Priority to TW111121909A priority patent/TW202301082A/en
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Toxicology (AREA)
  • Human Computer Interaction (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an interaction method, an interaction apparatus, a computer device, and a storage medium, wherein the method includes: scanning a navigation ticket and starting an augmented reality (AR) environment; upon detecting that an AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, displaying a target AR special effect corresponding to the target navigation area in the AR environment; acquiring operation result information on the target AR special effect; and displaying, in the AR environment, a navigation check-in record corresponding to the target navigation task node based on the operation result information. In this way, interactivity can be improved; moreover, the generation process of the navigation check-in record can produce check-in records that are rich in content, diverse, and personalized, meeting users' needs.

Description

Interaction method, interaction device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an interaction method, an interaction apparatus, a computer device, and a storage medium.
Background
To facilitate users' tours of a scenic area, two-dimensional codes or guide boards are usually arranged at each scenic spot within the area. By scanning the two-dimensional code arranged at a scenic spot, a user can open an introduction page and learn about the spot through that page, or directly read the introduction text on the spot's guide board to obtain navigation information and related historical stories about the current spot.
Disclosure of Invention
The embodiment of the disclosure at least provides an interaction method, an interaction device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an interaction method, including: scanning a navigation ticket and starting an augmented reality (AR) environment; upon detecting that an AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, displaying a target AR special effect corresponding to the target navigation area in the AR environment; acquiring operation result information on the target AR special effect; and displaying, in the AR environment, a navigation check-in record corresponding to the target navigation task node based on the operation result information.
In this way, interactivity can be improved; moreover, the generation process of the navigation check-in record can produce check-in records that are rich in content, diverse, and personalized, meeting users' needs.
In an optional implementation, the interaction method further includes: generating a navigation task in response to a navigation event being triggered; wherein the navigation task comprises at least one navigation task node; each of the navigation task nodes corresponds to a navigation area in the target scene.
In this way, different navigation task nodes can be set for different navigation areas; after triggering a navigation event, the user can complete the tour of the scenic area by completing the navigation task nodes, which offers stronger participation and a more immersive user experience.
In an alternative embodiment, the triggering of the navigation event comprises at least one of: detecting that the position offset between the position of the AR device and the most recently generated navigation path is greater than a preset position offset threshold; determining, based on the location of the AR device, that the AR device is located within the target scene.
In this way, the navigation route is dynamically planned for the user according to the user's navigation state during navigation, which offers greater flexibility.
In an optional embodiment, the generating the navigation task includes: determining a target virtual identity corresponding to the AR device from a plurality of candidate virtual identities based on at least one of a first historical navigation check-in record, user attribute information, and selection information on the candidate virtual identities; and generating the navigation task based on the target virtual identity.
In this way, by determining a target virtual identity for the user, the user can enjoy immersive, story-driven navigation while performing a navigation task, giving a better navigation experience. Meanwhile, setting different virtual identities for different users allows interaction within the same task scenario, which can improve interaction among users.
In an optional embodiment, the generating the navigation task based on the target virtual identity includes: selecting candidate navigation areas from a plurality of navigation areas of the target scene based on a second historical navigation check-in record; and generating the navigation task based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene.
In this way, the second historical navigation check-in record can be used to determine candidate navigation areas for the user in a targeted manner; a corresponding navigation task is then generated from the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene, so that the navigation task is customized for the individual user.
In an optional embodiment, the target AR special effect comprises a target AR photographing template, and the operation result information comprises an AR special effect image; the acquiring operation result information on the target AR special effect includes: in response to a triggered photographing operation on the target AR photographing template, generating the AR special effect image comprising the target AR photographing template.
In this way, the user can also preserve AR special effect images from the navigation process by taking photographs.
In an optional embodiment, the method further comprises: determining the target AR photographing template from a plurality of candidate AR photographing templates based on a third historical navigation check-in record.
In an optional embodiment, the target AR special effect comprises a task special effect corresponding to an AR interaction task, and the operation result information comprises an interactive operation; the acquiring operation result information on the target AR special effect includes: in response to at least one interactive operation triggered based on the task special effect, generating an interactive image corresponding to at least part of the interactive operation.
In an optional embodiment, the presenting, in the AR environment, a navigation check-in record corresponding to the target navigation task node based on the operation result information includes: generating the navigation check-in record corresponding to the target navigation task node based on the operation result information and text input information.
In this way, the navigation check-in record also lets the user keep records of moods and impressions from the navigation process, improving the user experience.
In an optional embodiment, the method further comprises: in response to a navigation end event being triggered, generating an overall navigation record of the current navigation based on the navigation check-in records respectively corresponding to the at least one navigation task node.
In an alternative embodiment, the navigation end event includes at least one of: the AR device reaches a preset end-navigation area; an end-navigation control in the AR device is triggered; the state of the navigation task containing the at least one navigation task node changes from incomplete to complete.
In this way, different navigation end events can be set, making the method more flexible.
In an optional implementation, the interaction method further includes: in response to an information sharing event being triggered, generating sharing information including the navigation check-in record, and sending the sharing information to an information publishing platform corresponding to the information sharing event.
In this way, other users can browse the sharing information of the corresponding user, improving interactivity among users.
In an optional embodiment, the sharing information includes an access link for the navigation check-in record; the generating sharing information including the navigation check-in record comprises: sending the navigation check-in record to a server corresponding to the AR environment; and receiving the access link generated by the server based on the navigation check-in record.
In an alternative embodiment, the AR environment is implemented by a web-side or applet deployed in the AR device.
In a second aspect, an embodiment of the present disclosure further provides an interaction apparatus, including: a starting module, configured to scan a navigation ticket and start an augmented reality (AR) environment; a display module, configured to detect that the AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, and display a target AR special effect corresponding to the target navigation area in the AR environment; an acquisition module, configured to acquire operation result information on the target AR special effect; and a generating module, configured to display, in the AR environment, a navigation check-in record corresponding to the target navigation task node based on the operation result information.
In an optional implementation manner, the interaction apparatus further includes a first processing module, configured to: generating a navigation task in response to a navigation event being triggered; wherein the navigation task comprises at least one navigation task node; each of the navigation task nodes corresponds to a navigation area in the target scene.
In an alternative embodiment, the triggering of the navigation event comprises at least one of: detecting that the position offset between the position of the AR device and the most recently generated navigation path is greater than a preset position offset threshold; determining, based on the location of the AR device, that the AR device is located within the target scene.
In an optional embodiment, the first processing module, when generating the navigation task, is configured to: determine a target virtual identity corresponding to the AR device from a plurality of candidate virtual identities based on at least one of a first historical navigation check-in record, user attribute information, and selection information on the candidate virtual identities; and generate the navigation task based on the target virtual identity.
In an optional embodiment, the first processing module, when generating the navigation task based on the target virtual identity, is configured to: select candidate navigation areas from a plurality of navigation areas of the target scene based on a second historical navigation check-in record; and generate the navigation task based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene.
In an optional embodiment, the target AR special effect comprises a target AR photographing template, and the operation result information comprises an AR special effect image; the acquisition module, when acquiring operation result information on the target AR special effect, is configured to: in response to a triggered photographing operation on the target AR photographing template, generate the AR special effect image comprising the target AR photographing template.
In an optional implementation manner, the interaction apparatus further includes a second processing module, configured to: determine the target AR photographing template from a plurality of candidate AR photographing templates based on a third historical navigation check-in record.
In an optional embodiment, the target AR special effect comprises a task special effect corresponding to an AR interaction task, and the operation result information comprises an interactive operation; the acquisition module, when acquiring operation result information on the target AR special effect, is configured to: in response to at least one interactive operation triggered based on the task special effect, generate an interactive image corresponding to at least part of the interactive operation.
In an optional embodiment, the generating module, when displaying the navigation check-in record corresponding to the target navigation task node in the AR environment based on the operation result information, is configured to: generate the navigation check-in record corresponding to the target navigation task node based on the operation result information and text input information.
In an optional implementation manner, the interaction apparatus further includes a third processing module, configured to: in response to a navigation end event being triggered, generate an overall navigation record of the current navigation based on the navigation check-in records respectively corresponding to the at least one navigation task node.
In an alternative embodiment, the navigation end event includes at least one of: the AR device reaches a preset end-navigation area; an end-navigation control in the AR device is triggered; the state of the navigation task containing the at least one navigation task node changes from incomplete to complete.
In an optional implementation manner, the interaction apparatus further includes a fourth processing module, configured to: in response to an information sharing event being triggered, generate sharing information including the navigation check-in record, and send the sharing information to an information publishing platform corresponding to the information sharing event.
In an optional embodiment, the sharing information includes an access link for the navigation check-in record; the fourth processing module, when generating sharing information including the navigation check-in record, is configured to: send the navigation check-in record to a server corresponding to the AR environment; and receive the access link generated by the server based on the navigation check-in record.
In an alternative embodiment, the AR environment is implemented by a web-side or applet deployed in the AR device.
In a third aspect, an embodiment of the present disclosure further provides a computer device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor; the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed, the processor performs the steps in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when run, performs the steps in the first aspect or any one of the possible implementations of the first aspect.
For the description of the effects of the above interaction apparatus, computer device, and computer-readable storage medium, reference is made to the description of the above interaction method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without creative effort.
Fig. 1 shows a flowchart of an interaction method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic view of a navigation area in a target scene provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an AR effect provided by an embodiment of the disclosure;
FIG. 4 is a schematic diagram of an interactive image provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an interaction device provided by an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that, to facilitate users' tours of a scenic area, two-dimensional codes or guide boards are usually arranged at each scenic spot within the area. By scanning the two-dimensional code, a user can open an introduction page for the spot and learn about it through that page, or directly read the introduction text on the guide board arranged at the spot, thereby obtaining navigation information and related historical stories about the current spot.
In addition, users usually like to take photos at scenic spots and share them to certain information publishing platforms as a way of "checking in" at the spot. This check-in approach requires users to edit the check-in content and create the content themselves, which is cumbersome; moreover, the shared content is often monotonous and cannot meet users' needs.
Based on this research, the present disclosure provides an interaction method in which, by starting an augmented reality (AR) environment, a target navigation area can be presented with its corresponding target AR special effect after the user reaches that area; the user can perform relevant operations on the target AR special effect, and the AR device can display, in the AR environment, a navigation check-in record corresponding to the target navigation task node according to the operation result information on the target AR special effect, thereby improving interactivity. Meanwhile, the generation process of the navigation check-in record can produce check-in records that are rich in content, diverse, and personalized, meeting users' needs.
In addition, by setting navigation task nodes and assigning virtual identities to users, different task plots can be arranged in different navigation areas for users holding virtual identities to experience. Compared with learning navigation information from text alone, a user who executes task nodes under a virtual identity gains a stronger sense of role immersion and a better immersive experience, and therefore a better overall experience.
The discovery of the above-mentioned problems and the solutions proposed below for them are results of the inventors' practical and careful study, and should therefore be regarded as the inventors' contribution made in the course of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, an interaction method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the interaction method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: an AR device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the interaction method may be implemented by a processor invoking computer readable instructions stored in a memory.
The interaction method provided by the embodiments of the present disclosure is explained below.
Referring to fig. 1, a flowchart of an interaction method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S104, where:
s101: scanning a navigation ticket, and starting an augmented reality AR environment;
s102: detecting that an AR device reaches a target navigation area corresponding to a target navigation task node in at least one navigation task node, and displaying a target AR special effect corresponding to the target navigation area in an AR environment;
s103: acquiring operation result information on the target AR special effect;
s104: and displaying, in the AR environment, the navigation check-in record corresponding to the target navigation task node based on the operation result information.
The following describes the details of S101 to S104.
With respect to the above S101, the interaction method provided by the embodiment of the present disclosure may be applied, for example, to a scenario in which a user tours a scenic area. In one possible case, the navigation ticket may include, for example, the proof of purchase obtained by the user when buying the scenic area admission ticket. The navigation ticket may, for example, carry a printed or pasted recognizable two-dimensional code, or include a specific image that can be recognized to activate the AR environment, such as a scenic area map or a landscape image. After the AR device scans the navigation ticket, the two-dimensional code or the specific image can be identified and the AR environment started.
In another possible case, a two-dimensional code image that can initiate the augmented reality AR environment may be provided at a designated location of the scenic area, for example displayed on a guide board in the scenic area. In yet another possible case, the user may also be provided with a link directly into the AR environment, so that the AR device used by the user can open the AR environment with one touch.
When scanning the navigation ticket, a handheld terminal device such as a mobile phone or tablet computer may be used; alternatively, the navigation ticket may be scanned by an image acquisition device connected to a processing device. For example, an unmanned aerial vehicle connected to a mobile phone may scan the navigation ticket, and the mobile phone connected to the unmanned aerial vehicle may start the augmented reality AR environment. The AR environment is implemented by a web side or an applet deployed in the AR device.
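By way of illustration only, the following TypeScript sketch shows one way a scanned two-dimensional code might launch a web-side AR environment; the decoder function `decodeQrCode` and the ticket payload format are assumptions introduced here for illustration, not elements defined by the present disclosure.

```typescript
// Minimal sketch of ticket scanning and AR-environment startup.
interface TicketPayload {
  scenicSpotId: string;
  arEnvironmentUrl: string; // entry URL of the web page or applet hosting the AR environment
}

// Hypothetical decoder: maps a camera frame to the string encoded in the QR code, if any.
declare function decodeQrCode(frame: ImageData): string | null;

function tryStartArEnvironment(frame: ImageData): boolean {
  const raw = decodeQrCode(frame);
  if (raw === null) return false; // no recognizable code in this frame
  const payload: TicketPayload = JSON.parse(raw);
  // Since the AR environment is implemented by a web side or applet, starting
  // it can be as simple as navigating to its entry URL.
  window.location.href = payload.arEnvironmentUrl;
  return true;
}
```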
For the above S102, after the AR environment is started, it may be further detected that the AR device reaches a target navigation area corresponding to a target navigation task node in the at least one navigation task node, and a target AR special effect corresponding to the target navigation area is displayed in the AR environment.
In a specific implementation, the target AR special effect may, for example, comprise an AR special effect in a related navigation task; alternatively, it may comprise AR special effects for the user to record relevant information during navigation.
Wherein, in case that the target AR special effect includes an AR special effect in a related navigation task, the interaction method further includes: generating a navigation task in response to a navigation event being triggered; wherein the navigation task comprises at least one navigation task node; each of the navigation task nodes corresponds to a navigation area in the target scene.
The target scene may include, for example, a scenic area visited by the user. While touring the scenic area, the user may be presented with AR navigation information in the AR environment, such as related textual introductions, or may be provided with voice navigation information. In another possible case, the user may also choose to trigger a navigation event and further choose to complete related navigation tasks. A navigation task may, for example, be composed of task nodes that are associated with one another and carry a storyline. Under each task node, the user can complete the related tasks corresponding to the storyline at that node (such as its beginning, development, climax, and ending) to push the storyline forward, thereby synchronously completing the tour of the scenic spot; the AR special effects associated with different navigation task nodes are, for example, the AR special effects associated with the tasks corresponding to those nodes.
In addition, the navigation task may also be a check-in task composed of a plurality of check-in task nodes. At different task nodes, the user needs to reach different navigation areas, trigger the AR special effect corresponding to each navigation area there, and trigger photographing with the AR special effect to obtain the corresponding navigation check-in record. Here, the AR special effect may be, for example, any of various photographing templates, whose specific forms are described below and not detailed here.
In a specific implementation, the triggering of the navigation event includes, but is not limited to, at least one of the following (A1) or (A2):
(A1): Detecting that the position offset between the position of the AR device and the most recently generated navigation path is greater than a preset position offset threshold.
In this case, at least one navigation path may be determined for the user first upon detecting the user entering the target scene.
Specifically, for the target scene, a plurality of areas may be predetermined as selectable navigation areas. The determined plurality of navigation areas may serve as a plurality of selectable navigation path nodes, from which navigation paths may be further determined.
For example, with a zoo scenic area as the target scene, it may comprise three selectable navigation areas, such as an amphibian-and-reptile area, a bird area, and a marine animal area. When determining possible navigation paths, these three areas can each serve as a selectable navigation path node, and navigation paths can be determined through different permutations; determinable navigation paths include, for example, amphibian-and-reptile area → bird area → marine animal area, or amphibian-and-reptile area → marine animal area → bird area.
Alternatively, the navigation path may be determined based on the relative positions of the multiple selectable navigation areas. For example, if the marine animal area is located between the amphibian-and-reptile area and the bird area, the navigation path can be planned as amphibian-and-reptile area → marine animal area → bird area, so that the user does not have to backtrack repeatedly within the scenic area, improving the touring experience. Alternatively, the user's navigation path preference may be determined from the user's historical navigation records, so that a more suitable navigation path can be provided in a targeted manner.
Here, the manner of determining the navigation path may be determined according to actual circumstances, and is not limited herein.
In addition, for scenic areas with a larger footprint, or areas containing more navigable regions, a correspondingly larger number of navigation path nodes can be determined. In that case, while the user tours the scenic area, a new navigation path can be dynamically planned according to the current position and movement of the AR device. That is, multiple navigation paths may be generated over the course of the user's movement through the scenic area. This removes the limitation of a single navigation path and provides navigation paths to the user flexibly.
In the embodiment of the present disclosure, when detecting the position of the AR device, the position may be determined directly using, for example, the Global Positioning System (GPS). Compared with approaches based on SLAM (Simultaneous Localization and Mapping), this is simpler, consumes less computing power on the AR device, and is easy to deploy on lightweight devices such as mobile phones.
In addition, when the AR device displays the AR special effect in the AR environment, for example, a pre-generated high-precision map and/or a Simultaneous Localization and Mapping (SLAM) may be used to determine the pose of the AR device in the target scene, and then a display position of the AR special effect in the AR device is determined according to the pose of the AR special effect in the target scene and the pose of the AR device, and the AR special effect is displayed according to the display position.
In addition, after determining the location of the AR device, the position offset between the location of the AR device and the most recently generated navigation path may be determined. When the position offset is greater than a preset position offset threshold, it indicates that the user has deviated from the navigation path planned for them, for example by walking into an unopened tour area, into a staff office area of the scenic spot, or to other scenic spots not on the planned path. At this time, a navigation task may be generated accordingly to help the user return to a normal navigation path.
The preset offset threshold may be determined according to, for example, a floor area of the scenic region, a number of navigable areas in the scenic region, and the like, which is not limited herein. For example, when the footprint of the scenic spot is large, a position offset threshold corresponding to a large value, for example, 150 meters, may be set. A navigation task is generated when the AR device is located at a position offset more than 150 meters from the most recently generated navigation path.
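A minimal sketch of this (A1) trigger logic could look as follows, assuming the navigation path is stored as a polyline of GPS waypoints and approximating the offset as the distance to the nearest waypoint; all names here are illustrative, not defined by the disclosure.

```typescript
interface GeoPoint { lat: number; lon: number; }

const OFFSET_THRESHOLD_M = 150; // e.g. a large scenic area, per the example above

// Approximate ground distance using an equirectangular projection (adequate
// at scenic-area scale).
function metersBetween(a: GeoPoint, b: GeoPoint): number {
  const R = 6_371_000; // mean Earth radius in meters
  const dLat = ((b.lat - a.lat) * Math.PI) / 180;
  const dLon = ((b.lon - a.lon) * Math.PI) / 180;
  const meanLat = ((a.lat + b.lat) / 2) * (Math.PI / 180);
  const x = dLon * Math.cos(meanLat);
  return R * Math.sqrt(x * x + dLat * dLat);
}

// Offset of the device from the most recently generated path, taken here as
// the distance to the nearest waypoint (a segment-distance refinement is possible).
function pathOffsetMeters(device: GeoPoint, path: GeoPoint[]): number {
  return Math.min(...path.map((p) => metersBetween(device, p)));
}

function navigationEventTriggered(device: GeoPoint, lastPath: GeoPoint[]): boolean {
  return pathOffsetMeters(device, lastPath) > OFFSET_THRESHOLD_M;
}
```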
(A2): Determining, based on the location of the AR device, that the AR device is located within the target scene.
In this case, the navigation task may be generated upon determining that the AR device is located in the target scene. When the position of the AR device is detected, the position may be determined directly by using GPS, for example.
In this way, for a scenic area with multiple entrances, a navigation task can be generated in response to the user entering from any of them. Moreover, for a user already touring the scenic area, the corresponding navigation task can be generated immediately after the AR device starts the AR environment. This approach generates navigation tasks more efficiently.
Here, the two ways of triggering the navigation event described in (A1) and (A2) above are merely two illustrative examples, and no limitation is placed on the specific way in which the navigation event is determined to be triggered. In this embodiment, whether the navigation event is triggered may be determined in various ways, which are not enumerated here.
Upon determining that a navigation event is triggered, a navigation task may be generated accordingly.
In a specific implementation, the navigation task may be generated, for example, in the following manner: determining a target virtual identity corresponding to the AR device from a plurality of candidate virtual identities based on at least one of a first historical navigation check-in record, user attribute information, and selection information on the candidate virtual identities; and generating the navigation task based on the target virtual identity.
The first historical navigation check-in record may comprise, for example, check-in records of the user and/or the AR device from past navigation of the target scene. For example, it may include information about navigation tasks experienced in the target scene, the degree of completion of different navigation tasks, and so on.
The user attribute information may include, for example, user information entered in advance when the user registered for and/or used AR navigation. Specifically, it may include information such as the user's age, gender, and preferences regarding navigation task types. Illustratively, for a user a, the corresponding user attribute information may include, for example, "female", "25 years old", and "prefers history-themed navigation tasks".
The plurality of candidate virtual identities may, for example, be determined according to the specific target scene. For example, where the target scene includes a zoo, the candidate virtual identities may include, for example, an animal tamer and a breeder; where the target scene includes a museum, the candidate virtual identities may include, for example, a cultural relics restorer, a tour guide, and related historical figures; where the target scene includes a historical figure's former residence, the candidate virtual identities may include figures from historical events, such as the master residing in the residence, a visiting friend, the housekeeper, and students taught by the master. In different target scenes, a corresponding virtual identity may be randomly selected for the user to determine the selection information on the plurality of candidate virtual identities; alternatively, the user may be offered a choice among the plurality of candidate virtual identities, and the selection information may be determined in response to the user's selection of any of them.
Specifically, when determining the target virtual identity corresponding to the AR device from the plurality of candidate virtual identities, the target virtual identity specified by the user may, for example, be determined preferentially according to the user's selection information among the candidate identities. If the user does not specify a virtual identity, the first historical navigation check-in record may be presented to the user to determine whether the user wishes to continue an incomplete navigation task. Alternatively, where the user's historical navigation check-in records are sparse, a corresponding navigation task may be generated according to the user attribute information; for example, for user a, a history-themed navigation task may be provided. Alternatively, a target virtual identity may be recommended to the user according to the popularity of the different virtual identities. The specific manner of determining the target virtual identity may be decided according to the actual situation and is not detailed here.
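The priority order sketched below is one possible reading of the paragraph above (explicit user selection first, then an unfinished historical task, then user attributes, then popularity); the types and field names are assumptions for illustration.

```typescript
interface CandidateIdentity { id: string; theme: string; popularity: number; }
interface HistoryRecord { identityId: string; taskCompleted: boolean; }
interface UserAttributes { preferredTheme?: string; }

function pickTargetIdentity(
  candidates: CandidateIdentity[],
  userSelection: string | undefined,
  history: HistoryRecord[],
  attrs: UserAttributes,
): CandidateIdentity {
  // 1. An identity the user explicitly selected takes precedence.
  const selected = candidates.find((c) => c.id === userSelection);
  if (selected) return selected;

  // 2. Otherwise, offer to resume an incomplete historical navigation task.
  const unfinished = history.find((h) => !h.taskCompleted);
  const resumable = candidates.find((c) => c.id === unfinished?.identityId);
  if (resumable) return resumable;

  // 3. Otherwise, match user attribute preferences (e.g. history-themed tasks).
  const matching = candidates.find((c) => c.theme === attrs.preferredTheme);
  if (matching) return matching;

  // 4. Fall back to the most frequently chosen (most popular) identity.
  return candidates.reduce((a, b) => (b.popularity > a.popularity ? b : a));
}
```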
After the target virtual identity is determined, the navigation task may be generated according to it, specifically in the following manner: selecting candidate navigation areas from a plurality of navigation areas of the target scene based on a second historical navigation check-in record; and generating the navigation task based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene.
The second historical navigation check-in record may include the first historical navigation check-in record, and may also include check-in records from the current navigation.
When selecting candidate navigation areas from the plurality of navigation areas of the target scene based on the second historical navigation check-in record, for example, navigation areas that the user has not yet reached may be determined from the check-in records of the current navigation and taken as candidate navigation areas. Alternatively, navigation areas that the user has reached many times may be determined from the first historical navigation check-in record included in the second historical navigation check-in record and, in view of the user's evident preference for those areas, taken as candidate navigation areas.
After the candidate navigation areas are determined, the navigation task can be generated based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene.
For example, a plurality of different storylines corresponding to the respective virtual identities may be set in advance for each candidate navigation area, and corresponding AR special effect materials may be determined for each virtual identity under the different storylines. After the candidate navigation areas are determined, the order of the navigation task nodes corresponding to each candidate navigation area in the generated navigation task may be determined in a manner similar to the navigation path determination described above, which is not repeated here.
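As a sketch of this assembly step, the following illustrative code selects candidate areas not yet checked in, orders them by position as a crude stand-in for the path-planning logic described earlier, and attaches the AR special effect materials for the target virtual identity; the data shapes are assumptions.

```typescript
interface NavigationArea {
  id: string;
  position: { x: number; y: number };         // location within the target scene
  materialsByIdentity: Map<string, string[]>; // identity id -> AR material ids
}

interface TaskNode { areaId: string; arMaterials: string[]; }

function generateNavigationTask(
  identityId: string,
  allAreas: NavigationArea[],
  checkedInAreaIds: Set<string>, // from the second historical check-in record
): TaskNode[] {
  return allAreas
    .filter((a) => !checkedInAreaIds.has(a.id))  // candidate = not yet visited
    .sort((a, b) => a.position.x - b.position.x) // crude spatial ordering
    .map((a) => ({
      areaId: a.id,
      arMaterials: a.materialsByIdentity.get(identityId) ?? [],
    }));
}
```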
In another embodiment of the present disclosure, a specific example of a navigation task is also provided. In this example, the target scene includes a historical figure's former residence, which contains four navigation areas preserved in the residence: an office area Y1, a living area Y2, a guest area Y3, and a teaching area Y4. Referring to fig. 2, a schematic diagram of the navigation areas in a target scene is provided according to an embodiment of the present disclosure. Each navigation area can correspond to one task node in a navigation task; that is, a navigation task is formed from a plurality of task nodes.
In this example, two different navigation tasks are provided, each of which may, for example, restore a corresponding historical storyline. Illustratively, the embodiment of the disclosure lists the following two examples, navigation task B1 and navigation task B2:
Navigation task B1: the navigation task determined for user b. The virtual identity of user b is the master residing in the residence, and the candidate navigation areas determined for user b include Y1, Y2, Y3, and Y4, corresponding respectively to navigation task nodes M1_1, M1_2, M1_3, and M1_4 in the navigation task.
M1_1 includes: user b finishes reviewing office documents in the office area Y1 and has the housekeeper deliver the reviewed documents to a friend.
M1_2 includes: user b collects the day's newspaper in the living area Y2 and reads the day's news; user b then sends a visit invitation to the friend.
M1_3 includes: user b receives the visiting friend in the guest area Y3, communicates with the friend, and sees the friend off.
M1_4 includes: user b gives a lecture to students in the teaching area Y4 and answers the relevant questions they raise.
Referring to fig. 2, the navigation task nodes corresponding to the different navigation areas are shown. In addition, in navigation task B1, the sequence of the navigation task nodes may be set to M1_1 → M1_2 → M1_3 → M1_4, i.e., the direction indicated by the arrows in fig. 2.
Navigation task B2: the navigation task determined for user c. The virtual identity of user c is the visiting friend, and the candidate navigation areas determined for user c include Y1, Y3, and Y4, corresponding respectively to navigation task nodes M2_1, M2_2, and M2_3 in the navigation task.
M2_1 includes: user c visits the host's office area Y1 and receives the office documents handed over by the housekeeper.
M2_2 includes: user c communicates with the host in the guest area Y3 and goes to the teaching area Y4 afterwards.
M2_3 includes: user c listens to the host's lecture in the teaching area Y4 and adds supplementary remarks for the students.
Here, the navigation task B1 and the navigation task B2 are only examples of navigation tasks, and may be determined according to actual situations, and are not described herein again.
For each navigation task node, the target AR special effect comprises a task special effect corresponding to the AR interaction task, and the target AR special effect can correspond to a specific AR special effect material.
Illustratively, for the navigation task node M1_1, the specific AR special effect materials may include, for example, AR special effect materials for reviewing office documents on a desk, or AR special effect materials of the user picking up the office documents and handing them to the housekeeper for delivery. For the task node M2_2, the specific AR special effect materials may include, for example, AR special effect materials of the user pacing, or of drinking tea while communicating with the host. These may be determined according to the actual situation and are not limited here.
Here, since multiple different virtual identities set under the same historical storyline may have associated navigation task nodes, different users can also interact with each other in the real scene when executing tasks corresponding to associated navigation task nodes. For example, for user b and user c in the above example, when user b performs M1_3 and user c performs M2_2, the two users can have a conversation in the real scene. Therefore, besides a better immersive experience when each user experiences their own navigation task, this manner can also improve interaction among different users in the target scene.
In another embodiment of the present disclosure, where the target AR special effect includes an AR special effect for recording relevant information during the user's navigation, the target AR special effect may include, for example, a target AR photographing template.
Illustratively, the AR photographing template may include, for example, AR special effects associated with the navigation area. Fig. 3 is a schematic diagram illustrating an AR special effect according to an embodiment of the present disclosure. For example, where the navigation area includes an office area, the associated AR special effects may include virtual special effects such as a file cabinet, a pen rack, and an inkstone. In this case, the corresponding AR photographing template may include, for example, a photographing template containing these virtual special effects. Referring to fig. 3 (a), a schematic diagram of an AR photographing template provided in an embodiment of the present disclosure is shown; it includes a virtual special effect 31 corresponding to the file cabinet and a virtual special effect 32 corresponding to the pen rack and inkstone.
Where the navigation area includes a living area, the associated AR special effects may include, for example, virtual special effects such as a teapot boiling tea over a fire and a cup placed on a tea table. In this case, the corresponding AR photographing template may include, for example, a photographing template containing the teapot, cup, and similar virtual special effects. Referring to fig. 3 (b), a schematic diagram of another AR photographing template provided in the embodiment of the present disclosure is shown; it includes a virtual special effect 33 corresponding to the teapot and a virtual special effect 34 corresponding to the cup.
In addition, the photographing template may have different frames, or the special effects to be displayed may be determined according to the user's selection. This may be decided according to actual requirements and is not limited here.
For the above S103, when the operation result information on the target AR special effect is acquired, the operation result information that can be acquired is different for different target AR special effects.
The following describes the operation result information for two cases: the target AR special effect including a target AR photographing template, and the target AR special effect including a task special effect corresponding to an AR interaction task.
(C1): The target AR special effect includes a target AR photographing template.
In this case, the corresponding operation result information includes an AR special effect image.
When obtaining the operation result information on the target AR special effect, for example, the following manner may be adopted: in response to a triggered photographing operation on the target AR photographing template, generating the AR special effect image comprising the target AR photographing template.
In this case, for example, a photographing trigger button may be provided to the user. After the user triggers the photographing trigger button, for example, an image may be photographed by a camera, and an AR special effect corresponding to the target AR photographing template may be superimposed on the photographed image. In this way, an AR special effect image including the target AR photographing template can be generated.
In another possible implementation, in response to a triggered photographing operation on the target AR photographing template, a video of preset duration is captured by the camera, and a dynamic AR special effect corresponding to the target AR photographing template is superimposed on the video. The resulting AR special effect image thus shows the target AR photographing template together with a dynamic action effect.
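For the static-image case, a minimal compositing sketch using the standard Canvas 2D API might look as follows; treating the photographing template as a single overlay bitmap is an assumption made for illustration.

```typescript
// Draw the camera frame, then superimpose the target AR photographing
// template on top, yielding the AR special effect image as a data URL.
function composeArPhoto(
  cameraFrame: HTMLVideoElement,
  templateOverlay: HTMLImageElement,
  width: number,
  height: number,
): string {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(cameraFrame, 0, 0, width, height);     // photographed image
  ctx.drawImage(templateOverlay, 0, 0, width, height); // AR template on top
  return canvas.toDataURL("image/png");                // the AR special effect image
}
```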
When determining the target AR photographing template, it can be determined from a plurality of candidate AR photographing templates based on a third historical navigation check-in record. The third historical navigation check-in record may comprise, for example, historical check-in records of the user and/or the AR device for the same navigation area.
Specifically, from the third historical navigation check-in record, on one hand, the user's historical photographing situation in the navigation area can be determined, such as the number of photos taken historically; on the other hand, the user's preference among the multiple candidate AR photographing templates can also be determined.
Therefore, when determining the target AR photographing template, for example, an AR photographing template the user habitually uses may be chosen as the target; or an AR photographing template the user has not yet used may be chosen; or any one of the candidate AR photographing templates may be pushed to the user at random; or the current target AR photographing template may be determined according to the user's historical number of shots.
When the current target AR photographing template is determined according to the user's historical number of shots, a corresponding AR photographing template may be determined for each shot. For example, a gray static frame is displayed in the AR photographing template for the first shot; a white static frame for the second shot; a silver dynamic frame for the third shot; and a golden dynamic frame for the fourth shot. This can increase the user's interest and guide the user to keep using the AR photographing function to check in at more navigation areas.
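The count-based frame progression described above can be sketched as a simple lookup; the frame names mirror the example and the function itself is illustrative.

```typescript
type FrameStyle = "gray-static" | "white-static" | "silver-dynamic" | "golden-dynamic";

// `priorShotCount` is the number of shots already taken: 0 means this is the
// first shot, which receives the gray static frame, and so on.
function frameForShotCount(priorShotCount: number): FrameStyle {
  const progression: FrameStyle[] = [
    "gray-static",    // first shot
    "white-static",   // second shot
    "silver-dynamic", // third shot
    "golden-dynamic", // fourth shot and beyond
  ];
  const index = Math.min(priorShotCount, progression.length - 1);
  return progression[index];
}
```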
(C2): The target AR special effect comprises a task special effect corresponding to the AR interaction task.
In this case, the corresponding operation result information includes an interactive operation.
When obtaining the operation result information on the target AR special effect, for example, the following manner may be adopted: in response to at least one interactive operation triggered based on the task special effect, generating an interactive image corresponding to at least part of the interactive operation.
In this case, for example, a corresponding interactive operation may be set for the AR special effect. Illustratively, where the AR effect comprises a teapot boiling tea on a fire, the corresponding interactive operation may for example comprise removing the teapot and pouring the tea into a cup on a tea table. For the case that the AR special effect includes a file cabinet, the corresponding interactive operation may include, for example, an operation of taking out a file from the file cabinet and opening the file, and after the file is opened, text information in the file may be displayed for a user to read.
In addition, once the interactive operation is determined, an interactive image corresponding to at least part of the interactive operation, such as the tea-pouring interactive image described above or an interactive image of reading a document, may be generated accordingly. Referring to fig. 4, a schematic view of an interactive image provided in an embodiment of the present disclosure is shown, in which one frame of the interactive image is shown while the teapot 33 dynamically pours tea into the cup 34.
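As a non-authoritative sketch of how a task special effect could be bound to its permitted interactive operations and the interactive images they produce (the types and identifiers below are assumptions, not the disclosed API):

```typescript
// Hypothetical mapping from a task special effect to its interactive operations.
type InteractiveImage = { frames: string[] }; // e.g. URLs of rendered frames

interface InteractiveOperation {
  id: string;                     // e.g. "pour-tea", "open-file"
  render: () => InteractiveImage; // produces the corresponding interactive image
}

// Operations registered for the teapot special effect described above.
const teapotOperations: InteractiveOperation[] = [
  {
    id: "pour-tea",
    // One frame of the teapot dynamically pouring tea into the cup (cf. fig. 4).
    render: () => ({ frames: ["teapot_pour_frame_01.png"] }),
  },
];

// In response to a triggered interactive operation, generate the matching image.
function onInteraction(
  ops: InteractiveOperation[],
  triggeredId: string,
): InteractiveImage | undefined {
  return ops.find((op) => op.id === triggeredId)?.render();
}
```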
For the above S104, after the operation result information is obtained, the navigation card-punching record corresponding to the target navigation task node may also be displayed in the AR environment.
In a specific implementation, for example, a navigation card-punching record corresponding to the target navigation task node may be generated based on the operation result information and the text input information.
The text input information may include, for example, text entered by the user at the task node, such as the user's mood or impressions during the navigation.
Specifically, when generating the navigation card-punching record corresponding to the target navigation task node, a record that restores the user's navigation process at the target navigation task node may be generated directly from the operation result information, allowing the user to review the navigation process corresponding to that task node. Alternatively, the text input information entered by the user may be added to the navigation card-punching record to enrich it.
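A minimal sketch of assembling such a record from the operation result information plus optional text input might be as follows (the field names are assumptions for illustration):

```typescript
// Hypothetical shape of a navigation card-punching record; fields are assumed.
interface CheckInRecord {
  taskNodeId: string;
  operationResult: unknown; // AR special effect image or interactive image
  userText?: string;        // e.g. the user's mood or impressions, if entered
  createdAt: Date;
}

function buildCheckInRecord(
  taskNodeId: string,
  operationResult: unknown,
  userText?: string,
): CheckInRecord {
  return { taskNodeId, operationResult, userText, createdAt: new Date() };
}
```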
In another embodiment of the present disclosure, in response to the triggering of a navigation end event, a navigation general record of the current navigation may be generated based on the navigation card-punching records respectively corresponding to the at least one navigation task node.
Wherein the navigation end event includes, but is not limited to, at least one of the following (D1) to (D3):
(D1): The AR device reaches a preset navigation ending area.
In this case, a corresponding navigation ending area may, for example, be set in advance for the target scene.
For example, in the case that the target scene includes at least one exit, the area where each exit is located may be used as a navigation ending area. After it is determined, according to the location of the AR device, that the AR device has reached a navigation ending area, the user may be considered to have finished navigating the target scene, and the navigation may be ended. In this way, whether to end the navigation can be determined with simpler judgment logic.
(D2): An end-of-navigation control in the AR device is triggered.
In this case, a corresponding end-of-navigation control may, for example, be provided to the user. The user may select the time and place at which to end the navigation and trigger the end-of-navigation control accordingly; the navigation can then be ended directly in response to the user triggering the control.
(D3): The state of the navigation task containing the at least one navigation task node changes from incomplete to complete.
In this case, for example, when the user executes the navigation task corresponding to each navigation task node, the user's completion state for that navigation task node may be recorded. If the status of the navigation task corresponding to every navigation task node has changed to complete for the current user, it can be determined that the user has completed the navigation task, and the navigation can be ended.
Therefore, by selecting among different navigation end events, the user can either actively end the navigation or have it end according to the user's own state, which provides greater flexibility.
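The three triggers (D1) to (D3) could be combined in a single check along the following lines (the state shape is an assumption for illustration only):

```typescript
// Hypothetical navigation state covering the three end triggers (D1)-(D3).
interface NavigationState {
  inEndArea: boolean;           // (D1) device is inside a preset navigation ending area
  endControlTriggered: boolean; // (D2) user tapped the end-of-navigation control
  nodeStates: Record<string, "incomplete" | "complete">; // (D3) per-node status
}

function shouldEndNavigation(s: NavigationState): boolean {
  const nodes = Object.values(s.nodeStates);
  // Guard against an empty task list so (D3) cannot fire vacuously.
  const allNodesComplete =
    nodes.length > 0 && nodes.every((v) => v === "complete");
  return s.inEndArea || s.endControlTriggered || allNodesComplete;
}
```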
After it is determined that the navigation has ended, the navigation general record of the current navigation can be generated based on the navigation card-punching records respectively corresponding to the at least one navigation task node.
Illustratively, the navigation general record may include an album formed from the navigation card-punching records corresponding to the navigation task nodes. In this way, the user can also be provided with a record of the entire course of the navigation.
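The album-style general record might be assembled as sketched below; this is an assumption for illustration, with a minimal `CheckInRecord` mirroring the sketch shown earlier:

```typescript
// Hypothetical album assembly from per-node card-punching records.
interface CheckInRecord {
  taskNodeId: string;
  createdAt: Date;
}

interface NavigationGeneralRecord {
  album: CheckInRecord[];
}

function buildGeneralRecord(records: CheckInRecord[]): NavigationGeneralRecord {
  // Order the album by check-in time so it replays the whole navigation.
  const album = [...records].sort(
    (a, b) => a.createdAt.getTime() - b.createdAt.getTime(),
  );
  return { album };
}
```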
In a specific implementation, when displaying the navigation card-punching record, the AR special effect may, for example, be displayed in combination with an image captured by the AR device. For example, where the image captured by the AR device includes a navigation object within the navigation area, the position of the navigation object in the image can be determined from the image, and the display position of the AR special effect can be determined based on that position.
For another example, when the guide ticket is scanned, an image of the guide ticket is acquired; the position of the guide ticket in the image may then be used as a reference for determining the display position of the AR special effect. For instance, a display plane or a display space may be determined from the position of the guide ticket in the image, and the position of that display plane or display space serves as the display position of the AR special effect.
In addition, the position of the AR device in the target scene can be determined from the image captured by the AR device together with a pre-generated high-precision three-dimensional map of the target scene, and the display position of the AR special effect can then be determined from that position.
After the display position of the AR special effect is determined by any of the above methods, the navigation card-punching record is displayed in the AR environment according to that display position.
When the AR special effect is displayed in combination with an image captured by the AR device, the AR special effect may, for example, be rendered in front of the image captured by the AR device for display.
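Since the AR environment may run web-side (as noted later in this disclosure), this compositing step could be sketched with the browser Canvas API; the function below is illustrative only, with the display position supplied by any of the three positioning methods above:

```typescript
// Illustrative compositing: draw the camera frame first, then render the AR
// special effect in front of it at the computed display position.
function renderFrame(
  ctx: CanvasRenderingContext2D,
  cameraFrame: CanvasImageSource,  // current frame captured by the AR device
  effectSprite: CanvasImageSource, // pre-rendered AR special effect
  displayPos: { x: number; y: number }, // from object/ticket/3D-map positioning
): void {
  // Background layer: the live camera image, scaled to the canvas.
  ctx.drawImage(cameraFrame, 0, 0, ctx.canvas.width, ctx.canvas.height);
  // Foreground layer: the AR special effect rendered in front of the image.
  ctx.drawImage(effectSprite, displayPos.x, displayPos.y);
}
```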
In another embodiment of the present disclosure, in response to the triggering of an information sharing event, sharing information including the navigation card-punching record may be generated and sent to the information publishing platform corresponding to the information sharing event.
The sharing information includes an access link to the navigation card-punching record.
In a specific implementation, the sharing information including the navigation card-punching record may be generated, for example, in the following manner: sending the navigation card-punching record to the server corresponding to the AR environment, and receiving the access link generated by the server based on the navigation card-punching record.
Specifically, after the navigation card-punching record is determined, it may be transmitted to the server corresponding to the AR environment. After receiving the navigation card-punching record, the server may, for example, attach the user's related information, such as the user's user name, identity and avatar, and determine a corresponding access link.
After determining the corresponding access link, the server may also send the access link to the AR device. Using the access link, the AR device and other users' devices can view the navigation card-punching record and the related information of the user, so that relevant navigation information can be shared among multiple users with stronger interactivity. In addition, other users can comment on, like and forward the shared information through the access link, which also improves the interactivity among different users.
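A hedged sketch of this upload-then-link flow follows; the endpoint path and response shape are assumptions for illustration, not a disclosed API:

```typescript
// Hypothetical sharing flow: upload the card-punching record to the server
// behind the AR environment and receive back a shareable access link.
async function shareCheckInRecord(record: object): Promise<string> {
  const resp = await fetch("/api/checkin-records", { // assumed endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
  if (!resp.ok) {
    throw new Error(`upload failed: ${resp.status}`);
  }
  const { accessLink } = (await resp.json()) as { accessLink: string };
  return accessLink; // other users can open, comment on, like, or forward it
}
```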
It will be understood by those skilled in the art that, in the methods of the specific implementations described above, the order in which the steps are written does not imply a strict order of execution or constitute any limitation on the implementation process; the specific order of execution of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an interaction device corresponding to the interaction method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the interaction method described above in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not described.
Referring to fig. 5, a schematic diagram of an interaction apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes: a starting module 51, a display module 52, an obtaining module 53 and a generating module 54; wherein:
a starting module 51, configured to scan a guide ticket and start an augmented reality AR environment; a display module 52, configured to detect that an AR device reaches a target navigation area corresponding to a target navigation task node in at least one navigation task node, and display a target AR special effect corresponding to the target navigation area in the AR environment; an obtaining module 53, configured to obtain operation result information of the target AR special effect; a generating module 54, configured to display a navigation card punching record corresponding to the target navigation task node in the AR environment based on the operation result information.
In an optional implementation, the interaction apparatus further includes a first processing module 55, configured to: generate a navigation task in response to a navigation event being triggered; wherein the navigation task comprises at least one navigation task node, and each navigation task node corresponds to a navigation area in the target scene.
In an alternative embodiment, the triggering of the navigation event comprises at least one of: detecting that the position offset between the position of the AR device and the navigation path generated last time is greater than a preset position offset threshold; and determining, based on the location of the AR device, that the AR device is located within the target scene.
In an alternative embodiment, the first processing module 55, when generating the navigation task, is configured to: determine a target virtual identity corresponding to the AR device from a plurality of alternative virtual identities based on at least one of a first historical navigation card-punching record, user attribute information, and selection information on the alternative virtual identities; and generate the navigation task based on the target virtual identity.
In an alternative embodiment, the first processing module 55, when generating the navigation task based on the target virtual identity, is configured to: select an alternative navigation area from a plurality of navigation areas of the target scene based on a second historical navigation card-punching record; and generate the navigation task based on the target virtual identity, the AR special effect materials of the alternative navigation area under the various alternative virtual identities, and the position of the alternative navigation area in the target scene.
In an optional embodiment, the target AR special effect comprises: a target AR photographing template; and the operation result information comprises: an AR special effect image. The obtaining module 53, when obtaining the operation result information of the target AR special effect, is configured to: in response to a photographing operation triggered on the target AR photographing template, generate the AR special effect image including the target AR photographing template.
In an optional implementation, the interaction apparatus further includes a second processing module 56, configured to: determine the target AR photographing template from a plurality of alternative AR photographing templates based on a third historical navigation card-punching record.
In an optional embodiment, the target AR special effect comprises: a task special effect corresponding to an AR interaction task; and the operation result information comprises: an interactive operation. The obtaining module 53, when obtaining the operation result information of the target AR special effect, is configured to: in response to at least one interactive operation triggered based on the task special effect, generate an interactive image corresponding to at least part of the interactive operation.
In an optional embodiment, the generating module 54, when displaying the navigation card-punching record corresponding to the target navigation task node in the AR environment based on the operation result information, is configured to: generate the navigation card-punching record corresponding to the target navigation task node based on the operation result information and the text input information.
In an optional implementation, the interaction apparatus further includes a third processing module 57, configured to: in response to the triggering of a navigation end event, generate a navigation general record of the current navigation based on the navigation card-punching records respectively corresponding to the at least one navigation task node.
In an alternative embodiment, the navigation end event includes at least one of: the AR device reaches a preset navigation ending area; an end-of-navigation control in the AR device is triggered; and the state of the navigation task containing the at least one navigation task node changes from incomplete to complete.
In an optional embodiment, the interaction apparatus further includes a fourth processing module 58, configured to: in response to the triggering of an information sharing event, generate sharing information including the navigation card-punching record, and send the sharing information to the information publishing platform corresponding to the information sharing event.
In an optional embodiment, the sharing information includes: an access link to the navigation card-punching record; and the fourth processing module 58, when generating the sharing information including the navigation card-punching record, is configured to: send the navigation card-punching record to a server corresponding to the AR environment; and receive the access link generated by the server based on the navigation card-punching record.
In an alternative embodiment, the AR environment is implemented by a web-side or applet deployed in the AR device.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 6, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and the computer device includes:
a processor 10 and a memory 20; the memory 20 stores machine-readable instructions executable by the processor 10, and the processor 10 is configured to execute the machine-readable instructions stored in the memory 20; when the machine-readable instructions are executed by the processor 10, the processor 10 performs the following steps:
scanning a guide ticket, and starting an augmented reality AR environment; detecting that an AR device reaches a target navigation area corresponding to a target navigation task node in at least one navigation task node, and displaying a target AR special effect corresponding to the target navigation area in an AR environment; acquiring operation result information of the target AR special effect; and displaying the navigation card punching record corresponding to the target navigation task node in the AR environment based on the operation result information.
The memory 20 includes an internal memory 210 and an external memory 220. The internal memory 210, also referred to as main memory, temporarily stores operation data of the processor 10 as well as data exchanged with the external memory 220, such as a hard disk; the processor 10 exchanges data with the external memory 220 through the internal memory 210.
The specific execution process of the instruction may refer to the steps of the interaction method described in the embodiments of the present disclosure, and details are not described here.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the interaction method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product carrying program code, and the instructions included in the program code may be used to execute the steps of the interaction method described in the above method embodiments; reference may be made to the above method embodiments for details, which are not repeated here.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described here again.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, apparatuses or units, and may be in electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with this technical field may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes to them, or make equivalent substitutions for some of their technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall all be covered within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (17)

1. An interaction method, comprising:
scanning a guide ticket, and starting an augmented reality AR environment;
detecting that an AR device reaches a target navigation area corresponding to a target navigation task node in at least one navigation task node, and displaying a target AR special effect corresponding to the target navigation area in an AR environment;
acquiring operation result information of the target AR special effect;
and displaying the navigation card punching record corresponding to the target navigation task node in the AR environment based on the operation result information.
2. The interaction method of claim 1, further comprising:
generating a navigation task in response to a navigation event being triggered; wherein the navigation task comprises at least one navigation task node; each of the navigation task nodes corresponds to a navigation area in the target scene.
3. The interaction method according to claim 2, wherein the navigation event is triggered, comprising at least one of:
detecting that the position offset between the position of the AR device and the navigation path generated last time is greater than a preset position offset threshold;
determining that the AR device is located within the target scene based on the location of the AR device.
4. The interaction method according to claim 2 or 3, wherein the generating of the navigation task comprises:
determining a target virtual identity corresponding to the AR device from a plurality of alternative virtual identities based on at least one of a first historical navigation card punching record, user attribute information and selection information of the alternative virtual identities;
generating the navigation task based on the target virtual identity.
5. The interaction method of claim 4, wherein generating the navigation task based on the target virtual identity comprises:
selecting an alternative navigation area from a plurality of navigation areas of the target scene based on a second historical navigation card punching record;
and generating the navigation task based on the target virtual identity, the AR special effect materials of the alternative navigation areas under various alternative virtual identities and the positions of the alternative navigation areas in the target scene.
6. The interaction method according to any of claims 1 to 5, wherein the target AR special effect comprises: a target AR photographing template; the operation result information includes: an AR special effect image;
the acquiring operation result information of the target AR special effect includes:
in response to a photographing operation triggered on the target AR photographing template, generating the AR special effect image comprising the target AR photographing template.
7. The interaction method of claim 6, further comprising:
determining the target AR photographing template from a plurality of alternative AR photographing templates based on a third historical navigation card punching record.
8. The interaction method according to any of claims 1 to 7, wherein the target AR special effect comprises: a task special effect corresponding to an AR interaction task; and the operation result information comprises: an interactive operation;
the acquiring operation result information of the target AR special effect includes:
in response to at least one interactive operation triggered based on the task special effect, generating an interactive image corresponding to at least part of the interactive operation.
9. The interaction method according to any one of claims 1 to 8, wherein said presenting a navigation card-punching record corresponding to the target navigation task node in the AR environment based on the operation result information comprises:
generating the navigation card punching record corresponding to the target navigation task node based on the operation result information and text input information.
10. The interaction method according to any one of claims 1 to 9, further comprising:
in response to the triggering of a navigation end event, generating a navigation general record of the current navigation based on the navigation card punching records respectively corresponding to the at least one navigation task node.
11. The interactive method of claim 10, wherein the navigation end event comprises at least one of:
the AR device reaches a preset navigation ending area;
an end-of-navigation control in the AR device is triggered;
the state of the navigation task containing the at least one navigation task node is changed from incomplete to complete.
12. The interaction method according to any one of claims 1 to 11, further comprising:
in response to the triggering of an information sharing event, generating sharing information including the navigation card punching record, and sending the sharing information to an information publishing platform corresponding to the information sharing event.
13. The interaction method according to claim 12, wherein the sharing information comprises: an access link for the navigation card punching record;
the generating of the sharing information including the navigation card punching record comprises:
sending the navigation card punching record to a server corresponding to the AR environment;
and receiving the access link generated by the server based on the navigation card punching record.
14. The interaction method according to any of claims 1 to 13, wherein the AR environment is implemented by a web-side or applet deployed in the AR device.
15. An interactive apparatus, comprising:
the starting module is used for scanning the guide ticket and starting the augmented reality AR environment;
the display module is used for detecting that the AR equipment reaches a target navigation area corresponding to a target navigation task node in at least one navigation task node, and displaying a target AR special effect corresponding to the target navigation area in the AR environment;
the acquisition module is used for acquiring operation result information of the target AR special effect;
and the generating module is used for displaying the navigation card punching record corresponding to the target navigation task node in the AR environment based on the operation result information.
16. A computer device, comprising: a processor, a memory storing machine-readable instructions executable by the processor, the processor for executing the machine-readable instructions stored in the memory, the processor performing the steps of the interaction method of any of claims 1 to 14 when the machine-readable instructions are executed by the processor.
17. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a computer device, performs the steps of the interaction method according to any one of claims 1 to 14.
CN202110681015.9A 2021-06-18 2021-06-18 Interaction method, interaction device, computer equipment and storage medium Pending CN113282179A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110681015.9A CN113282179A (en) 2021-06-18 2021-06-18 Interaction method, interaction device, computer equipment and storage medium
PCT/CN2022/085944 WO2022262389A1 (en) 2021-06-18 2022-04-08 Interaction method and apparatus, computer device and program product, storage medium
TW111121909A TW202301082A (en) 2021-06-18 2022-06-13 Interaction method, computer device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110681015.9A CN113282179A (en) 2021-06-18 2021-06-18 Interaction method, interaction device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113282179A true CN113282179A (en) 2021-08-20

Family

ID=77285061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110681015.9A Pending CN113282179A (en) 2021-06-18 2021-06-18 Interaction method, interaction device, computer equipment and storage medium

Country Status (3)

Country Link
CN (1) CN113282179A (en)
TW (1) TW202301082A (en)
WO (1) WO2022262389A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022262389A1 (en) * 2021-06-18 2022-12-22 上海商汤智能科技有限公司 Interaction method and apparatus, computer device and program product, storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180590A1 * 2014-12-23 2016-06-23 Intel Corporation Systems and methods for contextually augmented video creation and sharing
CN109067839A (en) * 2018-06-29 2018-12-21 北京小米移动软件有限公司 Push visit tutorial message, the method and device for creating sight spot information database
CN111640202A (en) * 2020-06-11 2020-09-08 浙江商汤科技开发有限公司 AR scene special effect generation method and device
CN112948686A (en) * 2021-03-25 2021-06-11 支付宝(杭州)信息技术有限公司 Position recommendation processing method and device
CN112947756A (en) * 2021-03-03 2021-06-11 上海商汤智能科技有限公司 Content navigation method, device, system, computer equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170329394A1 (en) * 2016-05-13 2017-11-16 Benjamin Lloyd Goldstein Virtual and augmented reality systems
CN112927293A (en) * 2021-03-26 2021-06-08 深圳市慧鲤科技有限公司 AR scene display method and device, electronic equipment and storage medium
CN113282179A (en) * 2021-06-18 2021-08-20 北京市商汤科技开发有限公司 Interaction method, interaction device, computer equipment and storage medium


Also Published As

Publication number Publication date
TW202301082A (en) 2023-01-01
WO2022262389A1 (en) 2022-12-22

Similar Documents

Publication Publication Date Title
EP3474586B1 (en) Place-based information processing method and apparatus
CN102763404B (en) Camera, information acquiring system and program
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
CN111640171B (en) Historical scene explanation method and device, electronic equipment and storage medium
WO2016144507A1 (en) Apparatus and method for automatically generating an optically machine readable code for a captured image
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN111667590B (en) Interactive group photo method and device, electronic equipment and storage medium
CN115857704A (en) Exhibition system based on metauniverse, interaction method and electronic equipment
US20230072463A1 (en) Contact information presentation
CN108697934A (en) Guidance information related with target image
CN111639979A (en) Entertainment item recommendation method and device
TW202314535A (en) Data display method, computer device and computer-readable storage medium
WO2022262389A1 (en) Interaction method and apparatus, computer device and program product, storage medium
KR101620475B1 (en) server for seeking treasures games
CN111651049B (en) Interaction method, device, computer equipment and storage medium
CN111639977A (en) Information pushing method and device, computer equipment and storage medium
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium
US20230162433A1 (en) Information processing system, information processing method, and information processing program
CN108092950B (en) AR or MR social method based on position
CN113538703A (en) Data display method and device, computer equipment and storage medium
JP2015154218A (en) Server system, and program
JP2023075441A (en) Information processing system, information processing method and information processing program
CN113345110A (en) Special effect display method and device, electronic equipment and storage medium
CN114049467A (en) Display method, display device, display apparatus, storage medium, and program product
Rosenthal Revisioning the city: Public history and locative digital media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40051706

Country of ref document: HK