WO2022262389A1 - Interaction method and apparatus, computer device, program product, and storage medium - Google Patents

Interaction method and apparatus, computer device, program product, and storage medium

Info

Publication number
WO2022262389A1
Authority
WO
WIPO (PCT)
Prior art keywords
navigation
target
tour
record
special effect
Prior art date
Application number
PCT/CN2022/085944
Other languages
English (en)
French (fr)
Inventor
田真
李斌
欧华富
刘旭
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 filed Critical 上海商汤智能科技有限公司
Publication of WO2022262389A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services

Definitions

  • The present disclosure is based on, and claims priority to, Chinese patent application No. 202110681015.9, filed on June 18, 2021 and entitled "An interactive method, device, computer equipment and storage medium"; the entire content of that Chinese patent application is hereby incorporated into the present disclosure by reference.
  • the present disclosure relates to the field of computer technology, and in particular to an interactive method, device, computer equipment, program product, and storage medium.
  • QR codes or guide boards are usually set up at each scenic spot. A user can open the introduction page of a scenic spot by scanning the QR code set at the spot and learn about it through that page, or can directly read the introductory text on the guide board set at the spot to obtain the guide information of the current attraction and its related historical stories. In this way, the user mainly obtains guide information from the scenic spot in one direction, which offers little interactivity.
  • Embodiments of the present disclosure at least provide an interaction method, device, computer equipment, program product, and storage medium.
  • An embodiment of the present disclosure provides an interaction method, including: scanning a guide ticket and starting an augmented reality (AR) environment; upon detecting that the AR device arrives at a target navigation area corresponding to a target navigation task node among at least one navigation task node, displaying in the AR environment a target AR special effect corresponding to the target navigation area; acquiring operation result information on the target AR special effect; and, based on the operation result information, displaying in the AR environment the navigation check-in record corresponding to the target navigation task node.
  • In this way, interactivity can be improved; and by guiding the generation process of the check-in records, check-in records that are rich in content, more diverse, and personalized can be generated to meet users' needs.
  • An embodiment of the present disclosure also provides an interaction device, including: a starting module configured to scan a guide ticket and start an augmented reality AR environment; a display module configured to detect that the AR device arrives at a target in at least one guide task node The target navigation area corresponding to the navigation task node, displaying the target AR special effect corresponding to the target navigation area in the AR environment; the acquisition module is configured to obtain the operation result information on the target AR special effect; the generation module , configured to display, in the AR environment, a navigation check-in record corresponding to the target navigation task node based on the operation result information.
  • An embodiment of the present disclosure also provides a computer device, including a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the steps in the above implementation manners are performed.
  • An embodiment of the present disclosure also provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the above steps.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and the steps in the above implementation manners are executed when the computer program is executed.
  • FIG. 1 is a schematic diagram of an implementation flow of an interaction method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a navigation area in a target scene provided by an embodiment of the present disclosure
  • FIG. 3A is a schematic diagram of an application scenario of an AR special effect provided by an embodiment of the present disclosure
  • FIG. 3B is a schematic diagram of an application scenario of an AR special effect provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of an application scenario of an interactive image provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of the composition and structure of an interaction device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of the composition and structure of a computer device provided by an embodiment of the present disclosure.
  • QR codes or guide boards are usually set up at each scenic spot; the user can scan the QR code to open the introduction page of the scenic spot and learn about it through that page, or can directly read the introductory text on the guide board set at the scenic spot to obtain the current guide information and related historical stories of the spot.
  • In this way, the user mainly obtains guide information from the scenic spot in one direction, which offers little interactivity.
  • the present disclosure provides an interactive method.
  • In an augmented reality (AR) environment, the target AR special effect corresponding to a target navigation area can be shown to the user after the user reaches the target navigation area.
  • Users can perform relevant operations on the target AR special effect, and the AR device can display, in the AR environment, the tour check-in record corresponding to the target tour task node according to the operation result information on the target AR special effect, thereby improving interactivity. At the same time, by guiding the generation process of the check-in records, it is possible to generate check-in records that are rich in content, more diverse, and personalized, to meet users' needs.
  • The execution subject of the interaction method provided in the embodiments of the present disclosure is generally a computer device with certain computing capabilities, for example an AR device, a server, or another processing device. The AR device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable, and more.
  • the interaction method may be implemented by the processor invoking computer-readable instructions stored in the memory.
  • FIG. 1 is a flowchart of an interaction method provided by an embodiment of the present disclosure; the method includes steps S101 to S104, wherein:
  • S101: Scan the guide ticket and start the augmented reality (AR) environment;
  • S102: Upon detecting that the AR device arrives at a target navigation area corresponding to a target navigation task node in at least one navigation task node, display a target AR special effect corresponding to the target navigation area in the AR environment;
  • S103: Acquire operation result information on the target AR special effect;
  • S104: Based on the operation result information, display in the AR environment the navigation check-in record corresponding to the target navigation task node.
  • the interaction method provided by the embodiment of the present disclosure may be applied, for example, to a scene where a user visits a scenic spot.
  • the guide ticket may include, for example, a purchase voucher obtained by the user when purchasing the scenic spot ticket.
  • An identifiable two-dimensional code can be printed or pasted on the guide ticket; alternatively, the ticket can carry a specific image that can be recognized to activate the AR environment, such as a panoramic view of the scenic spot or a map of the scenic spot.
  • When the AR device scans the guide ticket, it can recognize the QR code or the specific image and start the AR environment.
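  • The ticket-scanning step above can be sketched as follows. This is an illustrative sketch only: the `ar-tour://` payload format and the `TourSession` class are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch: decide whether a scanned code should start the AR tour.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TourSession:
    scene_id: str
    started: bool = False
    task_nodes: List[str] = field(default_factory=list)

def start_ar_environment(scanned_payload: str) -> Optional[TourSession]:
    """Return a started AR tour session if the payload is a valid tour ticket."""
    prefix = "ar-tour://"          # assumed payload scheme
    if not scanned_payload.startswith(prefix):
        return None                # not a tour ticket; ignore the scan
    scene_id = scanned_payload[len(prefix):]
    return TourSession(scene_id=scene_id, started=True)
```

A non-ticket scan (an ordinary URL, say) simply returns `None`, matching the description that only a recognizable ticket code activates the environment.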
  • a two-dimensional code image that can activate an augmented reality AR environment can be provided at a designated location in the scenic spot, for example, a related two-dimensional code image is displayed on a guide board in the scenic spot.
  • a link to enter the AR environment may also be directly provided for the user, so that the AR device used by the user can open the AR environment with one click.
  • a handheld terminal device such as a mobile phone, or a tablet computer can be used;
  • For example, a mobile phone may be connected to a drone; the guide ticket is scanned by the drone, and the augmented reality AR environment is then activated on the mobile phone connected to it.
  • The AR environment is implemented through a World Wide Web (Web) terminal or a mini program deployed in the AR device.
  • After starting the AR environment, it may be detected that the AR device arrives at a target navigation area corresponding to a target navigation task node in at least one navigation task node, and the target AR special effect corresponding to the target navigation area is displayed in the AR environment.
  • the target AR special effect may include, for example, an AR special effect in a related navigation task; or, it may also include an AR special effect for recording relevant information in the navigation process for the user.
  • the interaction method further includes: generating a navigation task in response to a navigation event being triggered; wherein, the navigation task includes at least one of the Each of the navigation task nodes corresponds to a navigation area in the target scene.
  • the target scene may include, for example, scenic spots visited by the user.
  • AR navigation information in the AR environment can be displayed to the user, such as relevant text introduction information, or voice navigation information can also be provided to the user.
  • the user may also choose to trigger a navigation event, so as to choose to complete a related navigation task.
  • The navigation task, for example, may have associated task nodes with storylines. Under each task node, the user can complete the tasks corresponding to the storyline at that node (such as the beginning, development, climax, and ending of the storyline) to advance the storyline, so that the tour of the scenic spot is completed at the same time. The AR special effects related to different navigation task nodes are, for example, the AR special effects related to the task corresponding to each node.
  • The navigation task can also be a check-in task, for example, composed of multiple check-in task nodes. At different task nodes, the user needs to reach different navigation areas, trigger AR special effects there, take photos with those AR special effects, and obtain the corresponding tour check-in records.
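  • The check-in task structure described above can be sketched as follows; the class and field names are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of a check-in style tour task: each task node maps to a tour
# area, and a check-in record is produced once the user has reached the area
# and completed the photo interaction there.
class CheckInTask:
    def __init__(self, area_ids):
        self.area_ids = list(area_ids)   # one task node per tour area
        self.records = {}                # area_id -> check-in record

    def check_in(self, area_id, photo_id):
        if area_id not in self.area_ids:
            raise ValueError(f"unknown tour area: {area_id}")
        self.records[area_id] = {"area": area_id, "photo": photo_id}

    def completed(self):
        # the task is done when every node has a check-in record
        return all(a in self.records for a in self.area_ids)
```
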
  • the AR special effects can be, for example, various templates for taking pictures, and the form of the templates can be referred to below.
  • a navigation task is generated in response to a navigation event being triggered; wherein, the navigation task includes at least one navigation task node; each navigation task node corresponds to a node in the target scene A navigation area.
  • different navigation task nodes can also be set for different navigation areas.
  • The manner in which the navigation event is triggered includes, but is not limited to, at least one of the following (A1) or (A2):
  • At least one navigation path may be first determined for the user.
  • a plurality of areas may be predetermined as selectable navigation areas.
  • the multiple determined navigation areas can be used as multiple selectable navigation path nodes, and the navigation path can be determined from the multiple determined navigation path nodes.
  • The target scene may include, for example, three selectable tour areas: the amphibian and reptile tour area, the bird tour area, and the marine animal tour area. Therefore, when determining possible guide paths, the amphibian and reptile tour area, the bird tour area, and the marine animal tour area can each be used as a selectable guide path node, and guide paths can be determined through different permutations and combinations.
  • The determined guide paths include, for example, amphibian and reptile tour area → bird tour area → marine animal tour area, or amphibian and reptile tour area → marine animal tour area → bird tour area.
  • the navigation path may also be determined according to the relative positions among multiple selectable navigation areas.
  • For example, the marine animal tour area is located between the amphibian and reptile tour area and the bird tour area; therefore, when planning the guide route, it can be determined as amphibian and reptile tour area → marine animal tour area → bird tour area, so that the user does not have to double back across the scenic spot repeatedly, improving the user's tour experience.
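  • The position-based route ordering described above can be sketched as a greedy nearest-neighbour pass over 2-D area coordinates. This is one illustrative heuristic under assumed coordinates, not the patent's prescribed planning method.

```python
# Order tour areas so the user need not backtrack: repeatedly visit the
# closest unvisited area, starting from the current position.
from math import dist

def plan_route(start, areas):
    """areas: dict name -> (x, y). Returns a visiting order from `start`."""
    remaining = dict(areas)
    here, order = start, []
    while remaining:
        # pick the closest unvisited area next
        name = min(remaining, key=lambda n: dist(here, remaining[n]))
        order.append(name)
        here = remaining.pop(name)
    return order
```

With the marine area lying between the other two, the sketch reproduces the order given in the example above.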
  • the user's navigation route preferences can be determined, so as to provide a more suitable navigation route for the user.
  • the manner of determining the navigation route may be determined according to actual conditions, and no limitation is made here.
  • For a scenic spot occupying a large area, the number of navigation path nodes determined for it may also be relatively large. Therefore, while a user is visiting the scenic spot, a new navigation route can be dynamically planned according to the current location of the AR device and its movement behavior. That is to say, multiple navigation routes may be generated correspondingly as the user moves through the scenic spot. In this way, the limitation of a single navigation path is avoided, and navigation paths can be provided flexibly for users.
  • When detecting the location of the AR device, for example, the location may be determined directly by using the Global Positioning System (GPS).
  • When an AR device displays AR special effects in the AR environment, a pre-generated high-precision map can be used, and/or Simultaneous Localization and Mapping (SLAM) can be used, to determine the location of the AR device in the target scene. Then, according to the pose of the AR special effect in the target scene and the pose of the AR device, the display position of the AR special effect in the AR device is determined, and the AR special effect is displayed at that position.
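  • The pose step above can be sketched in simplified 2-D form: transform an effect's scene-anchored position into the device's local frame using the device's position and heading. Real AR stacks use full 6-DoF poses; this reduction to position plus yaw is an illustrative assumption.

```python
# Rotate the world-frame offset between the device and the effect into the
# device's local frame (x = forward along the device's heading).
from math import cos, sin

def effect_in_device_frame(device_pos, device_yaw, effect_pos):
    dx = effect_pos[0] - device_pos[0]
    dy = effect_pos[1] - device_pos[1]
    # rotating by -yaw maps world coordinates into device coordinates
    c, s = cos(-device_yaw), sin(-device_yaw)
    return (c * dx - s * dy, s * dx + c * dy)
```

For a device facing an effect one unit ahead, the effect lands on the device's forward axis regardless of the heading, which is what a renderer needs to place the overlay.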
  • the position offset between the position of the AR device and the last generated navigation path can be determined. If the position deviation is greater than the preset position deviation threshold, it indicates that the user has deviated from the planned navigation route, for example, the user walks to an unopened tourist area, or to a staff office area in a scenic spot, or to a Other attractions not on the planned route. At this point, a navigation task may be generated accordingly to help the user return to a normal navigation path.
  • the preset offset threshold may be determined according to, for example, the area occupied by the scenic spot, the number of navigable areas in the scenic spot, etc., and is not limited here.
  • a location offset threshold corresponding to a large value may be set, for example, 150 meters. In the case where the location of the AR device is offset from the last generated navigation path by more than 150 meters, a navigation task is generated.
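  • The deviation check above can be sketched as the distance from the device to the nearest point of the planned route, treated as a polyline of waypoints, compared against the 150-meter threshold. The helper names are illustrative assumptions.

```python
# Distance from the device to the planned route (polyline), with a threshold
# test that would trigger generating a guidance task.
from math import dist, hypot

def point_segment_distance(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0:
        return dist(p, a)                      # degenerate segment
    # clamp the projection of p onto the segment to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    return hypot(px - (ax + t * abx), py - (ay + t * aby))

def deviated(device_pos, route, threshold_m=150.0):
    d = min(point_segment_distance(device_pos, a, b)
            for a, b in zip(route, route[1:]))
    return d > threshold_m
```
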
  • the navigation task may be generated when it is determined that the AR device is located in the target scene.
  • the position of the AR device for example, it may be determined directly by using GPS.
  • a navigation task can be generated in response.
  • The corresponding tour task can also be generated immediately. This manner is more efficient for generating guide tasks.
  • a navigation task can be generated accordingly.
  • a target virtual identity corresponding to the AR device is determined from the plurality of candidate virtual identities; and the navigation task is generated based on the target virtual identity.
  • the first historical navigation check-in record may include: for example, the check-in record of the user and/or the AR device when historically navigating the target scene. For example, it may also include information on the navigation tasks experienced in the target scene, and the degree of completion of different navigation tasks.
  • User attribute information may include user information pre-input by the user when registering and/or using the AR tour.
  • information such as the user's age, gender, and preferences for navigation task types may be included.
  • In the case that the users include user A, her corresponding user attribute information may include, for example, "female", "25 years old", and "prefers history-type navigation tasks".
  • In the case that the target scene includes a zoo, the alternative virtual identities may include, for example, animal trainers and breeders; in the case that the target scene includes the former residence of a historical figure, the alternative virtual identities may include, for example, figures in the related historical events, such as a housekeeper, a visiting friend, the protagonist living in the former residence, and a student the protagonist taught.
  • For the selection information of the multiple alternative virtual identities, a virtual identity can be randomly selected for the user; or, the user can be provided with options for the multiple alternative virtual identities, and the selection information is determined in response to the user choosing any one of those options.
  • In addition, the first historical tour check-in record may be displayed to the user to determine whether the user wants to continue unfinished tour tasks. Or, in the case that the user has little historical tour record information, corresponding navigation tasks can be generated according to the user attribute information; for example, a "return to history"-type navigation task can be provided for user A. Or, target virtual identities can be recommended to the user according to the selection popularity of different virtual identities.
  • the manner of determining the virtual identity of the target may be determined according to actual conditions.
  • In this way, based on at least one of the first historical tour check-in record, the user attribute information, and the selection information of the multiple alternative virtual identities, the target virtual identity corresponding to the AR device is determined from the multiple alternative virtual identities, and the navigation task is generated based on the target virtual identity. By determining a target virtual identity for the user, the user can take a more immersive, storyline-driven tour when performing the tour task, so the user's tour experience is better.
  • setting different virtual identities for different users can also provide different users with interaction in the same task scenario, so it can also improve the interaction between different users.
  • the navigation task can also be generated correspondingly according to the target virtual identity.
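  • One illustrative way to combine the three signals mentioned above (an explicit user selection, attribute-based affinity, and historical selection popularity) when picking the target virtual identity is sketched below. The weights and field names are assumptions, not part of the disclosure.

```python
# Pick a target virtual identity: an explicit user selection wins outright;
# otherwise score candidates by attribute affinity plus a small popularity bonus.
def pick_identity(candidates, selected=None, affinity=None, popularity=None):
    affinity = affinity or {}      # candidate -> attribute-match score
    popularity = popularity or {}  # candidate -> times chosen historically
    if selected in candidates:
        return selected            # respect the user's own choice
    def score(c):
        return affinity.get(c, 0.0) + 0.1 * popularity.get(c, 0)
    return max(candidates, key=score)
```
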
  • The navigation task can be generated in the following way: selecting alternative navigation areas from the multiple navigation areas of the target scene; and generating the navigation task based on the target virtual identity, the AR special effect materials of the alternative navigation areas under the various alternative virtual identities, and the respective positions of the alternative navigation areas in the target scene.
  • the second historical tour check-in record may include, for example, the above-mentioned first historical tour check-in record, or may also include the check-in record in the current tour.
  • When determining an alternative navigation area from the multiple navigation areas of the target scene based on the second historical tour check-in record, for example, the navigation areas that the user has not yet reached in the current tour can be determined from that record and used as alternative navigation areas. Alternatively, based on the first historical tour check-in record contained in the second historical tour check-in record, a navigation area the user has visited multiple times can be identified; since the user's preference is considered to lean toward that area, it is used as an alternative navigation area.
  • After the alternative navigation areas are determined, the tour task can be generated based on the target virtual identity, the AR special effect materials of the alternative navigation areas under the various alternative virtual identities, and the respective positions of the alternative navigation areas in the target scene.
  • storylines corresponding to multiple different virtual identities may be set in advance for each candidate navigation area, and corresponding AR special effect materials may be determined for each virtual identity under different storylines.
  • the order of the navigation task nodes corresponding to each candidate navigation area in the generated navigation task may be determined in a manner similar to that of determining the navigation path described above.
  • FIG. 2 is a schematic diagram of a navigation area in a target scene provided by an embodiment of the present disclosure.
  • each navigation area can correspond to a task node in a navigation task, that is, a navigation task is composed of multiple task nodes, and the target scene shown in Figure 2 includes former residences of historical figures.
  • Four guide areas are included in the former residence of the historical figure: the office area Y1, the living area Y2, the reception area Y3, and the teaching area Y4.
  • two different guided tour tasks are provided, which can restore the corresponding historical storyline, for example.
  • two examples are listed in the embodiment of the present disclosure, including the following navigation task B1 and navigation task B2:
  • Navigation task B1 includes the navigation task determined for user B. As shown in Figure 2, the virtual identity of user B is the protagonist living in the former residence, and the alternative navigation areas determined for him include Y1, Y2, Y3, and Y4, which correspond respectively to the navigation task nodes M1_1, M1_2, M1_3, and M1_4 in the navigation task.
  • M1_1 includes: User B finishes correcting office documents in the office area Y1, and delivers the office documents with correction opinions to friends by the housekeeper.
  • M1_2 includes: User B receives daily newspapers and reads daily news in living area Y2; sends visit invitations to friends.
  • M1_3 includes: user B receives visiting friends in the reception area Y3, and communicates current affairs with visiting friends; sees off friends.
  • M1_4 includes: User B delivers a speech to the students in the lecture hall in the teaching area Y4, and answers the relevant questions raised by the students.
  • FIG. 2 it shows navigation task nodes corresponding to different navigation areas.
  • The order of the navigation task nodes can also be set as M1_1 → M1_2 → M1_3 → M1_4, that is, the direction indicated by the arrow in FIG. 2.
  • Navigation task B2 includes the navigation task determined for user C.
  • User C's virtual identity is a visiting friend, and the candidate navigation areas determined for him include Y1, Y3, and Y4, which respectively correspond to navigation task nodes M2_1, M2_2, and M2_3 in the navigation task.
  • M2_1 includes: user C visits the protagonist's office area Y1, and requests the housekeeper to hand over office documents.
  • M2_2 includes: user C communicates current affairs with the protagonist in the reception area Y3, and returns to the teaching area Y4 after the communication.
  • M2_3 includes: user C listens to the speech delivered by the protagonist in the teaching area Y4, and supplements the speech to the students.
  • navigation task B1 and navigation task B2 are only listed examples of navigation tasks, and may be determined according to actual conditions in an implementable manner.
  • the target AR special effect includes a task special effect corresponding to the AR interactive task, which may correspond to a specific AR special effect material.
  • The specific AR special effect materials may include, for example, AR special effect materials related to correcting office documents on a desk, or AR special effect materials related to the user picking up office documents and delivering them to the housekeeper.
  • the specific AR special effect materials may include, for example, AR special effect materials when the user is pacing, and AR special effect materials such as drinking tea when communicating with the protagonist about current events. In an implementable manner, it may also be determined according to actual conditions, which is not limited here.
  • multiple different virtual identities set under the same historical storyline may have associated navigation task nodes
  • When different users perform tasks corresponding to navigation task nodes with associated relationships, the tasks can also be completed through the interaction of the two users in the real-world scene.
  • For example, when user B executes M1_3 and user C executes M2_2, user B and user C can have a conversation in the real scene.
  • using this method can also improve the interaction between different users in the target scene.
  • In this way, alternative navigation areas are selected from the multiple navigation areas of the target scene, and the navigation task is generated based on the target virtual identity, the AR special effect materials of the alternative navigation areas under the various alternative virtual identities, and the respective positions of the alternative navigation areas in the target scene. Through the second historical tour check-in record, the alternative navigation areas can be determined for the user in a targeted manner, and the corresponding navigation task is thus customized for the user.
  • the target AR special effect when the target AR special effect includes an AR special effect for the user to record relevant information in the navigation process, the target AR special effect may include, for example, a target AR photographing template.
  • the AR photographing template may include AR special effects related to the navigation area, for example.
  • FIG. 3A it is a schematic diagram of an AR special effect provided by an embodiment of the present disclosure.
  • the related AR special effects may include, for example, virtual special effects such as file cabinets, pen holders, and inkstones.
  • the corresponding AR photographing template may include, for example, a photographing template corresponding to virtual special effects such as the listed file cabinet, pen holder, and inkstone.
  • FIG. 3A it is a schematic diagram of an AR photographing template provided by an embodiment of the present disclosure; it includes a virtual special effect 31 corresponding to a filing cabinet, and a virtual special effect 32 corresponding to a pen holder and an inkstone.
  • the relevant AR special effects may include, for example, virtual special effects such as a teapot boiling tea on a fire, and teacups placed on a tea table.
  • the corresponding AR photographing template may include, for example, a photographing template corresponding to a virtual special effect including a teapot, a teacup, and the like.
  • FIG. 3B it is a schematic diagram of another AR photographing template provided by an embodiment of the present disclosure; wherein, it includes a virtual special effect 33 corresponding to a teapot and a virtual special effect 34 corresponding to a teacup.
  • the photo template may also have different borders, or determine to display different special effects according to the user's selection. In an implementable manner, it may be determined according to actual requirements, and no limitation is made here.
  • For different target AR special effects, the obtainable operation result information differs.
  • The following describes, respectively, the operation result information for the case where the target AR special effect includes the target AR photographing template and the case where it includes the task special effect corresponding to an AR interactive task.
  • (C1): The target AR special effect includes the target AR photographing template.
  • In this case, the corresponding operation result information includes an AR special effect image.
  • When obtaining the operation result information of the target AR special effect, the following manner may be adopted: in response to a photographing operation triggered on the target AR photographing template, generating the AR special effect image including the target AR photographing template.
  • In this case, for example, a photographing trigger button may be provided to the user.
  • After the user triggers the button, the camera can capture an image, and the AR special effect corresponding to the target AR photographing template is superimposed on the captured image. In this way, an AR special effect image including the target AR photographing template can be generated.
  • In this way, the AR special effect image including the target AR photographing template is generated, and users can retain AR special effect images from the tour by taking pictures.
  • In another possible implementation, in response to a photographing operation triggered on the target AR photographing template, the camera captures a video within a preset time, and an AR special effect with a dynamic effect corresponding to the target AR photographing template is then superimposed on the video.
  • In this way, the obtained AR special effect image not only presents the target AR photographing template, but also shows dynamic action effects.
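  • The superimposition step described above — overlaying the template's effect pixels on a captured frame — can be sketched as a standard alpha ("over") composite. This is a minimal illustration of the general technique, not the patent's implementation; the pixel layout and function names are assumptions.

```python
def composite_over(base_px, overlay_px, alpha):
    """Blend one overlay pixel over one base pixel.

    base_px / overlay_px: (r, g, b) tuples with channels in 0..255.
    alpha: overlay opacity in 0.0..1.0 (1.0 = fully opaque template pixel).
    """
    return tuple(round(alpha * o + (1.0 - alpha) * b)
                 for o, b in zip(overlay_px, base_px))


def apply_template(frame, template):
    """Superimpose a photo-template layer on a captured camera frame.

    frame: 2-D list of (r, g, b) pixels from the camera.
    template: same shape, of (r, g, b, a) pixels; a == 0 means transparent,
              so the camera pixel shows through unchanged.
    Returns a new frame with the template composited on top.
    """
    out = []
    for frame_row, tmpl_row in zip(frame, template):
        row = []
        for base_px, (r, g, b, a) in zip(frame_row, tmpl_row):
            row.append(composite_over(base_px, (r, g, b), a / 255.0))
        out.append(row)
    return out
```

  Transparent template regions leave the captured photograph intact, so only the virtual special effects (e.g. the teapot 33 or filing cabinet 31) appear over the scene.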
  • When determining the target AR photographing template, it may also be determined from multiple candidate AR photographing templates based on a third historical tour check-in record.
  • The third historical tour check-in record may include, for example, historical records of the user and/or the AR device for the same navigation area.
  • Based on the third historical tour check-in record, on the one hand, the user's historical photographing situation in the navigation area, such as the number of historical photographs, can be determined; on the other hand, the user's degree of preference for the candidate AR photographing templates can also be determined.
  • Therefore, when determining the target AR photographing template, the AR photographing template commonly used by the user may be determined as the target; alternatively, an AR photographing template not yet used by the user may be chosen; alternatively, any one of the candidate AR photographing templates may be randomly pushed to the user; or the target AR photographing template for the current occasion may be determined according to the user's historical photographing count.
  • When determining the template according to the historical photographing count, a corresponding AR photographing template may be determined for each shot. For example, for the first shot, a gray static border is displayed in the AR photographing template; for the second shot, a white static border; for the third shot, a silver dynamic border; and for the fourth shot, a gold dynamic border.
  • In this way, the user's interest can be enhanced, and the user can be guided to continue using the AR photographing function to tour more navigation areas.
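  • The count-based border tiers above can be sketched as a small lookup, assuming a zero-based historical shot count (0 means the user is about to take their first photo here); the tier names follow the example in the text, while the function and field names are illustrative.

```python
def border_for_shot(shot_count):
    """Pick the photo-template border for the user's next shot,
    based on how many photos they have already taken in this area.

    shot_count: number of historical photographs (0-based).
    Tiers follow the worked example: gray/white static borders,
    then silver/gold dynamic borders from the third shot onward.
    """
    tiers = [
        ("gray", "static"),     # 1st shot
        ("white", "static"),    # 2nd shot
        ("silver", "dynamic"),  # 3rd shot
        ("gold", "dynamic"),    # 4th shot and beyond
    ]
    index = min(shot_count, len(tiers) - 1)  # clamp past the 4th shot
    color, style = tiers[index]
    return {"color": color, "style": style}
```

  Clamping at the top tier keeps a repeat visitor on the gold dynamic border rather than failing once the listed tiers run out; a real system might instead cycle or unlock further templates.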
  • (C2): The target AR special effect includes the task special effect corresponding to an AR interactive task.
  • In this case, the corresponding operation result information includes interactive operations.
  • When obtaining the operation result information of the target AR special effect, the following manner may be adopted: in response to at least one interactive operation triggered based on the task special effect, generating an interactive image corresponding to at least part of the interactive operations.
  • In this case, corresponding interactive operations can be set for the AR special effect.
  • For an AR special effect including a teapot boiling tea over a fire, the corresponding interactive operation may include, for example, removing the teapot and pouring tea into a teacup on the tea table.
  • For an AR special effect including a filing cabinet, the corresponding interactive operation may include, for example, taking a file out of the filing cabinet and opening it; after the file is opened, the text in the file may also be displayed for the user to read.
  • Referring to FIG. 4, it is a schematic diagram of an interactive image provided by an embodiment of the present disclosure, showing one frame of the interactive image in which the teapot 33 dynamically pours tea into the teacup 34.
  • After the operation result information is obtained, the tour check-in record corresponding to the target navigation task node may also be displayed in the AR environment.
  • In a practicable implementation, a tour check-in record corresponding to the target navigation task node can be generated based on the operation result information and text input information.
  • The text input information may include, for example, text the user enters at the task node, or text such as the user's mood during the tour and impressions of the tour.
  • In the above embodiment, a tour check-in record corresponding to the target navigation task node is generated based on the operation result information and the text input information.
  • In this way, by obtaining the tour check-in record, the user can also retain the mood records and reflections from the tour, improving the user experience.
  • When generating the tour check-in record corresponding to the target navigation task node, a record that can reproduce the user's tour process at that node can be generated directly from the operation result information. In this way, the user can look back at the tour process corresponding to the task node.
  • Alternatively, text input by the user may also be added to the tour check-in record to enrich it.
  • In another embodiment of the present disclosure, in response to the triggering of a tour end event, a total tour record of the current tour may be generated based on the tour check-in records corresponding to the at least one navigation task node.
  • The tour end event includes, but is not limited to, at least one of the following (D1)-(D3):
  • (D1): The AR device reaches a preset end-of-tour area. In this case, a corresponding end-of-tour area may be set in advance for the target scene.
  • Where the target scene includes at least one exit, the area where each exit is located may be used as the end-of-tour area. After it is determined, from the position of the AR device, that the device has reached the end-of-tour area, the user can be considered to have completed the tour of the target scene, and the tour can end. In this way, whether to end the tour can be determined with relatively simple judgment logic.
  • (D2): The end-tour control in the AR device is triggered. In this case, a corresponding end-tour control may be provided to the user.
  • The user can choose the time and place to end the tour and trigger the end-tour control accordingly.
  • The tour may then end directly in response to the user triggering the end-tour control.
  • (D3): The status of the navigation task containing the at least one navigation task node changes from unfinished to completed.
  • In this case, the user's completion status of each navigation task node can be recorded as the user performs the corresponding tasks. If, for the navigation task the user is currently performing, the status of each of its navigation task nodes has changed to completed, it can be determined that the user has finished the navigation task, and the tour can end.
  • In this way, through the choice of tour end events, the user can actively stop the tour, or the end of the tour can be determined from the user's status, which is more flexible.
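  • The three end-of-tour events (D1)-(D3) can be sketched as a single predicate, assuming 2-D positions, rectangular end-of-tour areas, and a per-node status map; the names and geometry are illustrative, not prescribed by the disclosure.

```python
def tour_should_end(device_pos, end_areas, end_control_triggered, task_node_status):
    """Return True if any of the three end-of-tour events (D1)-(D3) holds.

    device_pos: (x, y) location of the AR device.
    end_areas: list of axis-aligned rectangles (x_min, y_min, x_max, y_max)
               marking preset end-of-tour areas (e.g. the scene's exits).
    end_control_triggered: True if the user tapped the end-tour control.
    task_node_status: dict mapping node id -> "completed" / "unfinished".
    """
    x, y = device_pos
    # (D1) the device has reached a preset end-of-tour area
    in_end_area = any(x_min <= x <= x_max and y_min <= y <= y_max
                      for x_min, y_min, x_max, y_max in end_areas)
    # (D3) every navigation task node of the current task is completed
    all_done = bool(task_node_status) and all(
        s == "completed" for s in task_node_status.values())
    # (D2) the end-tour control was triggered
    return in_end_area or end_control_triggered or all_done
```

  Because the conditions are OR-ed, the user can stop actively via the control (D2) while the system can still end the tour passively from position (D1) or task progress (D3).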
  • After the tour is determined to have ended, a total tour record of the current tour may be generated based on the tour check-in records corresponding to the at least one navigation task node.
  • The total tour record may include, for example, an album composed of the tour check-in records corresponding to the respective navigation task nodes.
  • In this way, a record of the whole tour process can also be provided to the user.
  • When displaying the tour check-in record, the AR special effects can be displayed in combination with images captured by the AR device; for example, when the image captured by the AR device includes a tour object in the navigation area, the position of the tour object in the image can be determined from the captured image, and the display position of the AR special effect can be determined based on that position.
  • As another example, when the tour ticket is scanned, an image of the ticket is obtained; the position of the ticket in the image can be used as a reference to determine the display position of the AR special effect.
  • For example, the position of the ticket in the image determines a display plane or display space, and the position of that display plane or display space is the display position of the AR special effect.
  • The position of the AR device in the target scene can also be determined based on the images captured by the AR device and a pre-generated high-precision three-dimensional map of the target scene; the display position of the AR special effect is then determined from that position.
  • The AR special effect may then be rendered over the image captured by the AR device for display.
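  • One common way to realize the pose-based placement described above is to transform the special effect's anchor point from scene (world) coordinates into the device's camera frame, then project it onto the image with a pinhole camera model. The sketch below illustrates that general technique, not the patent's specific algorithm; the rotation/translation convention and intrinsics are assumptions.

```python
def project_to_screen(anchor_world, cam_rotation, cam_translation, focal, center):
    """Project a world-space anchor point of an AR effect into pixel coords.

    anchor_world: (x, y, z) position of the effect in the scene map.
    cam_rotation: 3x3 world-to-camera rotation matrix (row-major nested lists).
    cam_translation: camera position in world coordinates.
    focal: (fx, fy) focal lengths in pixels; center: (cx, cy) principal point.
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    # world -> camera: p_cam = R * (p_world - t)
    d = [anchor_world[i] - cam_translation[i] for i in range(3)]
    p_cam = [sum(cam_rotation[r][c] * d[c] for c in range(3)) for r in range(3)]
    if p_cam[2] <= 0:          # behind the camera: nothing to display
        return None
    # pinhole projection onto the image plane
    u = focal[0] * p_cam[0] / p_cam[2] + center[0]
    v = focal[1] * p_cam[1] / p_cam[2] + center[1]
    return (u, v)
```

  The camera pose here would come from the high-precision map and/or SLAM as described in the text; the returned (u, v) is where the effect is rendered over the captured frame.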
  • In response to the triggering of an information sharing event, sharing information including the tour check-in record may be generated and sent to the information publishing platform corresponding to the information sharing event.
  • In this way, the sharing information corresponding to the user can also be browsed by other users, which improves the interactivity between users.
  • In a practicable implementation, the sharing information includes an access link to the tour check-in record.
  • When generating the sharing information, the following manner may be adopted: sending the tour check-in record to the server corresponding to the AR environment, and receiving the access link generated by the server based on the tour check-in record.
  • In this case, the determined tour check-in record may be sent to the server corresponding to the AR environment.
  • After receiving the tour check-in record, the server may, for example, add the user's related information, such as the user name, user ID, and user avatar, and determine the corresponding access link.
  • The server may then send the access link to the AR device.
  • Through the access link, the AR device and other users' devices can view the user's tour check-in records and related information.
  • In this way, tour information can be shared among multiple users, with stronger interactivity.
  • Other users can also comment on, like, or forward the shared information in the access link, which can further improve the interaction between different users.
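  • The client/server round trip described above can be sketched as two functions: the client publishes a check-in record, the server enriches it with user info, stores it, and returns an access link that any device can later resolve. The URL scheme, field names, and key derivation are invented for illustration only.

```python
import hashlib
import json


def publish_check_in(record, user, server_store):
    """Send a tour check-in record to the 'server' and get back an access link.

    The server attaches the user's name/ID/avatar, persists the enriched
    record under a key, and returns a link the AR device can share.
    server_store: dict standing in for the server's storage.
    """
    enriched = dict(record, user_name=user["name"],
                    user_id=user["id"], avatar=user["avatar"])
    # a stable ID derived from the record contents stands in for a server-side key
    key = hashlib.sha256(
        json.dumps(enriched, sort_keys=True).encode()).hexdigest()[:12]
    server_store[key] = enriched                  # server persists the record
    return f"https://example.com/checkin/{key}"   # access link sent to the device


def view_shared(link, server_store):
    """Resolve a shared access link to the check-in record and user info."""
    key = link.rsplit("/", 1)[-1]
    return server_store.get(key)
```

  Any other user's device that receives the link can call the resolving endpoint, which is what enables the browsing, commenting, and forwarding interactions described above.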
  • The embodiments of the present disclosure also provide an interaction device corresponding to the interaction method. Since the problem-solving principle of the device is similar to that of the interaction method described above, the implementation of the device can refer to the implementation of the method.
  • Referring to FIG. 5, it is a schematic structural diagram of an interaction device provided by an embodiment of the present disclosure.
  • The device includes a startup module 51, a display module 52, an acquisition module 53, and a generation module 54, wherein:
  • the startup module 51 is configured to start the augmented reality (AR) environment upon scanning a tour ticket;
  • the display module 52 is configured to, upon detecting that the AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, display the target AR special effect corresponding to the target navigation area in the AR environment;
  • the acquisition module 53 is configured to obtain operation result information on the target AR special effect;
  • the generation module 54 is configured to display, based on the operation result information, the tour check-in record corresponding to the target navigation task node in the AR environment.
  • In a practicable implementation, the interaction device further includes a first processing module 55 configured to generate a navigation task in response to a navigation event being triggered, wherein the navigation task includes at least one navigation task node, and each navigation task node corresponds to a navigation area in the target scene.
  • The triggering of the navigation event includes at least one of the following: detecting that the position deviation between the location of the AR device and the most recently generated navigation path is greater than a preset position deviation threshold; or determining, based on the position of the AR device, that the AR device is located in the target scene.
  • In a practicable implementation, the first processing module 55 is configured to: determine, based on at least one of a first historical tour check-in record, user attribute information, and selection information for multiple candidate virtual identities, a target virtual identity corresponding to the AR device from the multiple candidate virtual identities; and generate the navigation task based on the target virtual identity.
  • When generating the navigation task based on the target virtual identity, the first processing module 55 is configured to: select candidate navigation areas from the multiple navigation areas of the target scene based on a second historical tour check-in record; and generate the navigation task based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene.
  • In a practicable implementation, the target AR special effect includes: a target AR photographing template;
  • the operation result information includes: an AR special effect image;
  • and the acquisition module 53, when obtaining the operation result information of the target AR special effect, is configured to generate the AR special effect image including the target AR photographing template in response to a photographing operation triggered on the target AR photographing template.
  • In a practicable implementation, the interaction device further includes a second processing module 56 configured to determine the target AR photographing template from multiple candidate AR photographing templates based on a third historical tour check-in record.
  • In a practicable implementation, the target AR special effect includes: a task special effect corresponding to an AR interactive task;
  • the operation result information includes: an interactive operation;
  • and the acquisition module 53, when obtaining the operation result information of the target AR special effect, is configured to generate an interactive image corresponding to at least some of the interactive operations in response to at least one interactive operation triggered based on the task special effect.
  • In a practicable implementation, the generation module 54 is configured to generate the tour check-in record corresponding to the target navigation task node based on the operation result information and text input information.
  • In a practicable implementation, the interaction device further includes a third processing module 57 configured to, in response to the triggering of a tour end event, generate a total tour record of the current tour based on the tour check-in records corresponding to the at least one navigation task node.
  • The tour end event includes at least one of the following: the AR device reaches a preset end-of-tour area; the end-tour control in the AR device is triggered; or the status of the navigation task containing the at least one navigation task node changes from unfinished to completed.
  • In a practicable implementation, the interaction device further includes a fourth processing module 58 configured to, in response to the triggering of an information sharing event, generate sharing information including the tour check-in record and send the sharing information to the information publishing platform corresponding to the information sharing event.
  • The sharing information includes an access link to the tour check-in record; when generating the sharing information including the tour check-in record, the fourth processing module 58 is configured to: send the tour check-in record to the server corresponding to the AR environment; and receive the access link generated by the server based on the tour check-in record.
  • the AR environment is implemented through a web terminal or an applet deployed in the AR device.
  • The embodiments of the present disclosure also provide a computer device. As shown in FIG. 6, which is a schematic diagram of the composition and structure of the computer device provided by an embodiment of the present disclosure, the device includes:
  • a processor 10 and a memory 20; the memory 20 stores machine-readable instructions executable by the processor 10, and the processor 10 is configured to execute the machine-readable instructions stored in the memory 20; when the machine-readable instructions are executed by the processor 10, the processor 10 performs the steps of the interaction method described above.
  • The above memory 20 includes an internal memory 210 and an external memory 220; the internal memory 210 is configured to temporarily store computing data of the processor 10 and data exchanged with the external memory 220, such as a hard disk; the processor 10 exchanges data with the external memory 220 through the internal memory 210.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the interaction method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure also provide a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the interaction method described in the above method embodiments, to which reference may be made.
  • the above-mentioned computer program product may be realized by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • If the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • In the embodiments of the present disclosure, by starting the augmented reality (AR) environment, the target AR special effect corresponding to the target navigation area can be shown to the user after the user reaches that area; the user can perform relevant operations on the target AR special effect, and the AR device can display the tour check-in record corresponding to the target navigation task node in the AR environment according to the operation result information of the target AR special effect, thereby improving interactivity. At the same time, the process of generating tour check-in records can produce rich, diverse, and personalized check-in records, meeting users' needs.

Abstract

The embodiments of the present disclosure disclose an interaction method, an apparatus, a computer device, a program product, and a storage medium. The method includes: starting an augmented reality (AR) environment upon scanning a tour ticket; upon detecting that an AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, displaying a target AR special effect corresponding to the target navigation area in the AR environment; obtaining operation result information on the target AR special effect; and, based on the operation result information, displaying a tour check-in record corresponding to the target navigation task node in the AR environment.

Description

Interaction Method, Apparatus, Computer Device, Program Product, and Storage Medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 202110681015.9, filed on June 18, 2021 and entitled "Interaction Method, Apparatus, Computer Device, and Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to an interaction method, an apparatus, a computer device, a program product, and a storage medium.
Background
To make it convenient for users to tour a scenic area, QR codes or guide boards are usually placed at the various attractions. A user can scan the QR code at an attraction to open an introduction page and learn about the attraction, or directly read the introductory text on the guide board to obtain the tour information and related historical stories of the current attraction. In this approach, the user mainly obtains tour information from the scenic area one-way, and the interactivity is poor.
Summary
The embodiments of the present disclosure provide at least an interaction method, an apparatus, a computer device, a program product, and a storage medium.
An embodiment of the present disclosure provides an interaction method, including: starting an augmented reality (AR) environment upon scanning a tour ticket; upon detecting that an AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, displaying a target AR special effect corresponding to the target navigation area in the AR environment; obtaining operation result information on the target AR special effect; and, based on the operation result information, displaying a tour check-in record corresponding to the target navigation task node in the AR environment. In this way, interactivity can be improved, and through the generation of tour check-in records, rich, diverse, and personalized check-in records can be produced, meeting users' needs.
An embodiment of the present disclosure further provides an interaction apparatus, including: a startup module, configured to start an augmented reality (AR) environment upon scanning a tour ticket; a display module, configured to, upon detecting that an AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, display a target AR special effect corresponding to the target navigation area in the AR environment; an acquisition module, configured to obtain operation result information on the target AR special effect; and a generation module, configured to display, based on the operation result information, a tour check-in record corresponding to the target navigation task node in the AR environment.
An embodiment of the present disclosure further provides a computer device, including a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps of the above implementation are performed.
An embodiment of the present disclosure further provides a computer program product, including a non-transitory computer-readable storage medium storing a computer program; the computer program is operable to cause a computer to execute some or all of the steps described in the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, the steps of the above implementation are executed.
For descriptions of the effects of the above interaction apparatus, computer device, and computer-readable storage medium, reference is made to the description of the above interaction method.
The beneficial effects of the technical solutions provided by the embodiments of the present disclosure include at least those described above.
To make the above objects, features, and advantages of the present disclosure more comprehensible, some exemplary embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an interaction method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of navigation areas in a target scene provided by an embodiment of the present disclosure;
FIG. 3A is a schematic diagram of an application scene of an AR special effect provided by an embodiment of the present disclosure;
FIG. 3B is a schematic diagram of an application scene of an AR special effect provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an application scene of an interactive image provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an interaction apparatus provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments described and shown here may generally be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
It has been found through research that, to make it convenient for users to tour a scenic area, QR codes or guide boards are usually placed at the various attractions; a user can scan the QR code to open an introduction page and learn about the attraction, or directly read the introductory text on the guide board to obtain the tour information and related historical stories of the current attraction. In this approach, the user mainly obtains tour information from the scenic area one-way, and the interactivity is poor.
In addition, users often like to "check in" at attractions by taking photos and sharing them to information publishing platforms. This kind of check-in requires users to edit the content themselves and create it on their own; the operation is cumbersome, and the shared content is often monotonous and cannot meet users' needs.
Based on the above research, the present disclosure provides an interaction method. By starting an augmented reality (AR) environment, the target AR special effect corresponding to a target navigation area can be shown to the user after the user reaches that area; the user can perform relevant operations on the target AR special effect, and the AR device can display the tour check-in record corresponding to the target navigation task node in the AR environment according to the operation result information, thereby improving interactivity. At the same time, through the generation of tour check-in records, rich, diverse, and personalized check-in records can be produced, meeting users' needs.
In addition, by setting navigation task nodes and assigning virtual identities to users, different task plots can be set in different navigation areas for users with virtual identities to experience. Compared with learning tour information from text, a user can perform task nodes under a virtual identity to learn the tour information, which gives a stronger sense of role immersion and a better immersive experience.
The defects of the above solutions are all results obtained by the inventors after practice and careful study; therefore, the discovery process of the above problems and the solutions proposed below in the present disclosure should all be regarded as the inventors' contributions to the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
To facilitate understanding of this embodiment, an interaction method disclosed by an embodiment of the present disclosure is first introduced in detail. The execution subject of the interaction method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example an AR device, a server, or another processing device. The AR device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the interaction method may be implemented by a processor calling computer-readable instructions stored in a memory.
The interaction method provided by the embodiments of the present disclosure is described below.
Referring to FIG. 1, which is a flowchart of an interaction method provided by an embodiment of the present disclosure, the method includes steps S101 to S104:
S101: Start an augmented reality (AR) environment upon scanning a tour ticket.
S102: Upon detecting that an AR device reaches a target navigation area corresponding to a target navigation task node among at least one navigation task node, display a target AR special effect corresponding to the target navigation area in the AR environment.
S103: Obtain operation result information on the target AR special effect.
S104: Based on the operation result information, display a tour check-in record corresponding to the target navigation task node in the AR environment.
S101 to S104 are described in detail below.
Regarding S101, the interaction method provided by the embodiments of the present disclosure may be applied, for example, to a scenario where a user tours a scenic area. In one possible case, the tour ticket may include, for example, the purchase voucher the user obtains when buying a ticket to the scenic area. The tour ticket may, for example, have a recognizable QR code printed or pasted on it, or may include a specific image that can be recognized to start the AR environment, such as a panorama or map of the scenic area. After scanning the tour ticket, the AR device can recognize the QR code or the specific image and start the AR environment.
In another possible case, a QR code image that can start the AR environment may be provided at a designated location in the scenic area, for example displayed on a guide board. In yet another possible case, a link for entering the AR environment may be provided directly to the user, so that the user's AR device can open the AR environment with one tap.
When scanning the tour ticket, a handheld terminal device such as a mobile phone or a tablet computer may be used; alternatively, a processing device connected to an image acquisition device may scan the ticket, for example a mobile phone connected to a drone, where the drone scans the tour ticket and the connected mobile phone starts the AR environment. The AR environment is implemented through a World Wide Web (web) terminal or an applet deployed in the AR device.
Regarding S102, after the AR environment is started, it can further be detected that the AR device reaches the target navigation area corresponding to the target navigation task node among the at least one navigation task node, and the target AR special effect corresponding to the target navigation area is displayed in the AR environment.
In a practicable implementation, the target AR special effect may include, for example, an AR special effect in a related navigation task, or an AR special effect for the user to record relevant information during the tour.
Where the target AR special effect includes an AR special effect in a related navigation task, the interaction method further includes: generating a navigation task in response to a navigation event being triggered, wherein the navigation task includes at least one navigation task node, and each navigation task node corresponds to a navigation area in the target scene.
The target scene may include, for example, a scenic area the user is touring. During the tour, AR tour information in the AR environment, such as related text introductions, can be shown to the user, or voice navigation information can be provided. In another possible case, the user may also choose to trigger a navigation event in order to complete related navigation tasks. A navigation task may, for example, consist of associated task nodes with a storyline. At each task node, the user can complete the tasks corresponding to the related plot at that node (for example, the beginning, development, climax, and ending of the storyline) to advance the plot, thereby completing the tour of the scenic area synchronously; the AR special effects of different navigation task nodes are, for example, AR special effects related to the tasks corresponding to those nodes.
In addition, the navigation task may also be a check-in task composed of multiple check-in task nodes. At different task nodes, the user needs to reach different navigation areas, trigger the AR special effects corresponding to those areas, and use the AR special effects to trigger photographing to obtain corresponding tour check-in records. Here, the AR special effects may be, for example, various templates for taking photos; the forms of the templates are described below.
In the above embodiment, a navigation task is generated in response to a navigation event being triggered, the navigation task including at least one navigation task node, each node corresponding to a navigation area in the target scene. In this way, different navigation task nodes can be set for different navigation areas; after triggering the navigation event, the user can complete the tour of the scenic area by completing the navigation task nodes, which is more participatory and provides a more immersive user experience.
In a practicable implementation, the navigation event being triggered includes, but is not limited to, at least one of the following (A1) and (A2):
(A1): It is detected that the position deviation between the location of the AR device and the most recently generated navigation path is greater than a preset position deviation threshold.
In this case, when it is detected that the user enters the target scene, at least one navigation path may first be determined for the user.
In a practicable implementation, multiple areas may be predetermined as selectable navigation areas for the target scene. The determined navigation areas can serve as selectable navigation path nodes, from which a navigation path can be determined.
For example, where a zoo serves as the target scene, it may include three selectable navigation areas, such as an amphibian and reptile area, a bird area, and a marine animal area. When determining possible navigation paths, these three areas can each serve as selectable navigation path nodes, and navigation paths can be determined through different permutations; a determined navigation path may, for example, be amphibian and reptile area, bird area, marine animal area, or amphibian and reptile area, marine animal area, bird area.
Alternatively, the navigation path may be determined according to the relative positions of the selectable navigation areas. For example, if the marine animal area lies between the amphibian and reptile area and the bird area, the route can be planned as amphibian and reptile area, marine animal area, bird area, so that the user does not have to double back repeatedly in the scenic area, improving the tour experience. Alternatively, the user's navigation path preferences may be determined from the user's historical tour records, so as to provide a more suitable path in a targeted manner.
Here, the way of determining the navigation path can be determined according to the actual situation and is not limited.
In addition, for a scenic area with a large footprint, or one with many tourable areas, the number of navigation path nodes determined for it may correspondingly be large. Therefore, while the user tours the scenic area, a new navigation path can be dynamically planned according to the current position of the AR device and its movement. That is, multiple navigation paths can be generated as the user moves through the scenic area. In this way, the limitation of a single navigation path can be removed, and navigation paths can be provided flexibly.
In the embodiments of the present disclosure, the location of the AR device may be determined, for example, directly using the Global Positioning System (GPS). Compared with simultaneous localization and mapping (SLAM), this approach is simpler, consumes less computing power of the AR device, and is easier to deploy on lightweight devices such as mobile phones.
In addition, when the AR device displays AR special effects in the AR environment, a pre-generated high-precision map may be used, and/or simultaneous localization and mapping (SLAM) may be used, to determine the pose of the AR device in the target scene; the display position of the AR special effect on the AR device is then determined from the pose of the AR special effect in the target scene and the pose of the AR device, and the AR special effect is displayed at that position.
After the location of the AR device is determined, the position deviation between that location and the most recently generated navigation path can be determined. If the deviation is greater than the preset position deviation threshold, the user has deviated from the planned navigation path, for example walking toward an unopened area, a staff office area, or another attraction not on the planned path. At this point, a navigation task can be generated accordingly to help the user return to the normal navigation path.
The preset deviation threshold may be determined, for example, according to the footprint of the scenic area, the number of tourable areas, and the like, and is not limited here. Illustratively, for a scenic area with a large footprint, a larger position deviation threshold, such as 150 meters, may be set. When the position deviation between the AR device and the most recently generated navigation path exceeds 150 meters, a navigation task is generated.
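The deviation check in (A1) can be sketched as the smallest distance from the GPS-derived device position to the planned path, treated as a polyline of waypoints on a local planar frame in meters; this is an illustrative sketch of one plausible geometry, not the disclosure's prescribed computation.

```python
import math


def deviation_from_path(device_pos, path_points):
    """Smallest distance from the AR device to the polyline navigation path.

    device_pos: (x, y) GPS-derived position projected onto a local plane, in meters.
    path_points: ordered waypoints [(x, y), ...] of the most recently generated path.
    """
    def seg_dist(p, a, b):
        # distance from point p to the segment a-b
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        # clamp the projection parameter to stay on the segment
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    return min(seg_dist(device_pos, a, b)
               for a, b in zip(path_points, path_points[1:]))


def should_generate_task(device_pos, path_points, threshold_m=150.0):
    """Trigger event (A1): the device drifted farther than the preset
    threshold (e.g. 150 m for a large scenic area) from the planned path."""
    return deviation_from_path(device_pos, path_points) > threshold_m
```

When the predicate fires, a new navigation task can be generated to guide the user back toward the planned route, as described above.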
(A2): It is determined, based on the location of the AR device, that the AR device is within the target scene.
In this case, a navigation task can be generated as soon as the AR device is determined to be within the target scene. The location of the AR device may, for example, also be determined directly using GPS.
In this way, for a scenic area with multiple entrances, a navigation task can be generated in response to the user entering from any entrance. In addition, for a user already touring the scenic area, a corresponding navigation task can be generated immediately after the AR environment is started on the AR device. This approach is more efficient for generating navigation tasks.
In the above embodiments, through the two triggering modes described in (A1) and (A2), navigation routes can be planned for the user dynamically, according to the user's tour state during the tour, which is more flexible.
Here, the two triggering modes described in (A1) and (A2) are only two examples and do not limit how a navigation event is determined to be triggered. In a practicable implementation, other ways of determining that a navigation event is triggered may also be used.
After it is determined that a navigation event has been triggered, a navigation task can be generated accordingly.
In a practicable implementation, the navigation task may be generated in the following manner: based on at least one of a first historical tour check-in record, user attribute information, and selection information for multiple candidate virtual identities, determining a target virtual identity corresponding to the AR device from the multiple candidate virtual identities; and generating the navigation task based on the target virtual identity.
The first historical tour check-in record may include, for example, check-in records of the user and/or the AR device from historical tours of the target scene, and may also include information on navigation tasks experienced in the target scene and the completion degree of different navigation tasks.
The user attribute information may include, for example, user information entered in advance when the user registers for and/or uses the AR tour, such as the user's age, gender, and preferences regarding types of navigation tasks. Illustratively, for a user A, the corresponding user attribute information may include "female", "25 years old", and "prefers history-type navigation tasks".
The multiple candidate virtual identities may be determined, for example, according to the target scene. Illustratively, where the target scene includes a zoo, the candidate virtual identities may include an animal trainer and a keeper; where the target scene includes a museum, they may include a cultural relic restorer, a tour guide, and related historical figures; where the target scene includes the former residence of a historical figure, they may include historical figures from historical events, such as the housekeeper, a visiting friend, the protagonist who lived in the residence, and a student taught by the protagonist. Under different target scenes, a corresponding virtual identity may be randomly selected for the user to determine the selection information for the candidate virtual identities; alternatively, options for selecting among the candidate virtual identities may be provided to the user, and the selection information is determined in response to the user's choice among them.
In a practicable implementation, when determining the target virtual identity corresponding to the AR device from the candidate virtual identities, the identity specified by the user in the selection information may be preferred. If the user does not specify a virtual identity, the first historical tour check-in record may be shown to the user to determine whether the user will continue an unfinished navigation task. Alternatively, when the user has little historical tour record information, a corresponding navigation task may be generated according to the user attribute information; for example, a "back to history" type navigation task can be provided for user A. Alternatively, a target virtual identity may be recommended to the user according to the selection popularity of different virtual identities. The way of determining the target virtual identity can be determined according to the actual situation.
In the above embodiment, a target virtual identity corresponding to the AR device is determined from the candidate virtual identities based on at least one of the first historical tour check-in record, the user attribute information, and the selection information for the candidate virtual identities; the navigation task is generated based on the target virtual identity. In this way, by determining a target virtual identity for the user, the user can conduct an immersive, story-driven tour while performing the navigation task, giving a better tour experience. Meanwhile, setting different virtual identities for different users can also provide interaction among different users within the same task scenario, thereby improving interaction between users.
After the target virtual identity is determined, the navigation task can be generated accordingly. In a practicable implementation, the navigation task may be generated in the following manner: selecting candidate navigation areas from the multiple navigation areas of the target scene based on a second historical tour check-in record; and generating the navigation task based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene.
The second historical tour check-in record may include, for example, the above first historical tour check-in record, or check-in records from the current tour.
When selecting candidate navigation areas from the multiple navigation areas of the target scene based on the second historical tour check-in record, for example, the navigation areas the user has not yet reached in the current tour can be determined from the current check-in records in the second historical tour check-in record, and the unreached areas are used as candidate navigation areas. Alternatively, based on the first historical tour check-in record contained in the second historical tour check-in record, navigation areas the user has reached many times can be determined; the user can be considered to prefer touring those areas, which are then used as candidate navigation areas.
After the candidate navigation areas are determined, the navigation task can be generated based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene.
Illustratively, storylines corresponding to multiple different virtual identities may be set in advance for each candidate navigation area, and corresponding AR special effect materials may be determined for each virtual identity under the different storylines. After multiple candidate navigation areas are determined, the order of the navigation task nodes corresponding to the candidate areas in the generated navigation task can be determined in a manner similar to determining the navigation path described above.
In another embodiment of the present disclosure, an example of a navigation task is provided. FIG. 2 is a schematic diagram of navigation areas in a target scene provided by an embodiment of the present disclosure. Referring to FIG. 2, each navigation area can correspond to one task node in a navigation task; that is, multiple task nodes constitute one navigation task. The target scene shown in FIG. 2 includes the former residence of a historical figure, with four navigation areas: the office area Y1, the living area Y2, the reception area Y3, and the teaching area Y4 preserved in the residence.
In this example, two different navigation tasks are provided; such a navigation task may, for example, reproduce a corresponding historical storyline. Illustratively, two examples are listed in the embodiments of the present disclosure, navigation task B1 and navigation task B2:
Navigation task B1: a navigation task determined for user B. Referring to FIG. 2, user B's virtual identity is the protagonist who lived in the residence; the candidate navigation areas determined for user B include Y1, Y2, Y3, and Y4, corresponding respectively to navigation task nodes M1_1, M1_2, M1_3, and M1_4.
M1_1: user B finishes reviewing office documents in the office area Y1 and hands the annotated documents to the housekeeper for delivery to a friend.
M1_2: user B collects the day's newspapers in the living area Y2, reads the day's news, and sends a visit invitation to the friend.
M1_3: user B receives the visiting friend in the reception area Y3, discusses current affairs with the friend, and sees the friend off.
M1_4: user B gives a lecture to students in the lecture hall of the teaching area Y4 and answers related questions raised by the students.
FIG. 2 shows the navigation task nodes corresponding to the different navigation areas. In addition, in navigation task B1, the order of the navigation task nodes can be set as M1_1 → M1_2 → M1_3 → M1_4, that is, the direction indicated by the arrows in FIG. 2.
Navigation task B2: a navigation task determined for user C. User C's virtual identity is the visiting friend; the candidate navigation areas determined for user C include Y1, Y3, and Y4, corresponding respectively to navigation task nodes M2_1, M2_2, and M2_3.
M2_1: user C visits the protagonist's office area Y1 and asks the housekeeper to pass on office documents.
M2_2: user C discusses current affairs with the protagonist in the reception area Y3 and returns to the teaching area Y4 after the conversation.
M2_3: user C listens to the protagonist's lecture in the teaching area Y4 and gives a supplementary talk to the students.
Here, navigation tasks B1 and B2 are only listed examples; in a practicable implementation, they can be determined according to the actual situation.
For each navigation task node, the target AR special effect includes a task special effect corresponding to an AR interactive task, which may correspond to specific AR special effect materials.
Illustratively, for navigation task node M1_1, the specific AR special effect materials may include materials for reviewing office documents at the desk, and materials for the user picking up the documents by hand and handing them to the housekeeper. For task node M2_2, the specific materials may include materials for the user pacing, and materials such as drinking tea while discussing current affairs with the protagonist. In a practicable implementation, they can be determined according to the actual situation and are not limited here.
Here, since multiple different virtual identities set under the same historical storyline can have associated navigation task nodes, when different users perform tasks corresponding to associated navigation task nodes, the tasks can also be completed through interaction between the two users in the real scene. For example, for user B and user C in the above example, when user B performs M1_3 and user C performs M2_2, user B and user C can hold a conversation in the real scene. In this way, in addition to a better immersive experience while each user experiences their own navigation task, this approach can also enhance the interactivity among different users in the target scene.
In the above embodiment, candidate navigation areas are selected from the multiple navigation areas of the target scene based on the second historical tour check-in record; the navigation task is generated based on the target virtual identity, the AR special effect materials of the candidate navigation areas under the various candidate virtual identities, and the positions of the candidate navigation areas in the target scene. In this way, candidate navigation areas can be determined for the user in a targeted manner using the second historical tour check-in record, and a corresponding navigation task can be generated and customized for the user.
In another embodiment of the present disclosure, where the target AR special effect includes an AR special effect for the user to record relevant information during the tour, the target AR special effect may include, for example, a target AR photographing template.
Illustratively, the AR photographing template may include AR special effects related to the navigation area. Referring to FIG. 3A, which is a schematic diagram of an AR special effect provided by an embodiment of the present disclosure: where the navigation area includes an office area, the related AR special effects may include virtual special effects such as a filing cabinet, a pen holder, and an inkstone. In this case, the corresponding AR photographing template may include a photographing template containing the listed virtual special effects; FIG. 3A shows such an AR photographing template, including a virtual special effect 31 corresponding to the filing cabinet and a virtual special effect 32 corresponding to the pen holder and the inkstone.
Where the navigation area includes a living area, the related AR special effects may include virtual special effects such as a teapot boiling tea over a fire and teacups placed on a tea table. In this case, the corresponding AR photographing template may include a photographing template containing virtual special effects such as the teapot and teacups. Referring to FIG. 3B, which is a schematic diagram of another AR photographing template provided by an embodiment of the present disclosure, it includes a virtual special effect 33 corresponding to the teapot and a virtual special effect 34 corresponding to the teacup.
In addition, the photographing template may also have different borders, or display different special effects according to the user's selection. In a practicable implementation, this can be determined according to actual requirements and is not limited here.
针对上述S103,在获取对目标AR特效的操作结果信息的情况下,针对不同的目标AR特效,可以获取的操作结果信息不同。
下面,分别对目标AR特效图像包括目标AR拍照模板、以及AR互动任务对应的任务特效对应的操作结果信息进行说明。
(C1):目标AR特效包括目标AR拍照模板。
在该种情况下,对应的操作结果信息包括AR特效图像。
在获取对所述目标AR特效的操作结果信息的情况下,例如可以采用下述方式:响应于对所述目标AR拍照模板的触发拍照操作,生成包括所述目标AR拍照模板的所述AR特效图像。
在该种情况下,例如可以向用户提供拍照触发按钮。用户在触发该拍照触发按钮后,例如可以由摄像头拍摄一张图像,并在拍摄得到的该张图像上叠加与目标AR拍照模板对应的AR特效。这样,即可以生成包括目标AR拍照模板的AR特效图像。
在上述实施例中,响应于对所述目标AR拍照模板的触发拍照操作,生成包括所述目标AR拍照模板的所述AR特效图像。这样,用户还可以通过拍照的方式保留导览过程中的AR特效图像。
在另一种可能的实施方式中,例如还可以响应于对目标AR拍照模板的触发拍照操作,由摄像头拍摄预设时间内的视频,然后在该视频上叠加具有动态效果的与目标AR拍照模板对应的AR特效。这样,得到的AR特效图像除可以提现目标AR拍照模板外,还能展现出动态的动作效果。
其中,在确定目标AR拍照模板的情况下,还可以基于第三历史导览打卡记录,从 多个备选AR拍照模板中,确定所述目标AR拍照模板。其中,第三历史导览打卡记录例如可以包括用户和/或AR设备对同一导览区域的历史打开记录。
在一种可以实现的方式中,根据第三历史导览打卡记录,一方面可以确定用户在该导览区域的历史拍照情况,例如历史拍摄的次数;另一方面,也可以确定用户对于多个备选AR拍照模板的喜好程度。
因此,在确定目标AR拍照模板的情况下,例如可以将用户惯常使用的AR拍照模板确定为目标AR拍照模板;或者,也可以将用户未使用过的AR拍照模板确定为目标AR拍照模板;又或者,也可以随机的向用户推送备选AR拍照模板中的任一AR拍照模板;又或者,根据用户的历史拍摄次数确定当前次的目标AR拍照模板。
其中,在根据用户的历史拍摄次数确定当前次的目标AR拍照模板的情况下,例如可以为每次拍摄确定对应的AR拍照模板。例如在第一次拍摄的情况下,在AR拍照模板中显示灰色的静态边框;在第二次拍摄的情况下,在AR拍照模板中显示白色的静态边框;在第三次拍摄的情况下,在AR拍照模板中显示银色的动态边框;以及在第四次拍摄的情况下,在AR拍照模板中显示金色的动态边框。这样,还可以提升用户的使用兴趣,并引导用户继续使用AR拍照功能导览更多的导览区域。
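上段按历史拍摄次数确定拍照模板边框的规则,可以用如下示意性的 Python 代码草图表示。其中的函数名与"第四次及以后沿用金色动态边框"的处理为说明用途的假设:

```python
def choose_photo_border(history_count):
    """根据用户在该导览区域的历史拍摄次数,确定当前次AR拍照模板的边框样式。
    history_count 为此前已拍摄的次数,当前为第 history_count + 1 次拍摄。"""
    borders = [
        ("灰色", "静态"),   # 第一次拍摄
        ("白色", "静态"),   # 第二次拍摄
        ("银色", "动态"),   # 第三次拍摄
        ("金色", "动态"),   # 第四次拍摄(此后沿用,属示意性假设)
    ]
    index = min(history_count, len(borders) - 1)
    return borders[index]
```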
(C2):目标AR特效包括AR互动任务对应的任务特效。
在该种情况下,对应的操作结果信息包括互动操作。
在获取对所述目标AR特效的操作结果信息的情况下,例如可以采用下述方式:响应于基于所述任务特效触发的至少一项互动操作,生成与至少部分互动操作对应的互动图像。
在该种情况下,例如可以为AR特效设置对应的互动操作。示例性的,针对AR特效包括在火上熬煮茶的茶壶的情况下,对应的互动操作例如可以包括将茶壶取下,并向茶桌上的茶杯中倒茶。针对AR特效包括文件柜的情况下,对应的互动操作例如可以包括从文件柜中取出文件、以及打开文件的操作,在打开文件后,还可以展示文件中的文字信息,以供用户阅读。
另外,在确定互动操作的情况下,还可以相应的生成与至少部分互动操作对应的互动图像,例如上述说明的倒茶的互动图像,或者读取文件的互动图像。参见图4所示,为本公开实施例提供的一种互动图像的示意图。其中,示出了茶壶33动态的向茶杯34倒茶的过程中的一帧互动图像。
针对上述S104,在获取操作结果信息后,还可以在AR环境中展示与所述目标导览任务节点对应的导览打卡记录。
在一种可以实现的方式中,例如可以基于所述操作结果信息、以及文本输入信息,生成与所述目标导览任务节点对应的导览打卡记录。
其中,文本输入信息例如可以包括用户在任务节点中输入的文本信息,或者,也可以包括用户输入的导览心情、以及导览感想等文字信息。
在上述实施例中,基于所述操作结果信息、以及文本输入信息,生成与所述目标导览任务节点对应的导览打卡记录。这样,用户还可以通过获取导览打卡记录的方式保留导览过程中的心情记录以及心得体会,提升用户体验度。
在一种可以实现的方式中,在生成与目标导览任务节点对应的导览打卡记录的情况下,例如可以直接根据操作结果信息生成可以还原用户在该目标导览任务节点的导览过程的导览打卡记录。这样,用户还可以回看该任务节点对应的导览过程。或者,在导览打卡记录中也可以相应的添加用户输入的文本输入信息,以丰富导览打卡记录。
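基于操作结果信息与文本输入信息生成导览打卡记录的过程,可以用如下示意性的 Python 代码草图表示。其中的字段名与时间戳字段均为说明用途的假设:

```python
import time

def build_checkin_record(node_id, operation_result, text_input=None):
    """基于操作结果信息、以及可选的文本输入信息,
    生成与目标导览任务节点对应的导览打卡记录。"""
    record = {
        "node": node_id,
        "result": operation_result,   # 例如AR特效图像或互动图像,可用于回看导览过程
        "timestamp": time.time(),     # 记录打卡时间(示意字段)
    }
    if text_input:                    # 可选:用户输入的导览心情、导览感想等文字信息
        record["note"] = text_input
    return record
```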
在本公开另一实施例中,还可以响应于导览结束事件的触发,基于所述至少一个导览任务节点分别对应的导览打卡记录,生成本次导览的导览总记录。
其中,导览结束事件包括但不限于下述(D1)~(D3)中至少一种:
(D1):所述AR设备到达预设的结束导览区域。
在该种情况下,例如可以预先为目标场景设置对应的结束导览区域。
示例性的,在目标场景包括至少一个出口的情况下,可以将每个出口所在的区域作为结束导览区域。在根据AR设备的位置,确定AR设备到达结束导览区域后,可以认为用户完成了该次对目标场景的导览,并可以结束导览。这样,可以以较为简单的判断逻辑确定是否结束导览。
(D2):所述AR设备中结束导览控件被触发。
在该种情况下,例如可以为用户提供相应的结束导览控件。在一种可以实现的方式中,用户可以选择结束导览的时间和地点,并相应的触发结束导览控件。此时,可以直接响应于用户对结束导览控件的触发操作,对应的结束导览。
(D3):包含所述至少一个导览任务节点的导览任务的状态由未完成变更为已完成。
在该种情况下,例如可以在用户执行不同的导览任务节点对应的导览任务的情况下,记录用户对该导览任务节点的完成状态。若对于当前用户正在进行的导览任务而言,其对应的各个导览任务节点对应的导览任务的状态均变更为已完成,可以确定用户完成了该导览任务。此时,即可以结束该次导览。
这样,通过不同的导览结束事件的选取,可以使用户可以主动停止导览,或者根据用户的状态确定是否停止导览,更具有灵活性。
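上述(D1)~(D3)三种导览结束事件的判断逻辑,可以用如下示意性的 Python 代码草图表示。其中的区域表示与状态取值均为说明用途的假设:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """以矩形示意预设的结束导览区域。"""
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, p):
        return self.x0 <= p[0] <= self.x1 and self.y0 <= p[1] <= self.y1

def tour_should_end(device_pos, end_areas, end_button_pressed, node_states):
    """判断导览结束事件是否被触发:
    (D1) AR设备到达预设的结束导览区域;
    (D2) 结束导览控件被触发;
    (D3) 全部导览任务节点的状态均由未完成变更为已完成。"""
    in_end_area = any(area.contains(device_pos) for area in end_areas)
    all_done = bool(node_states) and all(s == "已完成" for s in node_states.values())
    return in_end_area or end_button_pressed or all_done
```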
在确定结束导览后,还可以基于至少一个导览任务节点分别对应的导览打卡记录,生成本次导览的导览总记录。
示例性的,导览总记录例如可以包括一个影集,其中,该影集由每个导览任务节点对应的导览打卡记录构成。这样,还可以为用户提供对于该次导览的整个过程的记录信息。
在一种可以实现的方式中,在展示导览打卡记录的情况下,例如可以将AR特效结合AR设备拍摄的图像进行展示;例如,AR设备拍摄的图像包括导览区域内的导览对象;可以根据AR设备拍摄的图像,确定导览对象在图像中的位置,并基于该位置,确定AR特效的展示位置。
又例如,在扫描导览门票的情况下,会获取到导览门票的图像;例如可以将导览门票在图像中的位置作为参考,确定AR特效的展示位置,此处,例如可以根据导览门票在图像中的位置,确定展示平面或者展示空间,该展示平面或者展示空间的位置,即为AR特效的展示位置。
另外,还可以根据AR设备拍摄的图像、以及预先生成的目标场景的高精三维地图,确定AR设备在目标场景中的位置。然后根据该位置,确定AR特效的展示位置。
在基于上述任一种方法确定了AR特效的展示位置后,根据该展示位置,在AR环境中展示导览打卡记录。
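以导览门票在图像中的位置为参考确定AR特效展示位置的思路,可以用如下示意性的 Python 代码草图表示。其中取门票包围框的中心并施加一个偏移作为展示锚点,坐标与偏移量均为说明用途的假设:

```python
def effect_anchor_from_ticket(ticket_bbox, offset=(0.0, -0.1)):
    """以导览门票在图像中的位置为参考,确定AR特效的展示位置:
    ticket_bbox 为门票在图像中的包围框 (x0, y0, x1, y1),
    取其中心点并加上示意性的偏移量,得到展示平面的锚点。"""
    x0, y0, x1, y1 = ticket_bbox
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # 门票包围框中心
    return (cx + offset[0], cy + offset[1])     # 即AR特效的展示位置(示意)
```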
在将AR特效结合AR设备拍摄的图像进行展示的情况下,例如可以将AR特效渲染在AR设备拍摄的图像的前端进行展示。
在本公开另一实施例中,还可以响应于触发信息分享事件,生成包括所述导览打卡记录的分享信息,并将所述分享信息发送至与所述信息分享事件对应的信息发布平台。这样,通过信息分享的方式,还可以将对应用户的分享信息供其他用户浏览,提升了用户之间的交互性。
其中,分享信息包括所述导览打卡记录的访问链接。
在一种可以实现的方式中,在生成包括所述导览打卡记录的分享信息的情况下,例如可以采用下述方式:向所述AR环境对应的服务器发送所述导览打卡记录;接收所述服务器基于所述导览打卡记录生成的所述访问链接。
在一种可以实现的方式中,在确定导览打卡记录后,可以将确定的导览打卡记录发送至与AR环境对应的服务器。服务器在接收到导览打卡记录后,例如还可以添加用户的相关信息,例如用户的用户名称、用户身份标识、用户头像等信息,并确定对应的访问链接。
在服务器确定对应的访问链接后,还可以向AR设备发送访问链接。利用该访问链接,AR设备以及其他用户的设备,均可以查看到该用户相关的导览打卡记录、以及该用户的相关信息。这样,在多个用户之间也可以分享相关的导览信息,互动性更强。并且,其他用户也可以针对访问链接中的分享信息进行例如评论、点赞、转发等操作,也可以提升不同用户之间的互动性。
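服务器接收导览打卡记录、附加用户相关信息并生成访问链接的过程,可以用如下示意性的 Python 代码草图表示。其中的链接格式、域名与摘要算法均为说明用途的假设:

```python
import hashlib
import json

def make_share_link(record, user_info, base_url="https://example.com/tour"):
    """服务器侧的示意逻辑:接收导览打卡记录后,
    附加用户相关信息(用户名称、身份标识、头像等),并生成对应的访问链接。
    base_url 与链接格式均为假设,并非本公开限定的实现。"""
    payload = {**record, "user": user_info}   # 在打卡记录上附加用户相关信息
    # 以内容摘要作为链接标识,保证同一记录生成的链接稳定(示意做法)
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True, ensure_ascii=False).encode("utf-8")
    ).hexdigest()[:16]
    return f"{base_url}/share/{digest}", payload
```

AR设备及其他用户的设备通过该访问链接,即可查看打卡记录及相应的用户信息。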
本领域技术人员可以理解,在上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了与交互方法对应的交互装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述交互方法相似,因此装置的实施可以参见方法的实施。
参照图5所示,为本公开实施例所提供的一种交互装置的组成结构示意图,所述装置包括:启动模块51、展示模块52、获取模块53、以及生成模块54;其中,
启动模块51,配置为扫描到导览门票,启动增强现实AR环境;展示模块52,配置为检测到AR设备到达与至少一个导览任务节点中的目标导览任务节点对应的目标导览区域,在所述AR环境中展示与所述目标导览区域对应的目标AR特效;获取模块53,配置为获取对所述目标AR特效的操作结果信息;生成模块54,配置为基于所述操作结果信息,在所述AR环境中展示与所述目标导览任务节点对应的导览打卡记录。
一种可选的实施方式中,所述交互装置还包括第一处理模块55,配置为:响应于导览事件被触发,生成导览任务;其中,所述导览任务包括至少一个所述导览任务节点;每个所述导览任务节点对应于目标场景中的一个导览区域。
一种可选的实施方式中,所述导览事件被触发,包括下述至少一种:检测到所述AR设备所处的位置、与最近一次生成的导览路径之间的位置偏移大于预设的位置偏移阈值;基于所述AR设备所处的位置,确定所述AR设备位于所述目标场景内。
一种可选的实施方式中,所述第一处理模块55在生成导览任务的情况下,配置为:基于第一历史导览打卡记录、用户属性信息、多个备选虚拟身份的选择信息中至少一种,从多个所述备选虚拟身份中,确定所述AR设备对应的目标虚拟身份;基于所述目标虚拟身份,生成所述导览任务。
一种可选的实施方式中,所述第一处理模块55在基于所述目标虚拟身份,生成所述导览任务的情况下,配置为:基于第二历史导览打卡记录,从所述目标场景的多个导览区域中,选择备选导览区域;基于所述目标虚拟身份、所述备选导览区域分别在多种备选虚拟身份下的AR特效素材、以及各备选导览区域分别在所述目标场景中的位置,生成所述导览任务。
一种可选的实施方式中,所述目标AR特效包括:目标AR拍照模板;所述操作结果信息包括:AR特效图像;所述获取模块53在获取对所述目标AR特效的操作结果信息的情况下,配置为:响应于对所述目标AR拍照模板的触发拍照操作,生成包括所述目标AR拍照模板的所述AR特效图像。
一种可选的实施方式中,所述交互装置还包括第二处理模块56,配置为:基于第三历史导览打卡记录,从多个备选AR拍照模板中,确定所述目标AR拍照模板。
一种可选的实施方式中,所述目标AR特效包括:AR互动任务对应的任务特效;所述操作结果信息包括:互动操作;所述获取模块53在获取对所述目标AR特效的操作结果信息的情况下,配置为:响应于基于所述任务特效触发的至少一项互动操作,生成与至少部分互动操作对应的互动图像。
一种可选的实施方式中,所述生成模块54在基于所述操作结果信息,在所述AR环境中展示与所述目标导览任务节点对应的导览打卡记录的情况下,配置为:基于所述操作结果信息、以及文本输入信息,生成与所述目标导览任务节点对应的导览打卡记录。
一种可选的实施方式中,所述交互装置还包括第三处理模块57,配置为:响应于导览结束事件的触发,基于所述至少一个导览任务节点分别对应的导览打卡记录,生成本次导览的导览总记录。
一种可选的实施方式中,所述导览结束事件包括下述至少一种:所述AR设备到达预设的结束导览区域;所述AR设备中结束导览控件被触发;包含所述至少一个导览任务节点的导览任务的状态由未完成变更为已完成。
一种可选的实施方式中,所述交互装置还包括第四处理模块58,配置为:响应于触发信息分享事件,生成包括所述导览打卡记录的分享信息,并将所述分享信息发送至与所述信息分享事件对应的信息发布平台。
一种可选的实施方式中,所述分享信息包括:所述导览打卡记录的访问链接;所述第四处理模块58在生成包括所述导览打卡记录的分享信息的情况下,配置为:向所述AR环境对应的服务器发送所述导览打卡记录;接收所述服务器基于所述导览打卡记录生成的所述访问链接。
一种可选的实施方式中,所述AR环境通过部署在AR设备中的web端或者小程序实现。
关于装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明。
本公开实施例还提供了一种计算机设备,如图6所示,为本公开实施例所提供的计算机设备的组成结构示意图,包括:
处理器10和存储器20;所述存储器20存储有处理器10可执行的机器可读指令,处理器10配置为执行存储器20中存储的机器可读指令,所述机器可读指令被处理器10执行时,处理器10执行下述步骤:
扫描到导览门票,启动增强现实AR环境;检测到AR设备到达与至少一个导览任务节点中的目标导览任务节点对应的目标导览区域,在所述AR环境中展示与所述目标导览区域对应的目标AR特效;获取对所述目标AR特效的操作结果信息;基于所述操作结果信息,在所述AR环境中展示与所述目标导览任务节点对应的导览打卡记录。
上述存储器20包括内存210和外部存储器220;这里的内存210也称内存储器,配置为暂时存放处理器10中的运算数据,以及与硬盘等外部存储器220交换的数据,处理器10通过内存210与外部存储器220进行数据交换。
上述指令的执行过程可以参考本公开实施例中所述的交互方法的步骤。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的交互方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例还提供一种计算机程序产品,该计算机程序产品承载有程序代码,所述程序代码包括的指令可用于执行上述方法实施例中所述的交互方法的步骤,可参见上述方法实施例。
其中,上述计算机程序产品可以通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品体现为计算机存储介质,在另一个可选实施例中,计算机程序产品体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的工作过程,可以参考前述方法实施例中的对应过程。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本公开的实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。
工业实用性
本公开实施例中,通过启动增强现实(Augmented Reality,AR)环境,能够使用户在到达目标导览区域后,向其展示与该目标导览区域对应的目标AR特效;用户可以对目标AR特效进行相关的操作,AR设备可以根据对目标AR特效的操作结果信息,在AR环境中展示与目标导览任务节点对应的导览打卡记录,从而提升互动性;同时,通过导览打卡记录的生成过程,可以产生内容丰富、且更具有多样性、和个性化的打卡记录,满足用户的使用需求。

Claims (18)

  1. 一种交互方法,包括:
    扫描到导览门票,启动增强现实AR环境;
    检测到AR设备到达与至少一个导览任务节点中的目标导览任务节点对应的目标导览区域,在所述AR环境中展示与所述目标导览区域对应的目标AR特效;
    获取对所述目标AR特效的操作结果信息;
    基于所述操作结果信息,在所述AR环境中展示与所述目标导览任务节点对应的导览打卡记录。
  2. 根据权利要求1所述的交互方法,还包括:响应于导览事件被触发,生成导览任务;其中,所述导览任务包括至少一个所述导览任务节点;每个所述导览任务节点对应于目标场景中的一个导览区域。
  3. 根据权利要求2所述的交互方法,其中,所述响应于导览事件被触发,包括下述至少一种:
    检测到所述AR设备所处的位置、与最近一次生成的导览路径之间的位置偏移大于预设的位置偏移阈值;
    基于所述AR设备所处的位置,确定所述AR设备位于所述目标场景内。
  4. 根据权利要求2或3所述的交互方法,其中,所述生成导览任务,包括:
    基于第一历史导览打卡记录、用户属性信息、多个备选虚拟身份的选择信息中至少一种,从多个所述备选虚拟身份中,确定所述AR设备对应的目标虚拟身份;
    基于所述目标虚拟身份,生成所述导览任务。
  5. 根据权利要求4所述的交互方法,其中,所述基于所述目标虚拟身份,生成所述导览任务,包括:
    基于第二历史导览打卡记录,从所述目标场景的多个导览区域中,选择备选导览区域;
    基于所述目标虚拟身份、所述备选导览区域分别在多种备选虚拟身份下的AR特效素材、以及各备选导览区域分别在所述目标场景中的位置,生成所述导览任务。
  6. 根据权利要求1-5任一项所述的交互方法,其中,所述目标AR特效包括:目标AR拍照模板;所述操作结果信息包括:AR特效图像;
    所述获取对所述目标AR特效的操作结果信息,包括:
    响应于对所述目标AR拍照模板的触发拍照操作,生成包括所述目标AR拍照模板的所述AR特效图像。
  7. 根据权利要求6所述的交互方法,还包括:基于第三历史导览打卡记录,从多个备选AR拍照模板中,确定所述目标AR拍照模板。
  8. 根据权利要求1-7任一项所述的交互方法,其中,所述目标AR特效包括:AR互动任务对应的任务特效;所述操作结果信息包括:互动操作;
    所述获取对所述目标AR特效的操作结果信息,包括:响应于基于所述任务特效触发的至少一项互动操作,生成与至少部分互动操作对应的互动图像。
  9. 根据权利要求1-8任一项所述的交互方法,其中,所述基于所述操作结果信息,在所述AR环境中展示与所述目标导览任务节点对应的导览打卡记录,包括:
    基于所述操作结果信息、以及文本输入信息,生成与所述目标导览任务节点对应的导览打卡记录。
  10. 根据权利要求1-9任一项所述的交互方法,还包括:
    响应于导览结束事件的触发,基于所述至少一个导览任务节点分别对应的导览打卡记录,生成本次导览的导览总记录。
  11. 根据权利要求10所述的交互方法,其中,所述导览结束事件包括下述至少一种:
    所述AR设备到达预设的结束导览区域;
    所述AR设备中结束导览控件被触发;
    包含所述至少一个导览任务节点的导览任务的状态由未完成变更为已完成。
  12. 根据权利要求1-11任一项所述的交互方法,还包括:
    响应于触发信息分享事件,生成包括所述导览打卡记录的分享信息,并将所述分享信息发送至与所述信息分享事件对应的信息发布平台。
  13. 根据权利要求12所述的交互方法,其中,所述分享信息包括:所述导览打卡记录的访问链接;
    所述生成包括所述导览打卡记录的分享信息,包括:
    向所述AR环境对应的服务器发送所述导览打卡记录;
    接收所述服务器基于所述导览打卡记录生成的所述访问链接。
  14. 根据权利要求1-13任一项所述的交互方法,其中,所述AR环境通过部署在AR设备中的网页端或者小程序实现。
  15. 一种交互装置,包括:
    启动模块,配置为扫描到导览门票,启动增强现实AR环境;
    展示模块,配置为检测到AR设备到达与至少一个导览任务节点中的目标导览任务节点对应的目标导览区域,在所述AR环境中展示与所述目标导览区域对应的目标AR特效;
    获取模块,配置为获取对所述目标AR特效的操作结果信息;
    生成模块,配置为基于所述操作结果信息,在所述AR环境中展示与所述目标导览任务节点对应的导览打卡记录。
  16. 一种计算机设备,包括:处理器、存储器,所述存储器存储有所述处理器可执行的机器可读指令,所述处理器配置为执行所述存储器中存储的机器可读指令,所述机器可读指令被所述处理器执行时,所述处理器执行如权利要求1至14任一项所述的交互方法的步骤。
  17. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被计算机设备运行时,所述计算机设备执行如权利要求1至14任一项所述的交互方法的步骤。
  18. 一种计算机程序产品,包括计算机可读代码,在所述计算机可读代码在电子设备中运行的情况下,所述电子设备中的处理器执行如权利要求1至14任意一项所述的交互方法。
PCT/CN2022/085944 2021-06-18 2022-04-08 交互方法、装置、计算机设备及程序产品、存储介质 WO2022262389A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110681015.9A CN113282179A (zh) 2021-06-18 2021-06-18 一种交互方法、装置、计算机设备及存储介质
CN202110681015.9 2021-06-18

Publications (1)

Publication Number Publication Date
WO2022262389A1 true WO2022262389A1 (zh) 2022-12-22

Family

ID=77285061

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085944 WO2022262389A1 (zh) 2021-06-18 2022-04-08 交互方法、装置、计算机设备及程序产品、存储介质

Country Status (3)

Country Link
CN (1) CN113282179A (zh)
TW (1) TW202301082A (zh)
WO (1) WO2022262389A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113282179A (zh) * 2021-06-18 2021-08-20 北京市商汤科技开发有限公司 一种交互方法、装置、计算机设备及存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
US20170329394A1 (en) * 2016-05-13 2017-11-16 Benjamin Lloyd Goldstein Virtual and augmented reality systems
CN112927293A (zh) * 2021-03-26 2021-06-08 深圳市慧鲤科技有限公司 Ar场景展示方法及装置、电子设备和存储介质
CN112947756A (zh) * 2021-03-03 2021-06-11 上海商汤智能科技有限公司 内容导览方法、装置、系统、计算机设备及存储介质
CN113282179A (zh) * 2021-06-18 2021-08-20 北京市商汤科技开发有限公司 一种交互方法、装置、计算机设备及存储介质

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9754416B2 (en) * 2014-12-23 2017-09-05 Intel Corporation Systems and methods for contextually augmented video creation and sharing
CN109067839B (zh) * 2018-06-29 2021-10-26 北京小米移动软件有限公司 推送游览指导信息、创建景点信息数据库的方法及装置
CN111640202B (zh) * 2020-06-11 2024-01-09 浙江商汤科技开发有限公司 一种ar场景特效生成的方法及装置
CN116595259A (zh) * 2021-03-25 2023-08-15 支付宝(杭州)信息技术有限公司 位置推荐处理方法及装置


Also Published As

Publication number Publication date
TW202301082A (zh) 2023-01-01
CN113282179A (zh) 2021-08-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22823878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE