WO2022062442A1 - Guidance method and apparatus in AR scene, computer device, and storage medium - Google Patents

Guidance method and apparatus in AR scene, computer device, and storage medium

Info

Publication number
WO2022062442A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
virtual guide
state information
current state
information
Prior art date
Application number
PCT/CN2021/095853
Other languages
English (en)
French (fr)
Inventor
侯欣如
栾青
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 filed Critical 北京市商汤科技开发有限公司
Publication of WO2022062442A1 publication Critical patent/WO2022062442A1/zh

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Definitions

  • the present disclosure relates to the technical field of augmented reality, and in particular, to a guidance method, apparatus, computer device, and storage medium in an AR scene.
  • the embodiments of the present disclosure provide at least one guidance method, apparatus, computer device, and storage medium in an AR scenario.
  • In a first aspect, an embodiment of the present disclosure provides a guidance method in an augmented reality (AR) scene, including: acquiring current state information of an AR device; determining target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device; and displaying an AR special effect of the virtual guide in the AR device based on the target state information.
  • In this way, the target state information of the virtual guide is determined, and according to the target state information of the virtual guide, the AR special effect of the virtual guide is displayed in the AR device, so that users can be provided with richer and more intuitive guidance information through the AR special effect of the virtual guide, and the guidance efficiency can be improved.
  • the current state information of the AR device includes current pose information of the AR device
  • the target state information includes target pose information of the virtual guide
  • the determining the target state information of the virtual guide based on the current state information and the relative pose relationship between the virtual guide displayed in the AR device and the AR device includes:
  • determining a current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and current pose information of the virtual guide;
  • in a case where the current relative pose relationship does not conform to the target relative pose relationship, determining updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
  • In this way, by determining the current relative pose relationship between the virtual guide and the AR device as well as the target relative pose relationship, the updated target pose information of the virtual guide is determined, so that during guidance the virtual guide can continuously move from the current position to a new position that satisfies the target relative pose relationship, which improves the AR rendering effect during the guidance process.
  • the current relative pose relationship does not conform to the target relative pose relationship includes at least one of the following:
  • the relative distance between the AR device and the virtual guide is less than a first distance threshold
  • the relative distance between the AR device and the virtual guide is greater than a second distance threshold
  • the included angle between the current orientation of the AR device and the connection direction of the AR device to the virtual guide is greater than the set angle.
  • the relative pose between the virtual guide and the AR device is constrained by distance and/or angle, so that the virtual guide can present a better guiding effect in the AR screen.
  • In a possible implementation, the determining the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device includes:
  • determining the target state information of the virtual guide based on a predetermined guidance route, the current state information of the AR device, and the target relative pose relationship.
  • the target state information of the virtual guide is determined through the guide route, so that the virtual guide can guide the user according to the guide route, so that the user can reach the destination more quickly.
  • the target state information of the virtual guide is determined based on a predetermined guidance route, the current state information of the AR device, and the relative pose relationship of the target, including:
  • in a case where the current state information of the AR device indicates that the AR device is in an AR guidance start state, determining that the target state information of the virtual guide includes: the virtual guide is located at a position at a target relative distance from the AR device, and the orientation of the virtual guide is opposite to the direction in which the AR device is located;
  • in a case where the current state information of the AR device indicates that the AR device is in the AR guidance process, determining that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the orientation of the virtual guide is the direction of the predetermined guidance route;
  • in a case where the current state information of the AR device indicates that the AR device has reached the end point of the guidance route, determining that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the virtual guide faces the direction in which the AR device is located.
  • In a possible implementation, displaying the AR special effect of the virtual guide in the AR device based on the target state information includes:
  • displaying, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
  • In a possible implementation, displaying, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state includes:
  • after a preset time period after the AR device changes from a historical state to the current state, displaying, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
  • In this way, the movement of the virtual guide lags behind the movement of the user, creating the effect that the virtual guide moves in response to the user's movement, which achieves a better guiding effect.
  • In a possible implementation, the guidance method further includes: acquiring obstacle position information of an obstacle relative to the AR device;
  • the determining the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device includes: in a case where a target obstacle exists in a target image captured by the AR device, determining the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
  • the displaying the AR special effect of the virtual guide in the AR device based on the target state information includes: displaying the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
  • In this way, the AR effect of the virtual guide avoiding obstacles can be presented in the AR picture to achieve a more realistic display effect.
  • In a possible implementation, determining the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship includes:
  • determining, based on the current state information and the target relative pose relationship, that an initial target position of the virtual guide is within a position range corresponding to the target obstacle;
  • determining, according to the determined initial target position and the obstacle position information, that the target position in the target state information of the virtual guide is a position outside the position range corresponding to the target obstacle and closest to the initial target position.
  • an embodiment of the present disclosure further provides a guidance device in an AR scenario, including:
  • the acquisition part is configured to acquire the current state information of the AR device
  • a determining part configured to determine the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
  • the display part is configured to display AR special effects of the virtual guide in the AR device based on the target state information.
  • the current state information of the AR device includes the current pose information of the AR device, and the target state information includes the target pose information of the virtual guide;
  • the determining part is further configured to determine the current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and the current pose information of the virtual guide; and, in a case where the current relative pose relationship does not conform to the target relative pose relationship, determine updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
  • the current relative pose relationship does not conform to the target relative pose relationship includes at least one of the following:
  • the relative distance between the AR device and the virtual guide is less than a first distance threshold
  • the relative distance between the AR device and the virtual guide is greater than a second distance threshold
  • the included angle between the current orientation of the AR device and the connection direction of the AR device to the virtual guide is greater than the set angle.
  • the determining part is further configured to determine the target state information of the virtual guide based on the predetermined guidance route, the current state information of the AR device, and the target relative pose relationship.
  • the determining part is further configured to, in a case where the current state information of the AR device indicates that the AR device is in the AR guidance start state, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at a target relative distance from the AR device, and the orientation of the virtual guide is opposite to the direction in which the AR device is located;
  • in a case where the current state information of the AR device indicates that the AR device is in the AR guidance process, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the orientation of the virtual guide is the direction of the predetermined guidance route;
  • in a case where the current state information of the AR device indicates that the AR device has reached the end point of the guidance route, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the virtual guide faces the direction in which the AR device is located.
  • the display part is further configured to display, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
  • the display part is further configured to, after a preset time period after the AR device changes from the historical state to the current state, display, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
  • the obtaining part is further configured to obtain the obstacle position information of the obstacle relative to the AR device;
  • the determining part is further configured to, in a case where a target obstacle exists in the target image captured by the AR device, determine the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
  • the display part is further configured to display the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
  • the determining part is further configured to determine, based on the current state information and the target relative pose relationship, that the initial target position of the virtual guide is within the position range corresponding to the target obstacle;
  • and to determine, according to the determined initial target position and the obstacle position information, that the target position in the target state information of the virtual guide is a position outside the position range corresponding to the target obstacle and closest to the initial target position.
  • In a third aspect, an optional implementation of the embodiments of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps in the above first aspect or any possible implementation of the first aspect.
  • In a fourth aspect, an optional implementation of the embodiments of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when run, performs the steps in the first aspect or any possible implementation of the first aspect.
  • In a fifth aspect, an optional implementation of the embodiments of the present disclosure further provides a computer program, including computer-readable code, where when the computer-readable code runs in a computer device, a processor in the computer device executes the code to implement the guidance method in the first aspect.
  • FIG. 1 shows a schematic diagram of an application scenario of an AR device provided by an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a guidance method in an AR scene provided by an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a specific method for determining target information in the guidance method provided by an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of a guidance method provided by an embodiment of the present disclosure;
  • FIG. 5a shows a schematic diagram of a graphical display interface scene provided by an embodiment of the present disclosure;
  • FIG. 5b shows a schematic diagram of another graphical display interface scenario provided by an embodiment of the present disclosure
  • FIG. 6 shows a flowchart of another guidance method provided by an embodiment of the present disclosure
  • FIG. 7a shows an example of the mutual positional relationship among obstacles, AR devices, and virtual AR special effects in another guidance method provided by an embodiment of the present disclosure
  • FIG. 7b shows an example of the mutual positional relationship among obstacles, AR devices, and virtual AR special effects in yet another guidance method provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of a guiding device provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
  • Such sign-based guidance also has the problem of poor intuitiveness: the user is likely to miss some guidance information while finding the way and thus be unable to locate the corresponding destination for a long time, which likewise causes low guidance efficiency.
  • the present disclosure provides a guidance method in an Augmented Reality (AR) scene.
  • the AR special effect of a virtual guide is determined through the current state information of the AR device, and the AR special effect is used to guide the user.
  • The execution subject of the guidance method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example, an AR device, a server, or another processing device.
  • The AR device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like.
  • In some possible implementations, the guidance method may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • FIG. 1 is a schematic diagram of an application scenario of an AR device according to an embodiment of the present disclosure.
  • the AR device 10 may be set in a target scene, and provide guidance information for a user in the target scene.
  • the AR device is further configured with a camera 11, which can be used to collect images in the target scene, so that the AR device can construct a virtual scene corresponding to the target scene.
  • the AR device may be configured with a graphical interactive interface 12 that can display a virtual guide and AR special effects of the virtual guide.
  • the AR special effect of the virtual guide may be the effect of the virtual guide moving in the virtual scene, so as to guide the user in the target scene.
  • the method includes steps S101 to S103, wherein:
  • S101: Acquire current state information of an AR device;
  • S102: Determine target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
  • S103: Display the AR special effect of the virtual guide in the AR device based on the target state information.
  • In the embodiments of the present disclosure, the target state information of the virtual guide is determined according to the current state information of the AR device and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device, and according to the target state information of the virtual guide, the AR special effect of the virtual guide is displayed in the AR device, so that the AR special effect of the virtual guide can provide users with richer and more intuitive guidance information and improve the guidance efficiency.
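  • As an illustration only (not part of the disclosure), the following minimal sketch shows the shape of this S101-S103 loop; the Pose type and the choice of placing the guide straight ahead of the device at the target relative distance are simplifying assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray  # (3,) position in the scene coordinate system
    forward: np.ndarray   # (3,) unit vector giving the facing direction

def guidance_step(device_pose: Pose, target_distance: float) -> Pose:
    """S101-S103 in one pass: given the device's current pose (S101),
    place the guide at the target relative distance in front of the
    device and face it back toward the device (S102); the caller then
    renders the guide's transition to the returned pose (S103)."""
    target_position = device_pose.position + device_pose.forward * target_distance
    face_device = device_pose.position - target_position
    face_device = face_device / np.linalg.norm(face_device)
    return Pose(position=target_position, forward=face_device)
```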
  • the current state information of the AR device includes: current pose information of the AR device.
  • For example, the current pose information of the AR device in the target scene may be determined in the following manner: acquiring a target image obtained by the AR device capturing the target scene; and determining the current pose information of the AR device based on the target image and a pre-constructed three-dimensional scene map of the target scene.
  • The current pose information of the AR device includes the three-dimensional coordinate value, in the scene coordinate system corresponding to the three-dimensional scene map established for the target scene, of the optical center of the camera deployed on the AR device, and optical axis orientation information of the camera; the optical axis orientation information may include, for example, the yaw angle and pitch angle of the optical axis of the camera on the AR device in the scene coordinate system, or may be, for example, a vector in the scene coordinate system.
  • When determining the current pose information of the AR device based on the target image and the pre-constructed three-dimensional scene map of the target scene, feature point recognition may be performed on the target image; after first feature points in the target image are obtained, they are matched against second feature points in the pre-constructed three-dimensional scene map of the target scene, and target second feature points that can match the first feature points are determined from the second feature points.
  • At this point, the object represented by a target second feature point is the same object as the object represented by the corresponding first feature point.
  • The three-dimensional scene map of the target scene may be obtained, for example, by either of the following methods: simultaneous localization and mapping (SLAM) modeling, or structure-from-motion (SFM) modeling.
  • SLAM modeling refers to the AR device moving from an unknown position in an unknown environment, positioning itself according to the position and the scene images collected in real time during the movement process, and constructing an incremental map based on its own positioning.
  • SFM modeling determines the spatial and geometric relationships of the target scene from the scene images collected by the camera during movement.
  • For example, when constructing the three-dimensional scene map of the target scene, a three-dimensional coordinate system is established with a preset coordinate point as the origin, where the preset coordinate point may be a coordinate point of a building in the target scene or the coordinate point where the camera is located when capturing the target scene;
  • the camera collects video images, and the three-dimensional scene map of the target scene is constructed by tracking a sufficient number of feature points in the camera's video frames, where the feature points in the constructed three-dimensional scene map of the target scene likewise include the feature point information of the above objects.
  • The first feature points are matched against a sufficient number of second feature points in the three-dimensional scene map of the target scene to determine the target second feature points, and the three-dimensional coordinate values of the target second feature points in the three-dimensional scene map of the target scene are read; then, based on the three-dimensional coordinate values of the target second feature points, the current pose information of the AR device in the scene coordinate system is determined.
  • Here, when recovering the current pose information of the AR device in the three-dimensional scene map using the camera imaging principle, for example, the target pixel points corresponding to the first feature points in the target image may be determined; the current pose information of the AR device in the scene coordinate system is then determined based on the two-dimensional coordinate values of the target pixel points in the image coordinate system and the three-dimensional coordinate values of the target second feature points in the scene coordinate system.
  • Specifically, a camera coordinate system may be constructed with the AR device, where the origin of the camera coordinate system is the point at which the optical center of the camera in the AR device is located, the z-axis is the line on which the optical axis of the camera lies, and the plane perpendicular to the optical axis that contains the optical center is the plane of the x-axis and the y-axis. A depth detection algorithm can be used to determine the depth value corresponding to each pixel in the target image; after the target pixel points are determined in the target image, the depth values of the target pixel points in the camera coordinate system can be obtained, that is, the three-dimensional coordinate values, in the camera coordinate system, of the first feature points corresponding to the target pixel points can be obtained. Then, using the three-dimensional coordinate values of the first feature points in the camera coordinate system and the three-dimensional coordinate values of the first feature points in the scene coordinate system, the coordinate value of the origin of the camera coordinate system in the scene coordinate system is recovered, which is the position information in the current pose information of the AR device in the scene coordinate system; and using the z-axis of the camera coordinate system, the angles of this z-axis relative to the coordinate axes of the scene coordinate system are determined, yielding the orientation information in the current pose information of the AR device in the scene coordinate system.
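  • As a hedged illustration of this 2D-3D pose recovery (a standard PnP formulation, not code from the disclosure), the sketch below uses OpenCV's solvePnP on matched scene points and pixel points; the function and variable names are assumptions:

```python
import cv2
import numpy as np

def recover_device_pose(pts_scene: np.ndarray, pts_image: np.ndarray,
                        camera_matrix: np.ndarray):
    """Recover the camera pose in the scene coordinate system from matched
    target second feature points (pts_scene, shape (N, 3), scene
    coordinates) and their target pixel points (pts_image, shape (N, 2),
    image coordinates); N >= 6 for the default iterative solver."""
    ok, rvec, tvec = cv2.solvePnP(
        pts_scene.astype(np.float32), pts_image.astype(np.float32),
        camera_matrix, distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    rot, _ = cv2.Rodrigues(rvec)  # rotation mapping scene -> camera coords
    # Optical center of the camera in the scene coordinate system
    # (the position part of the device's current pose information).
    optical_center = (-rot.T @ tvec).ravel()
    # Optical axis (camera z-axis) expressed in the scene coordinate system
    # (the orientation part of the current pose information).
    optical_axis = rot.T @ np.array([0.0, 0.0, 1.0])
    return optical_center, optical_axis
```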
  • In addition, when locating the AR device based on SLAM, a SLAM space can be constructed based on multiple video frame images acquired by the AR device, and the coordinate system corresponding to the SLAM space is then aligned with the coordinate system corresponding to the three-dimensional scene map of the target scene; since the specific pose information of the AR device in the SLAM space can be determined based on SLAM, the actual position of the AR device in the three-dimensional scene map can be determined based on the alignment relationship between the SLAM coordinate system and the three-dimensional scene map.
  • the 3D scene map usually corresponds to the actual space of the target scene according to a certain scale.
  • the current pose information of the AR device in the target scene can be determined based on the above-mentioned correspondence.
  • the target state information of the virtual guide includes target pose information of the virtual guide, that is, the virtual guide transitions from the current state to another state represented by the target pose information.
  • the target relative pose relationship between the virtual guide and the AR device may be a preset relative pose relationship to be maintained between the virtual guide and the AR device.
  • For example, the target relative pose relationship includes at least one of the following: a distance relationship between the AR device and the virtual guide, an angular relationship between the AR device and the virtual guide, a height relationship between the AR device and the virtual guide, and the like.
  • The distance relationship between the AR device and the virtual guide includes, for example: the relative distance between the AR device and the virtual guide is greater than the first distance threshold, and/or the relative distance between the AR device and the virtual guide is less than the second distance threshold, where the first distance threshold is less than the second distance threshold.
  • The first distance threshold ensures that the distance between the AR device and the virtual guide is not too small, preventing the AR special effect of the virtual guide from occupying too large a proportion of the display area in the graphical interactive interface of the AR device; the second distance threshold ensures that the distance between the AR device and the virtual guide is not too large.
  • The angular relationship between the AR device and the virtual guide includes, for example: the included angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide is less than the set angle; and/or a certain side of the virtual guide faces the direction in which the AR device is located.
  • Here, the current orientation of the AR device is, for example, the direction of the optical axis of the camera in the AR device; the direction of the line from the AR device to the virtual guide is, for example, the direction of the line from the optical center of the camera in the AR device to the center of the virtual guide. In this way, the virtual guide is kept within the field of view of the AR device, which provides better guidance for the user.
  • The height relationship between the AR device and the virtual guide includes, for example: the distance between the virtual guide and at least one side of the space corresponding to the shooting field of view of the AR device is greater than a preset distance; and/or the height of the virtual guide above the ground is equal to a preset height.
  • The virtual guide is, for example, an animated character that can fly in the air, such as "Dumbo" or "Flower Fairy", which can change its pose in accordance with changes in the pose of the AR device.
  • The animated character is located within the shooting field of view of the AR device, with its distance from the top side of the space corresponding to the shooting field of view greater than the preset distance and its distance from the bottom side of that space greater than the preset distance.
  • The virtual guide may also be an animated character walking on the ground, such as a "robot guide"; no matter how the shooting angle of the AR device changes, its height relative to the ground always remains the same.
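  • To make the constraints concrete, here is a minimal sketch (not from the disclosure) of a check that the distance and angle parts of the target relative pose relationship hold; the threshold parameters are illustrative assumptions:

```python
import numpy as np

def conforms_to_target(device_pos, device_forward, guide_pos,
                       d_min: float, d_max: float,
                       max_angle_deg: float) -> bool:
    """Return True when the guide's distance to the device lies between
    the first (d_min) and second (d_max) distance thresholds and the angle
    between the device's current orientation (device_forward, a unit
    vector) and the device-to-guide line does not exceed the set angle."""
    offset = np.asarray(guide_pos, float) - np.asarray(device_pos, float)
    distance = np.linalg.norm(offset)
    if distance < d_min or distance > d_max:
        return False
    cos_angle = np.dot(offset / distance, np.asarray(device_forward, float))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg
```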
  • an embodiment of the present disclosure provides a specific method for determining target state information of a virtual guide, including:
  • S201 Based on the current state information of the AR device and the current pose information of the virtual guide, determine the current relative pose relationship between the virtual guide and the AR device.
  • In specific implementation, the initial pose of the virtual guide is, for example, in the Unity coordinate system. Unity is the 3D engine used to construct the three-dimensional scene map of the target scene and the virtual guide. Therefore, when determining the current relative pose relationship between the virtual guide and the AR device, the initial pose of the virtual guide in the Unity coordinate system can first be determined; then, according to the conversion relationship between the Unity coordinate system and the coordinate system corresponding to the three-dimensional scene map, the initial pose of the virtual guide in the Unity coordinate system can be converted into current pose information in the coordinate system corresponding to the three-dimensional scene map. Through S101 above, the current pose information of the AR device in the three-dimensional scene map can be determined, and the current relative pose relationship between the virtual guide and the AR device can then be determined from these two poses.
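  • Expressed with homogeneous transforms, a sketch under the assumption that the Unity-to-scene-map conversion is known as a 4x4 matrix (none of these names come from the disclosure):

```python
import numpy as np

def unity_to_scene(T_scene_from_unity: np.ndarray,
                   pose_unity: np.ndarray) -> np.ndarray:
    # Convert a 4x4 homogeneous pose from the Unity coordinate system to
    # the coordinate system corresponding to the 3D scene map.
    return T_scene_from_unity @ pose_unity

def current_relative_pose(T_device_scene: np.ndarray,
                          T_guide_scene: np.ndarray) -> np.ndarray:
    # Current relative pose of the guide expressed in the device frame:
    # T_rel = inv(T_device) @ T_guide.
    return np.linalg.inv(T_device_scene) @ T_guide_scene
```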
  • For example, in a case where the target relative pose relationship includes the distance relationship between the AR device and the virtual guide, if the relative distance between the AR device and the virtual guide is less than the first distance threshold, and/or the relative distance between the AR device and the virtual guide is greater than the second distance threshold, it is determined that the current relative pose relationship does not conform to the target relative pose relationship.
  • In a case where the target relative pose relationship includes the angular relationship between the AR device and the virtual guide, if the included angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide is greater than the set angle, and/or a certain side of the virtual guide does not face the direction in which the AR device is located, it is determined that the current relative pose relationship does not conform to the target relative pose relationship.
  • In a case where the target relative pose relationship includes the height relationship between the AR device and the virtual guide, if the distance between the virtual guide and at least one side of the space corresponding to the shooting field of view of the AR device is less than or equal to the preset distance, and/or the height of the virtual guide above the ground is not equal to the preset height, it is determined that the current relative pose relationship does not conform to the target relative pose relationship.
  • the updated target pose information of the virtual guide includes the target position that the virtual guide will reach and the posture of the virtual guide relative to the AR device after reaching the target position.
  • For example, the nearest position point that satisfies the target relative pose relationship may be determined as the target position to be reached by the virtual guide, and after the virtual guide reaches the target position, the target face of the virtual guide faces the AR device.
  • Here, the target face of the virtual guide is a preset side of the virtual guide; for example, if the virtual guide is a humanoid guide, the target face is, for example, the side where the face is located.
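  • A minimal sketch of this update, covering only the distance constraint (angle and height constraints omitted; all names are illustrative assumptions, and d_min is assumed positive):

```python
import numpy as np

def updated_target_pose(device_pos, guide_pos, d_min: float, d_max: float):
    """Choose the position nearest to the guide's current position whose
    distance to the AR device lies in [d_min, d_max] (by clamping the
    current distance along the device-to-guide direction), and orient the
    guide's target face back toward the AR device."""
    device_pos = np.asarray(device_pos, float)
    guide_pos = np.asarray(guide_pos, float)
    offset = guide_pos - device_pos
    dist = np.linalg.norm(offset)
    # Degenerate case: guide exactly at the device; pick a default direction.
    direction = offset / dist if dist > 0 else np.array([0.0, 0.0, 1.0])
    target_pos = device_pos + direction * np.clip(dist, d_min, d_max)
    facing = device_pos - target_pos
    facing = facing / np.linalg.norm(facing)  # target face toward the device
    return target_pos, facing
```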
  • target state information of the virtual guide may also be determined based on a predetermined guide route, current state information of the AR device, and the relative pose relationship of the target.
  • the user can pre-determine a departure place and a destination
  • the AR device can plan a guiding route for the user based on the user's pre-determined departure place, destination, and a three-dimensional scene map.
  • the target state information of the virtual guide may be constrained by the guide route.
  • For example, in a case where the current relative pose relationship between the virtual guide and the AR device does not conform to the target relative pose relationship, the position closest to the guidance route that satisfies the target relative pose relationship may be taken as the target position to be reached by the virtual guide; after the virtual guide reaches the target position, the AR special effect corresponding to the virtual guide can instruct the user to travel in the direction guided by the guidance route.
  • For example, the humanoid guide shows a special effect of an arm pointing in the direction guided by the guidance route.
  • an embodiment of the present disclosure further provides a specific method for determining target state information of the virtual guide based on the current state information of the AR device and the relative pose relationship of the target, including:
  • In a case where the current state information of the AR device indicates that the AR device is in an AR guidance start state, determining the target state information of the virtual guide includes: the virtual guide is located at a position at a target relative distance from the AR device, and the virtual guide faces the direction in which the AR device is located.
  • In a case where the current state information of the AR device indicates that the AR device is in the AR guidance process, determining the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the orientation of the virtual guide is the direction of the predetermined guidance route.
  • In a case where the current state information of the AR device indicates that the AR device has reached the end point of the guidance route, determining the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the virtual guide faces the direction in which the AR device is located.
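  • The three cases can be summarized in a small dispatch function; this is a sketch whose phase names and placement rule (directly ahead of the device) are assumptions for illustration:

```python
import numpy as np

START, GUIDING, ARRIVED = "start", "guiding", "arrived"  # guidance phases

def target_state(phase: str, device_pos, device_forward, route_dir,
                 target_distance: float):
    """Place the guide at the target relative distance from the device;
    during guidance it faces along the predetermined guidance route,
    otherwise it turns toward the direction in which the device is
    located."""
    device_pos = np.asarray(device_pos, float)
    pos = device_pos + np.asarray(device_forward, float) * target_distance
    if phase == GUIDING:
        facing = np.asarray(route_dir, float)   # direction of the route
    else:                                       # START or ARRIVED
        facing = device_pos - pos               # face the AR device
    return pos, facing / np.linalg.norm(facing)
```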
  • an embodiment of the present disclosure further provides a specific method for displaying the AR special effect of the virtual guide in the AR device based on the target state information, including:
  • an AR special effect for the virtual guide to transition from the current state to the target state is displayed in the AR device.
  • When displaying the virtual guide in the AR device based on the target state information and the current state information of the virtual guide, for example, the process of changing from the state indicated by the current state information of the virtual guide to the state indicated by the target state information can be displayed. For example, in the current state, the virtual guide faces the direction of the AR device and is located at a first position point of the target scene; in the process of controlling the virtual guide to transition from the current state to the target state, the virtual guide is controlled to turn toward the second position point corresponding to the target state and to move to the second position point along the path from the first position point to the second position point;
  • during the movement, the virtual guide presents a walking or flying special effect; after the virtual guide reaches the second position point, the virtual guide is controlled to turn toward the direction of the AR device, completing the AR special effect of transitioning from the current state to the target state.
  • The guidance method provided by the embodiment of the present disclosure may include the following steps:
  • Step a: The virtual guide displayed in the AR device faces the user.
  • For example, the virtual guide displayed in the graphical interactive interface 12 of the AR device may face the user, that is, the direction in which the AR device is located.
  • Step b: The AR device determines whether an instruction to trigger the guidance task is received.
  • Here, the AR device may detect the user's instruction to trigger the guidance task.
  • The trigger instruction may be a click operation of the user clicking on the graphical interactive interface of the AR device, a touch operation instruction of the user on the graphical interactive interface, a voice instruction, or the like, which is not limited in this embodiment of the present disclosure.
  • If an instruction to trigger the guidance task is received, step c is performed; if no instruction to trigger the guidance task is received, step a is continued to keep the virtual guide displayed in the AR device facing the user.
  • Step c: The AR device controls the virtual guide to turn toward the direction of the predetermined guidance route.
  • That is, the orientation of the virtual guide is the direction of the predetermined guidance route.
  • Step d: The AR device controls the virtual guide to move along the guidance route.
  • Here, the AR device can control the virtual guide to move along the guidance route, with the virtual guide presenting a walking or flying special effect.
  • Step e: When the virtual guide moves to the end point of the guidance route, the virtual guide is controlled to turn from facing the direction of the guidance route to facing the user.
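  • Steps a-e amount to a small state machine; the sketch below (with assumed state and event names, not from the disclosure) shows one way to drive it:

```python
# States of the virtual guide during the guidance task.
IDLE, TURN_TO_ROUTE, MOVING, AT_END = range(4)

def next_state(state: int, trigger_received: bool, reached_end: bool) -> int:
    if state == IDLE:            # step a: the guide faces the user
        # step b: wait for an instruction that triggers the guidance task
        return TURN_TO_ROUTE if trigger_received else IDLE
    if state == TURN_TO_ROUTE:   # step c: turn toward the guidance route
        return MOVING
    if state == MOVING:          # step d: move along the guidance route
        # step e: at the end point, turn back toward the user
        return AT_END if reached_end else MOVING
    return AT_END
```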
  • For example, when displaying, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state, the AR device may display this AR special effect after a preset time period after the AR device changes from the historical state to the current state, based on the target state information and the current state information of the virtual guide.
  • In this way, the virtual guide is made to move later than the user, creating the effect that the virtual guide moves in response to the user's movement, which achieves a better guiding effect.
  • Another embodiment of the present disclosure further provides another guidance method, including:
  • S301: Acquire obstacle position information of an obstacle relative to the AR device;
  • S302: In a case where a target obstacle exists in the target image captured by the AR device, determine the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
  • S303: Display the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
  • For example, a pre-trained obstacle detection model can be used to perform obstacle detection on the target image and determine the pose information of the target obstacle relative to the AR device; the obstacle position information of the target obstacle is then determined based on the pose information of the target obstacle relative to the AR device and the current pose information of the AR device in the target scene.
  • Obstacles may also be pre-marked in the three-dimensional scene map; after the current pose information of the AR device in the target scene is determined, the target obstacles that may affect the AR guidance are determined based on the position information of the pre-marked obstacles, and the obstacle position information of the target obstacles is determined.
  • the following methods can be used to determine the target state information of the virtual guide:
  • determining, based on the current state information and the target relative pose relationship, that the initial target position of the virtual guide is within the position range corresponding to the target obstacle;
  • determining, according to the determined initial target position and the obstacle position information, that the target position in the target state information of the virtual guide is a position outside the position range corresponding to the target obstacle and closest to the initial target position.
  • That is, a position where the virtual guide avoids the obstacle is selected first, so that the virtual guide does not collide with the obstacle or pass beyond an occluder of the target scene; among such positions, one satisfying the target relative pose relationship is selected preferentially. If no position currently satisfies the target relative pose relationship, the target position of the virtual guide can be determined as a position that can be brought within the shooting field of view of the AR device by turning the AR device.
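  • A sketch of the "closest position outside the obstacle's range" rule, under the simplifying assumption that the position range of the target obstacle is a circle in the ground plane (the disclosure does not specify the shape of the range):

```python
import numpy as np

def avoid_obstacle(initial_target, obstacle_center, obstacle_radius: float):
    """If the initial target position of the guide falls within the
    obstacle's position range, move it to the closest point outside that
    range; otherwise keep the initial target position unchanged."""
    initial_target = np.asarray(initial_target, float)
    obstacle_center = np.asarray(obstacle_center, float)
    offset = initial_target - obstacle_center
    dist = np.linalg.norm(offset)
    if dist >= obstacle_radius:
        return initial_target          # already outside the position range
    if dist == 0.0:                    # degenerate case: pick any direction
        offset, dist = np.array([1.0, 0.0]), 1.0
    # Closest position outside the range to the initial target position.
    return obstacle_center + offset / dist * obstacle_radius
```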
  • As shown in FIG. 7a, obstacle A is 41 and the AR device is 42; the area of the shooting field of view is M1, and the area satisfying the target relative pose relationship is M2; 43 represents the current position of the virtual guide, and 44 represents the target position of the virtual guide. In this example, the target state information determined for the virtual guide can both avoid the position of the obstacle and satisfy the target relative pose relationship.
  • As shown in FIG. 7b, obstacle B is 45, the shooting field of view of the AR device 42 is the area between the dotted lines, 46 represents the current position of the virtual guide, and 47 represents the target position of the virtual guide. In this example, the target state information determined for the virtual guide can avoid the position of the obstacle but cannot satisfy the target relative pose relationship; instead, the target position of the virtual guide is determined as a position that falls within the field of view of the AR device by turning the AR device.
  • In the above methods of the specific implementations, the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • The embodiment of the present disclosure also provides a guidance apparatus in the AR scene corresponding to the guidance method in the AR scene. Since the principle by which the apparatus solves the problem is similar to that of the above guidance method, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
  • The apparatus includes: an acquisition part 51, a determination part 52, and a display part 53, wherein:
  • the acquiring part 51 is configured to acquire the current state information of the AR device
  • a determination part 52 configured to determine target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
  • the display part 53 is configured to display the AR special effect of the virtual guide in the AR device based on the target state information.
  • the current state information of the AR device includes current pose information of the AR device
  • the target state information includes target pose information of the virtual guide
  • the determining part 52 is further configured to determine the current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and the current pose information of the virtual guide;
  • and, in a case where the current relative pose relationship does not conform to the target relative pose relationship, determine updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
  • the current relative pose relationship does not conform to the target relative pose relationship includes at least one of the following:
  • the relative distance between the AR device and the virtual guide is less than a first distance threshold
  • the relative distance between the AR device and the virtual guide is greater than a second distance threshold
  • the included angle between the current orientation of the AR device and the connection direction of the AR device to the virtual guide is greater than the set angle.
  • the determining part 52 is further configured to determine the target state information of the virtual guide based on the predetermined guidance route, the current state information of the AR device, and the target relative pose relationship.
  • the determining part 52 is further configured to, in a case where the current state information of the AR device indicates that the AR device is in the AR guidance start state, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at a target relative distance from the AR device, and the orientation of the virtual guide is opposite to the direction in which the AR device is located;
  • in a case where the current state information of the AR device indicates that the AR device is in the AR guidance process, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the orientation of the virtual guide is the direction of the predetermined guidance route;
  • in a case where the current state information of the AR device indicates that the AR device has reached the end point of the guidance route, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the virtual guide faces the direction in which the AR device is located.
  • the display part 53 is further configured to display, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
  • the display part 53 is further configured to, after a preset time period after the AR device changes from the historical state to the current state, display, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
  • the obtaining part 51 is further configured to obtain the obstacle position information of the obstacle relative to the AR device;
  • the determining part 52 is further configured to, in a case where a target obstacle exists in the target image captured by the AR device, determine the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
  • the display part 53 is further configured to display the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
  • the determining part 52 is further configured to determine, based on the current state information and the target relative pose relationship, that the initial target position of the virtual guide is within the position range corresponding to the target obstacle;
  • and to determine, according to the determined initial target position and the obstacle position information, that the target position in the target state information of the virtual guide is a position outside the position range corresponding to the target obstacle and closest to the initial target position.
  • An embodiment of the present disclosure further provides a computer device. Referring to FIG. 9, the computer device provided by the embodiment of the present disclosure includes a processor and a memory storing machine-readable instructions; when the machine-readable instructions are executed by the processor, the following steps are performed: acquiring current state information of the AR device; determining target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device; and displaying the AR special effect of the virtual guide in the AR device based on the target state information.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the guidance method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • The computer program product of the guidance method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the steps of the guidance method described in the above method embodiments; reference may be made to the foregoing method embodiments, and details are not described herein again.
  • Embodiments of the present disclosure also provide a computer program, which implements any one of the methods in the foregoing embodiments when the computer program is executed by a processor.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • the computer software products are stored in a storage medium, including Several instructions are used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in various embodiments of the present disclosure.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A guidance method and apparatus in an AR scene, a computer device, and a storage medium, wherein the method includes: acquiring current state information of an AR device (S101); determining target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device (S102); and displaying an AR special effect of the virtual guide in the AR device based on the target state information (S103). In this way, richer and more intuitive guidance information can be provided to the user through the AR special effect of the virtual guide, improving guidance efficiency.

Description

Guidance method and apparatus in AR scene, computer device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure claims priority to Chinese Patent Application No. 202011012419.0, filed with the Chinese Patent Office on September 23, 2020 and entitled "Guidance method and apparatus in AR scene, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of augmented reality, and in particular to a guidance method and apparatus in an AR scene, a computer device, and a storage medium.
BACKGROUND
When touring a scenic spot or heading to a destination, a user generally relies on an audio guide or on physical or electronic signs set up in the target scene. These approaches suffer from defects such as poor intuitiveness and limited guidance information, resulting in low guidance efficiency.
SUMMARY
Embodiments of the present disclosure provide at least a guidance method and apparatus in an AR scene, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a guidance method in an augmented reality (AR) scene, including:
acquiring current state information of an AR device;
determining target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
displaying an AR special effect of the virtual guide in the AR device based on the target state information.
In this way, the target state information of the virtual guide is determined from the current state information of the AR device and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device, and the AR special effect of the virtual guide is displayed in the AR device according to the target state information, so that richer and more intuitive guidance information can be provided to the user through the AR special effect of the virtual guide, improving guidance efficiency.
In a possible implementation, the current state information of the AR device includes current pose information of the AR device, and the target state information includes target pose information of the virtual guide;
the determining the target state information of the virtual guide based on the current state information and the relative pose relationship between the virtual guide displayed in the AR device and the AR device includes:
determining a current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and current pose information of the virtual guide;
in a case where the current relative pose relationship does not conform to the target relative pose relationship, determining updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
In this way, by determining the current relative pose relationship between the virtual guide and the AR device as well as the target relative pose relationship, the updated target pose information of the virtual guide is determined, so that during guidance the virtual guide can continuously move from its current position to a new position satisfying the target relative pose relationship, improving the AR rendering effect during the guidance process.
In a possible implementation, the current relative pose relationship not conforming to the target relative pose relationship includes at least one of the following:
the relative distance between the AR device and the virtual guide is less than a first distance threshold;
the relative distance between the AR device and the virtual guide is greater than a second distance threshold;
the included angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide is greater than a set angle.
In this way, by constraining the relative pose between the virtual guide and the AR device through distance and/or angle, the virtual guide can present a better guiding effect in the AR picture.
In a possible implementation, the determining the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device includes:
determining the target state information of the virtual guide based on a predetermined guidance route, the current state information of the AR device, and the target relative pose relationship.
In this way, the target state information of the virtual guide is determined through the guidance route, so that the virtual guide can guide the user along the guidance route, enabling the user to reach the destination more quickly.
In a possible implementation, determining the target state information of the virtual guide based on a predetermined guidance route, the current state information of the AR device, and the target relative pose relationship includes:
in a case where the current state information of the AR device indicates that the AR device is in an AR guidance start state, determining that the target state information of the virtual guide includes: the virtual guide is located at a position at a target relative distance from the AR device, and the orientation of the virtual guide is opposite to the direction in which the AR device is located;
in a case where the current state information of the AR device indicates that the AR device is in the AR guidance process, determining that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the orientation of the virtual guide is the direction of the predetermined guidance route;
in a case where the current state information of the AR device indicates that the AR device has reached the end point of the guidance route, determining that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the virtual guide faces the direction in which the AR device is located.
In this way, richer display information is provided for the guidance process, achieving a better guiding effect.
In a possible implementation, displaying the AR special effect of the virtual guide in the AR device based on the target state information includes:
displaying, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and current state information of the virtual guide.
In this way, by showing the user the change of the virtual guide from the current position to the target position, the guiding effect of the virtual guide in the AR picture can be made more realistic.
In a possible implementation, displaying, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide includes:
after a preset time period after the AR device changes from a historical state to the current state, displaying, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
In this way, the movement of the virtual guide lags behind the movement of the user, creating the effect that the virtual guide moves in response to the user's movement, which achieves a better guiding effect.
In a possible implementation, the guidance method further includes:
acquiring obstacle position information of an obstacle relative to the AR device;
the determining the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device includes:
in a case where a target obstacle exists in a target image captured by the AR device, determining the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
the displaying the AR special effect of the virtual guide in the AR device based on the target state information includes:
displaying the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
In this way, the AR effect of the virtual guide avoiding the obstacle can be presented in the AR picture, achieving a more realistic display effect.
In a possible implementation, determining the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship includes:
determining, based on the current state information and the target relative pose relationship, that an initial target position of the virtual guide is within a position range corresponding to the target obstacle;
determining, according to the determined initial target position and the obstacle position information of the target obstacle, that the target position in the target state information of the virtual guide is a position outside the position range corresponding to the target obstacle and closest to the initial target position.
In a second aspect, an embodiment of the present disclosure further provides a guidance apparatus in an AR scene, including:
an acquisition part configured to acquire current state information of an AR device;
a determination part configured to determine target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
a display part configured to display an AR special effect of the virtual guide in the AR device based on the target state information.
In a possible implementation, the current state information of the AR device includes current pose information of the AR device, and the target state information includes target pose information of the virtual guide;
the determination part is further configured to determine a current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and current pose information of the virtual guide; and, in a case where the current relative pose relationship does not conform to the target relative pose relationship, determine updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
In a possible implementation, the current relative pose relationship not conforming to the target relative pose relationship includes at least one of the following:
the relative distance between the AR device and the virtual guide is less than a first distance threshold;
the relative distance between the AR device and the virtual guide is greater than a second distance threshold;
the included angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide is greater than a set angle.
In a possible implementation, the determination part is further configured to determine the target state information of the virtual guide based on a predetermined guidance route, the current state information of the AR device, and the target relative pose relationship.
In a possible implementation, the determination part is further configured to, in a case where the current state information of the AR device indicates that the AR device is in an AR guidance start state, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at a target relative distance from the AR device, and the orientation of the virtual guide is opposite to the direction in which the AR device is located;
in a case where the current state information of the AR device indicates that the AR device is in the AR guidance process, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the orientation of the virtual guide is the direction of the predetermined guidance route;
in a case where the current state information of the AR device indicates that the AR device has reached the end point of the guidance route, determine that the target state information of the virtual guide includes: the virtual guide is located at a position at the target relative distance from the AR device, and the virtual guide faces the direction in which the AR device is located.
In a possible implementation, the display part is further configured to display, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
In a possible implementation, the display part is further configured to, after a preset time period after the AR device changes from a historical state to the current state, display, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
In a possible implementation, the acquisition part is further configured to acquire obstacle position information of an obstacle relative to the AR device;
the determination part is further configured to, in a case where a target obstacle exists in a target image captured by the AR device, determine the target state information of the virtual guide based on the current state information after the AR device avoids the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
the display part is further configured to display the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
In a possible implementation, the determination part is further configured to determine, based on the current state information and the target relative pose relationship, that an initial target position of the virtual guide is within a position range corresponding to the target obstacle;
and to determine, according to the determined initial target position and the obstacle position information of the target obstacle, that the target position in the target state information of the virtual guide is a position outside the position range corresponding to the target obstacle and closest to the initial target position.
In a third aspect, an optional implementation of the embodiments of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps in the above first aspect or any possible implementation of the first aspect.
In a fourth aspect, an optional implementation of the embodiments of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when run, performs the steps in the above first aspect or any possible implementation of the first aspect.
In a fifth aspect, an optional implementation of the embodiments of the present disclosure further provides a computer program, including computer-readable code, where when the computer-readable code runs in a computer device, a processor in the computer device executes the code to implement the guidance method in the first aspect.
For descriptions of the effects of the above guidance apparatus, computer device, computer-readable storage medium, and computer program, reference may be made to the description of the above guidance method, which will not be repeated here.
To make the above objectives, features, and advantages of the present disclosure clearer and easier to understand, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions of the embodiments of the present disclosure more clearly, the following briefly introduces the drawings required in the embodiments. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a schematic diagram of an application scenario of an AR device provided by an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a guidance method in an AR scene provided by an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a specific method for determining target information in the guidance method provided by an embodiment of the present disclosure;
FIG. 4 shows a flowchart of a guidance method provided by an embodiment of the present disclosure;
FIG. 5a shows a schematic diagram of a graphical display interface scene provided by an embodiment of the present disclosure;
FIG. 5b shows a schematic diagram of another graphical display interface scene provided by an embodiment of the present disclosure;
FIG. 6 shows a flowchart of another guidance method provided by an embodiment of the present disclosure;
FIG. 7a shows an example of the mutual positional relationship among an obstacle, an AR device, and a virtual AR special effect in another guidance method provided by an embodiment of the present disclosure;
FIG. 7b shows an example of the mutual positional relationship among an obstacle, an AR device, and a virtual AR special effect in yet another guidance method provided by an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a guidance apparatus provided by an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below in conjunction with the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
It has been found through research that guidance signs are provided for users in many scenes; for example, in shopping malls, scenic areas, exhibition halls, stations, and airports, corresponding physical or electronic signs are set up in the scene to guide users to their desired destinations. The guidance information provided in this way is limited: when a user wants to reach a destination, the user needs to repeatedly check guidance signs set at different locations and find the way, so reaching a destination takes considerable time and guidance efficiency is low. At the same time, this guidance approach also has the problem of poor intuitiveness: the user is likely to miss some guidance information while finding the way and thus be unable to locate the corresponding destination for a long time, which likewise causes low guidance efficiency.
Based on the above research, the present disclosure provides a guidance method in an augmented reality (AR) scene, in which an AR special effect of a virtual guide is determined from the current state information of an AR device and the AR special effect is used to guide the user, so that richer and more intuitive guidance information can be provided to the user, improving guidance efficiency.
The defects of the above solutions are all results obtained by the inventors after practice and careful study; therefore, the discovery process of the above problems and the solutions proposed below in the present disclosure for the above problems should all be regarded as contributions made by the inventors to the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
To facilitate understanding of this embodiment, a guidance method disclosed in an embodiment of the present disclosure is first introduced in detail. The execution subject of the guidance method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, which includes, for example, an AR device, a server, or another processing device. The AR device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the guidance method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The application scenarios to which the embodiments of the present disclosure are applicable are described below by way of example.
FIG. 1 is a schematic diagram of an application scenario of an AR device according to an embodiment of the present disclosure. As shown in FIG. 1, the AR device 10 may be set in a target scene and provide guidance information for a user in the target scene. In addition, the AR device is further provided with a camera 11, which may be used to capture images in the target scene so that the AR device can construct a virtual scene corresponding to the target scene. The AR device may be provided with a graphical interactive interface 12, which can display a virtual guide and AR special effects of the virtual guide. The AR special effect of the virtual guide may be an effect of the virtual guide moving in the virtual scene so as to guide the user in the target scene.
The guidance method provided by the embodiments of the present disclosure is described below.
Referring to FIG. 2, which shows a flowchart of the guidance method provided by an embodiment of the present disclosure, the method includes steps S101 to S103, wherein:
S101: Acquire current state information of an AR device;
S102: Determine target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
S103: Display an AR special effect of the virtual guide in the AR device based on the target state information.
In the embodiments of the present disclosure, the target state information of the virtual guide is determined according to the current state information of the AR device and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device, and according to the target state information of the virtual guide, the AR special effect of the virtual guide is displayed in the AR device, so that the AR special effect of the virtual guide can provide the user with richer and more intuitive guidance information and improve the guidance efficiency.
The above S101 to S103 are described in detail below.
Regarding S101 above, the current state information of the AR device includes, for example, current pose information of the AR device.
For example, the current pose information of the AR device in the target scene may be determined in the following manner:
acquiring a target image obtained by the AR device capturing the target scene;
determining the current pose information of the AR device based on the target image and a pre-constructed three-dimensional scene map of the target scene.
The current pose information of the AR device includes the three-dimensional coordinate value, in the scene coordinate system corresponding to the three-dimensional scene map established for the target scene, of the optical center of the camera deployed on the AR device, and optical axis orientation information of the camera; the optical axis orientation information may include, for example, the yaw angle and pitch angle of the optical axis of the camera on the AR device in the scene coordinate system, or the optical axis orientation information may be, for example, a vector in the scene coordinate system.
When determining the current pose information of the AR device based on the target image and the pre-constructed three-dimensional scene map of the target scene, for example, feature point recognition may be performed on the target image; after first feature points in the target image are obtained, they are matched against second feature points in the pre-constructed three-dimensional scene map of the target scene, and target second feature points that can match the first feature points are determined from the second feature points. At this point, the object represented by a target second feature point is the same object as the object represented by the corresponding first feature point.
Here, the three-dimensional scene map of the target scene may be obtained, for example, by either of the following methods: simultaneous localization and mapping (SLAM) modeling, or structure-from-motion (SFM) modeling. SLAM modeling means that the AR device starts moving from an unknown position in an unknown environment, locates itself during movement according to its position and the scene images collected in real time, and builds an incremental map on the basis of its own localization. SFM modeling determines the spatial and geometric relationships of the target scene from the scene images collected by the camera during movement.
For example, when constructing the three-dimensional scene map of the target scene, a three-dimensional coordinate system is established with a preset coordinate point as the origin, where the preset coordinate point may be a coordinate point of a building in the target scene or the coordinate point where the camera is located when capturing the target scene;
the camera collects video images, and the three-dimensional scene map of the target scene is constructed by tracking a sufficient number of feature points in the camera's video frames, where the feature points in the constructed three-dimensional scene map of the target scene likewise include the feature point information of the above objects.
The first feature points are matched against a sufficient number of second feature points in the three-dimensional scene map of the target scene to determine the target second feature points, and the three-dimensional coordinate values of the target second feature points in the three-dimensional scene map of the target scene are read. Then, based on the three-dimensional coordinate values of the target second feature points, the current pose information of the AR device in the scene coordinate system is determined.
Specifically, when determining the current pose information of the AR device in the scene coordinate system based on the three-dimensional coordinate values of the target second feature points, the camera imaging principle may be used, for example, to recover the current pose information of the AR device in the three-dimensional scene map from the three-dimensional coordinate values of the target second feature points in the three-dimensional scene map.
Here, when recovering the current pose information of the AR device in the three-dimensional scene map using the camera imaging principle, for example, the target pixel points corresponding to the first feature points in the target image may be determined; the current pose information of the AR device in the scene coordinate system is then determined based on the two-dimensional coordinate values of the target pixel points in the image coordinate system and the three-dimensional coordinate values of the target second feature points in the scene coordinate system.
Specifically, a camera coordinate system may be constructed using the AR device, where the origin of the camera coordinate system is the point at which the optical center of the camera in the AR device is located, the z axis is the line along which the camera's optical axis lies, and the plane that is perpendicular to the optical axis and contains the optical center is the plane of the x and y axes. A depth detection algorithm can be used to determine the depth value corresponding to each pixel point in the target image; after the target pixel points are determined in the target image, the depth values of the target pixel points in the camera coordinate system can be obtained, and thus the three-dimensional coordinate values in the camera coordinate system of the first feature points corresponding to the target pixel points can be obtained. Then, the three-dimensional coordinate values of the first feature points in the camera coordinate system and the three-dimensional coordinate values of the first feature points in the scene coordinate system are used to recover the coordinate values of the origin of the camera coordinate system in the scene coordinate system, that is, the position information in the current pose information of the AR device in the scene coordinate system; and the z axis of the camera coordinate system is used to determine the angles of this z axis relative to the coordinate axes of the scene coordinate system, yielding the attitude information in the current pose information of the AR device in the scene coordinate system.
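One classical way to carry out this recovery is a perspective-n-point (PnP) solver over the 2D-3D correspondences; the sketch below uses OpenCV's solvePnP and assumes the camera intrinsic matrix is known, which the text above does not specify:

```python
import cv2
import numpy as np

def recover_pose(points_2d, points_3d, camera_matrix):
    """points_2d: Nx2 pixel coordinates of the target pixel points;
    points_3d: Nx3 scene coordinates of the matched target second feature points."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, None)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)
    # Optical center (origin of the camera coordinate system) expressed in
    # the scene coordinate system: the position part of the pose.
    position = (-rot.T @ tvec).ravel()
    # Camera z axis (optical axis) in scene coordinates: the attitude part.
    optical_axis = rot.T @ np.array([0.0, 0.0, 1.0])
    return position, optical_axis
```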
In addition, when localizing the AR device based on SLAM, a SLAM space may be constructed based on multiple video frame images acquired by the AR device, and then the coordinate system corresponding to the SLAM space is aligned with the coordinate system corresponding to the three-dimensional scene map of the target scene. Since the specific pose information of the AR device in the SLAM space can be determined based on SLAM, the actual position of the AR device in the three-dimensional scene map can then be determined based on the alignment relationship between the SLAM coordinate system and the three-dimensional scene map.
It should be noted here that the three-dimensional scene map established for the target scene generally corresponds to the actual space of the target scene at a certain scale.
Therefore, after the pose of the AR device in the three-dimensional scene map established for the target scene is determined, the current pose information of the AR device in the target scene can be determined based on the above correspondence.
For S102 above: the target state information of the virtual guide includes the target pose information of the virtual guide, that is, the virtual guide transitions from its current state to another state represented by the target pose information.
The target relative pose relationship between the virtual guide and the AR device may be, for example, a preset relative pose relationship to be maintained between the virtual guide and the AR device.
Exemplarily, the target relative pose relationship includes at least one of the following:
a distance relationship between the AR device and the virtual guide, an angle relationship between the AR device and the virtual guide, and a height relationship between the AR device and the virtual guide.
The distance relationship between the AR device and the virtual guide includes, for example: the relative distance between the AR device and the virtual guide being greater than a first distance threshold, and/or the relative distance between the AR device and the virtual guide being less than a second distance threshold, where the first distance threshold is less than the second distance threshold.
The first distance threshold ensures that the distance between the AR device and the virtual guide is not too small, preventing the AR special effect of the virtual guide from occupying too large a proportion of the display area in the graphical interaction interface of the AR device; the second distance threshold ensures that the distance between the AR device and the virtual guide is not too large.
The angle relationship between the AR device and the virtual guide includes, for example: the angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide being less than a set angle; and/or a certain side of the virtual guide facing the direction in which the AR device is located.
Here, the current orientation of the AR device is, for example, the direction in which the optical axis of the camera in the AR device points; the direction of the line from the AR device to the virtual guide is, for example, the direction of the line from the optical center of the camera in the AR device to the center of the virtual guide. Through the set angle, the virtual guide is kept within the field of view of the AR device, providing a better guiding effect for the user.
The height relationship between the AR device and the virtual guide includes, for example: the distance between the virtual guide and at least one side of the space corresponding to the shooting field of view of the AR device being greater than a preset distance; and/or the height of the virtual guide above the ground being equal to a preset height.
Here, the virtual guide may be, for example, an animated character that can fly in the air, such as a "little flying elephant" or a "flower fairy", whose pose may change along with changes in the pose of the AR device. The animated character is located within the shooting field of view of the AR device, with its distance from the top side of the space corresponding to the shooting field of view greater than the preset distance and its distance from the bottom side of that space greater than the preset distance.
The virtual guide may also be, for example, an animated character that walks on the ground, such as a "robot guide", whose height above the ground always remains the same regardless of how the shooting angle of the AR device changes.
Exemplarily, referring to FIG. 3, an embodiment of the present disclosure provides a specific method for determining the target state information of the virtual guide, including:
S201: determining a current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and the current pose information of the virtual guide.
In specific implementation, the initial pose of the virtual guide is, for example, in the Unity coordinate system, Unity being the three-dimensional engine used to construct the three-dimensional scene map of the target scene and the virtual guide. Therefore, when determining the current relative pose relationship between the virtual guide and the AR device, the initial pose of the virtual guide in the Unity coordinate system can first be determined; then, according to the transformation relationship between the Unity coordinate system and the coordinate system corresponding to the three-dimensional scene map, the initial pose of the virtual guide in the Unity coordinate system can be converted into the current pose information in the coordinate system corresponding to the three-dimensional scene map. Through S101 above, the current pose information of the AR device in the three-dimensional scene map can be determined; then, the current relative pose relationship between the virtual guide and the AR device can be determined based on the current pose information of the virtual guide in the coordinate system corresponding to the three-dimensional scene map and the current pose information of the AR device in the three-dimensional scene map.
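As an illustration only, the conversion between the Unity coordinate system and the scene coordinate system can be expressed as a fixed homogeneous transform; the matrix T_unity_to_scene below is assumed to have been obtained when the two coordinate systems were aligned:

```python
import numpy as np

def unity_to_scene(position_unity, T_unity_to_scene):
    """Convert the virtual guide's position from Unity coordinates to the
    coordinate system of the 3D scene map via a 4x4 transform."""
    p = np.append(np.asarray(position_unity, dtype=np.float64), 1.0)
    return (T_unity_to_scene @ p)[:3]
```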
S202: when the current relative pose relationship does not conform to the target relative pose relationship, determining updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
Exemplarily, when the target relative pose relationship includes the distance relationship between the AR device and the virtual guide, if the relative distance between the AR device and the virtual guide is less than the first distance threshold, and/or the relative distance between the AR device and the virtual guide is greater than the second distance threshold, it is determined that the current relative pose relationship does not conform to the target relative pose relationship.
When the target relative pose relationship includes the angle relationship between the AR device and the virtual guide, if the angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide is greater than the set angle, and/or a certain side of the virtual guide does not face the direction in which the AR device is located, it is determined that the current relative pose relationship does not conform to the target relative pose relationship.
When the target relative pose relationship includes the height relationship between the AR device and the virtual guide, if the distance between the virtual guide and at least one side of the space corresponding to the shooting field of view of the AR device is less than or equal to the preset distance, and/or the height of the virtual guide above the ground is not equal to the preset height, it is determined that the current relative pose relationship does not conform to the target relative pose relationship.
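The distance and angle checks above could be combined as in the following minimal sketch; the threshold values are assumed configuration parameters, which the text names but does not fix:

```python
import numpy as np

def conforms(device_pos, device_dir, guide_pos,
             d_min=1.0, d_max=5.0, max_angle_deg=30.0):
    """Return True if the current relative pose relationship conforms to the
    distance and angle parts of the target relative pose relationship."""
    offset = guide_pos - device_pos
    distance = np.linalg.norm(offset)
    if distance < d_min or distance > d_max:
        return False   # distance relationship violated
    cos_angle = np.dot(device_dir, offset / distance)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg   # angle relationship satisfied
```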
The updated target pose information of the virtual guide includes the target position the virtual guide is to reach, and the attitude of the virtual guide relative to the AR device after reaching the target position.
Exemplarily, the position point that is closest to the virtual guide and satisfies the target relative pose relationship may be determined as the target position the virtual guide is to reach, and after reaching the target position, the target face of the virtual guide faces the AR device. Here, the target face of the virtual guide is a preset side of the virtual guide; for example, if the virtual guide is a humanoid guide, the target face is, for example, the side where the face is located.
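A sketch of selecting such a target position follows; as an assumption of this sketch, the set of positions satisfying the target relative pose relationship is approximated by sampling a circle of radius target_distance around the device at the guide's height:

```python
import numpy as np

def nearest_valid_position(device_pos, guide_pos, target_distance,
                           num_samples=360):
    """Among sampled candidates at the target relative distance from the AR
    device, return the one closest to the virtual guide's current position."""
    angles = np.linspace(0.0, 2.0 * np.pi, num_samples, endpoint=False)
    candidates = device_pos + target_distance * np.stack(
        [np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1)
    dists = np.linalg.norm(candidates - guide_pos, axis=1)
    return candidates[np.argmin(dists)]
```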
In another embodiment of the present disclosure, the target state information of the virtual guide may also be determined based on a predetermined guide route, the current state information of the AR device, and the target relative pose relationship.
Exemplarily, the user may specify a departure point and a destination in advance, and the AR device can plan a guide route for the user based on the user's predetermined departure point and destination and the three-dimensional scene map; a sketch of one possible planning approach follows.
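Purely as an illustration of such planning, a shortest-path search could be run over a navigation graph extracted from the three-dimensional scene map; the adjacency-dict graph format below is an assumption of this sketch, not something the disclosure prescribes:

```python
import heapq

def plan_route(graph, start, goal):
    """Dijkstra search over graph: {node: [(neighbor, distance), ...]}.
    Returns the node sequence of the guide route, or None if unreachable."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return None
```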
When determining the target state information for the virtual guide, the target state information of the virtual guide may be constrained by the guide route. Exemplarily, when the current relative pose relationship between the virtual guide and the AR device does not conform to the target relative pose relationship, the position point that is closest to the guide route and satisfies the target relative pose relationship may be taken as the target position the virtual guide is to reach; after the virtual guide reaches the target position, the AR special effect corresponding to the virtual guide can instruct the user to proceed in the direction guided by the guide route, for example, a humanoid guide showing a special effect of its arm pointing in the direction guided by the guide route.
Exemplarily, an embodiment of the present disclosure further provides a specific method for determining the target state information of the virtual guide based on the current state information of the AR device and the target relative pose relationship, including:
(1): when the current state information of the AR device indicates that the AR device is in an AR guidance starting state, determining that the target state information of the virtual guide includes: the virtual guide being located at a position at a target relative distance from the AR device, and the virtual guide facing the direction in which the AR device is located.
(2): when the current state information of the AR device indicates that the AR device is in the AR guidance process, determining that the target state information of the virtual guide includes: the virtual guide being located at a position at a target relative distance from the AR device, and the orientation of the virtual guide being the direction of the predetermined guide route.
(3): when the current state information of the AR device indicates that the AR device has reached the end point of the guide route, determining that the target state information of the virtual guide includes: the virtual guide being located at a position at a target relative distance from the AR device, and the virtual guide facing the direction in which the AR device is located.
Through the above process, virtual guides with different attitudes are provided to the user in different states of the AR device, so that richer guidance information can be provided to the user through the virtual guide in different states.
For S103 above, an embodiment of the present disclosure further provides a specific method for displaying the AR special effect of the virtual guide in the AR device based on the target state information, including:
displaying, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
In specific implementation, when displaying the virtual guide in the AR device based on the target state information and the current state information of the virtual guide, the change process from the state indicated by the current state information of the virtual guide to the state indicated by the target state information may be presented. For example, in the current state, the virtual guide faces the direction in which the AR device is located and is located at a first position point in the target scene; in the process of controlling the virtual guide to transition from the current state to the target state, the virtual guide is controlled to turn toward the direction of the second position point corresponding to the target state and move to the second position point along the path from the first position point to the second position point; while the virtual guide moves from the first position point to the second position point, it presents a walking special effect, a flying special effect, or the like; after the virtual guide reaches the second position point, the virtual guide is controlled to turn toward the direction in which the AR device is located, completing the AR special effect of transitioning from the current state to the target state.
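A minimal sketch of driving this transition frame by frame is given below; the speed and frame rate are assumed values, and the walking/flying animation is reduced to a comment:

```python
import numpy as np

def transition_frames(first_pos, second_pos, speed=1.5, fps=30):
    """Yield per-frame positions for the guide moving from the first position
    point to the second; the caller turns the guide toward the second point
    before iterating and back toward the AR device afterwards."""
    direction = second_pos - first_pos
    total = float(np.linalg.norm(direction))
    direction = direction / total
    step = speed / fps
    traveled = 0.0
    while traveled < total:
        traveled = min(traveled + step, total)
        # A walking or flying special effect plays while moving.
        yield first_pos + direction * traveled
```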
Exemplarily, referring to the schematic flowchart shown in FIG. 4, the guidance method provided by the embodiments of the present disclosure may include the following steps:
Step a: the virtual guide displayed in the AR device faces the user.
Referring to FIG. 5a, when a user in the target scene has not triggered the virtual guide to perform a guidance task, the virtual guide displayed in the graphical interaction interface 12 of the AR device may face the user, that is, face the direction of the AR device.
Step b: the AR device determines whether an instruction triggering the guidance task is received.
In the embodiments of the present disclosure, the AR device may detect the user's instruction triggering the guidance task. For example, the trigger instruction may be a click operation in which the user clicks the graphical interaction interface of the AR device, a touch operation instruction by the user on the graphical interaction interface, a voice instruction, or the like, which is not limited in the embodiments of the present disclosure.
If the instruction triggering the guidance task is received, step c is performed; if not, step a continues to be performed, keeping the virtual guide displayed in the AR device facing the user.
Step c: the AR device controls the virtual guide to face the direction of the predetermined guide route.
It can be understood that when the AR device is currently in the AR guidance process, referring to FIG. 5b, the orientation of the virtual guide is the direction of the predetermined guide route.
Step d: the AR device controls the virtual guide to move along the guide route.
It can be understood that the AR device may control the virtual guide to move along the guide route, with the virtual guide presenting a walking special effect, a flying special effect, or the like.
Step e: when the virtual guide moves to the end point of the guide route, the virtual guide is controlled to turn from facing the direction of the guide route to facing the user.
In another embodiment of the present disclosure, when displaying in the AR device the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide, this AR special effect may also be displayed only after a preset duration has elapsed since the AR device changed from a historical state to the current state.
In this way, the virtual guide moves later than the user, creating the effect that the virtual guide moves in response to the user's movement, which provides a better guiding effect. A minimal sketch of this delayed-follow behavior is given below.
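In this sketch the preset duration is assumed to be 0.8 s; the disclosure names the duration but does not fix its value:

```python
import time

PRESET_DELAY_S = 0.8   # assumed value of the preset duration

class DelayedFollower:
    """Start the guide's transition only after the AR device has stayed in
    its new state for the preset duration."""
    def __init__(self):
        self._changed_at = None

    def on_device_state_changed(self):
        self._changed_at = time.monotonic()

    def should_start_transition(self):
        return (self._changed_at is not None and
                time.monotonic() - self._changed_at >= PRESET_DELAY_S)
```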
Referring to FIG. 6, another embodiment of the present disclosure further provides another guidance method, including:
S301: acquiring obstacle position information of an obstacle relative to the AR device.
S302: when a target obstacle exists in a target image captured by the AR device, determining the target state information of the virtual guide based on the current state information of the AR device after avoiding the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
S303: displaying the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
In specific implementation, for example, a pre-trained obstacle detection model may be used to perform obstacle detection on the target image and determine the pose information of the target obstacle relative to the AR device; the obstacle position information of the target obstacle is then determined based on the pose information of the target obstacle relative to the AR device and the current pose information of the AR device in the target scene.
Alternatively, obstacles may be pre-annotated in the three-dimensional scene map; after the current pose information of the AR device in the target scene is determined, the target obstacles that may affect AR guidance are determined based on the position information of the pre-annotated obstacles, and the obstacle position information of the target obstacles is determined.
After the obstacle position information is determined, the target state information of the virtual guide may be determined, for example, in the following manner:
determining, based on the current state information and the target relative pose relationship, that the initial target position of the virtual guide falls within the position range corresponding to the target obstacle;
determining, according to the determined initial target position and the obstacle position information of the target obstacle, the target position in the target state information of the virtual guide as the position that is outside the position range corresponding to the target obstacle and closest to the initial target position.
Here, when determining the target state information, priority is given to having the virtual guide avoid the position where the obstacle is located: the virtual guide must not collide into the interior of the obstacle or pass beyond occluders in the target scene. On the premise that the obstacle is avoided, if a position satisfying the target relative pose relationship can be determined for the virtual guide, satisfying the target relative pose relationship is preferred; if no position currently satisfies the target relative pose relationship, the target position of the virtual guide may be determined, for example, as a position that would fall within the shooting field of view of the AR device once the AR device is rotated. A sketch of the nearest-point-outside-the-obstacle step follows.
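Modeling the obstacle's position range as an axis-aligned box is an assumption of this sketch, not something the disclosure specifies:

```python
import numpy as np

def avoid_obstacle(initial_target, box_min, box_max, margin=0.1):
    """If the initial target position falls inside the obstacle's position
    range, return the nearest position outside that range; otherwise return
    the initial target position unchanged."""
    initial_target = np.asarray(initial_target, dtype=np.float64)
    if not np.all((initial_target >= box_min) & (initial_target <= box_max)):
        return initial_target
    out = initial_target.copy()
    dist_low = initial_target - box_min     # distances to the lower faces
    dist_high = box_max - initial_target    # distances to the upper faces
    axis = int(np.argmin(np.minimum(dist_low, dist_high)))
    if dist_low[axis] <= dist_high[axis]:
        out[axis] = box_min[axis] - margin  # exit through the nearer face
    else:
        out[axis] = box_max[axis] + margin
    return out
```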
As shown in FIG. 7a, obstacle A is 41 and the AR device is 42; the region of the shooting field of view is M1, and the region satisfying the target relative pose relationship is M2; 43 denotes the current position of the virtual guide, and 44 denotes the target position of the virtual guide. In this example, the target state information determined for the virtual guide both avoids the position where the obstacle is located and satisfies the target relative pose relationship.
As shown in FIG. 7b, obstacle B is 45, the shooting field of view of the AR device 42 is the region between the dashed lines, 46 denotes the current position of the virtual guide, and 47 denotes the target position of the virtual guide. In this example, the target state information determined for the virtual guide avoids the position where the obstacle is located but cannot satisfy the target relative pose relationship; instead, the target position of the virtual guide is determined as a position that would fall within the shooting field of view of the AR device once the AR device is rotated.
Those skilled in the art can understand that in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide a guidance apparatus in an AR scene corresponding to the guidance method in an AR scene. Since the problem-solving principle of the apparatus in the embodiments of the present disclosure is similar to that of the above guidance method in an AR scene of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are not described again.
Referring to FIG. 8, which is a schematic diagram of a guidance apparatus in an AR scene provided by an embodiment of the present disclosure, the apparatus includes an obtaining part 51, a determining part 52, and a display part 53, where:
the obtaining part 51 is configured to acquire current state information of an AR device;
the determining part 52 is configured to determine target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
the display part 53 is configured to display an AR special effect of the virtual guide in the AR device based on the target state information.
In a possible implementation, the current state information of the AR device includes the current pose information of the AR device, and the target state information includes the target pose information of the virtual guide;
the determining part 52 is further configured to determine a current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and the current pose information of the virtual guide;
and, when the current relative pose relationship does not conform to the target relative pose relationship, determine updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
In a possible implementation, the current relative pose relationship not conforming to the target relative pose relationship includes at least one of the following:
the relative distance between the AR device and the virtual guide being less than a first distance threshold;
the relative distance between the AR device and the virtual guide being greater than a second distance threshold;
the angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide being greater than a set angle.
In a possible implementation, the determining part 52 is further configured to determine the target state information of the virtual guide based on a predetermined guide route, the current state information of the AR device, and the target relative pose relationship.
In a possible implementation, the determining part 52 is further configured to, when the current state information of the AR device indicates that the AR device is in an AR guidance starting state, determine that the target state information of the virtual guide includes: the virtual guide being located at a position at a target relative distance from the AR device, and the orientation of the virtual guide being opposite to the direction in which the AR device is located;
when the current state information of the AR device indicates that the AR device is in the AR guidance process, determine that the target state information of the virtual guide includes: the virtual guide being located at a position at a target relative distance from the AR device, and the orientation of the virtual guide being the direction of the predetermined guide route;
when the current state information of the AR device indicates that the AR device has reached the end point of the guide route, determine that the target state information of the virtual guide includes: the virtual guide being located at a position at a target relative distance from the AR device, and the virtual guide facing the direction in which the AR device is located.
In a possible implementation, the display part 53 is further configured to display, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
In a possible implementation, the display part 53 is further configured to display, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide after a preset duration has elapsed since the AR device changed from a historical state to the current state.
In a possible implementation, the obtaining part 51 is further configured to acquire obstacle position information of an obstacle relative to the AR device;
the determining part 52 is further configured to, when a target obstacle exists in a target image captured by the AR device, determine the target state information of the virtual guide based on the current state information of the AR device after avoiding the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
the display part 53 is further configured to display the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
In a possible implementation, the determining part 52 is further configured to determine, based on the current state information and the target relative pose relationship, that the initial target position of the virtual guide falls within the position range corresponding to the target obstacle;
and determine, according to the determined initial target position and the obstacle position information of the target obstacle, the target position in the target state information of the virtual guide as the position that is outside the position range corresponding to the target obstacle and closest to the initial target position.
For descriptions of the processing flows of the modules in the apparatus and the interaction flows between the modules, reference may be made to the relevant descriptions in the above method embodiments, which are not detailed here.
An embodiment of the present disclosure further provides a computer device. As shown in FIG. 9, which is a schematic structural diagram of the computer device provided by an embodiment of the present disclosure, the computer device includes a processor 11 and a memory 12; the memory 12 stores machine-readable instructions executable by the processor 11, and when the computer device runs, the machine-readable instructions are executed by the processor to implement the following steps:
acquiring current state information of an AR device;
determining target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device;
displaying an AR special effect of the virtual guide in the AR device based on the target state information.
For the specific execution process of the above instructions, reference may be made to the steps of the guidance method described in the embodiments of the present disclosure, which are not repeated here.
The embodiments of the present disclosure further provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when run by a processor, performs the steps of the guidance method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
A computer program product of the guidance method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to perform the steps of the guidance method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The embodiments of the present disclosure further provide a computer program, which implements any one of the methods of the foregoing embodiments when executed by a processor. The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems and apparatuses described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units is merely a division by logical function, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, intended to illustrate the technical solutions of the present disclosure rather than limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art can still, within the technical scope disclosed by the present disclosure, modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

  1. A guidance method in an augmented reality (AR) scene, comprising:
    acquiring current state information of an AR device;
    determining target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device; and
    displaying an AR special effect of the virtual guide in the AR device based on the target state information.
  2. The guidance method according to claim 1, wherein the current state information of the AR device comprises current pose information of the AR device, and the target state information comprises target pose information of the virtual guide;
    the determining the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device comprises:
    determining a current relative pose relationship between the virtual guide and the AR device based on the current state information of the AR device and the current pose information of the virtual guide; and
    when the current relative pose relationship does not conform to the target relative pose relationship, determining updated target pose information of the virtual guide based on the current state information of the AR device and the target relative pose relationship.
  3. The guidance method according to claim 2, wherein the current relative pose relationship not conforming to the target relative pose relationship comprises at least one of the following:
    the relative distance between the AR device and the virtual guide being less than a first distance threshold;
    the relative distance between the AR device and the virtual guide being greater than a second distance threshold;
    the angle between the current orientation of the AR device and the direction of the line from the AR device to the virtual guide being greater than a set angle.
  4. The guidance method according to any one of claims 1 to 3, wherein the determining the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device comprises:
    determining the target state information of the virtual guide based on a predetermined guide route, the current state information of the AR device, and the target relative pose relationship.
  5. The guidance method according to claim 4, wherein the determining the target state information of the virtual guide based on the predetermined guide route, the current state information of the AR device, and the target relative pose relationship comprises:
    when the current state information of the AR device indicates that the AR device is in an AR guidance starting state, determining that the target state information of the virtual guide comprises: the virtual guide being located at a position at a target relative distance from the AR device, and the orientation of the virtual guide being opposite to the direction in which the AR device is located;
    when the current state information of the AR device indicates that the AR device is in the AR guidance process, determining that the target state information of the virtual guide comprises: the virtual guide being located at a position at a target relative distance from the AR device, and the orientation of the virtual guide being the direction of the predetermined guide route; and
    when the current state information of the AR device indicates that the AR device has reached the end point of the guide route, determining that the target state information of the virtual guide comprises: the virtual guide being located at a position at a target relative distance from the AR device, and the virtual guide facing the direction in which the AR device is located.
  6. The guidance method according to any one of claims 1 to 5, wherein the displaying the AR special effect of the virtual guide in the AR device based on the target state information comprises:
    displaying, in the AR device, an AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide.
  7. The guidance method according to claim 6, wherein the displaying, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide comprises:
    displaying, in the AR device, the AR special effect of the virtual guide transitioning from the current state to the target state based on the target state information and the current state information of the virtual guide after a preset duration has elapsed since the AR device changed from a historical state to the current state.
  8. The guidance method according to any one of claims 1 to 7, further comprising:
    acquiring obstacle position information of an obstacle relative to the AR device;
    wherein the determining the target state information of the virtual guide based on the current state information and the target relative pose relationship between the virtual guide displayed in the AR device and the AR device comprises:
    when a target obstacle exists in a target image captured by the AR device, determining the target state information of the virtual guide based on the current state information of the AR device after avoiding the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship;
    and the displaying the AR special effect of the virtual guide in the AR device based on the target state information comprises:
    displaying the AR special effect of the virtual guide in the AR device based on the target state information and the obstacle position information of the target obstacle.
  9. The guidance method according to claim 8, wherein the determining the target state information of the virtual guide based on the current state information of the AR device after avoiding the target obstacle, the obstacle position information of the target obstacle, and the target relative pose relationship comprises:
    determining, based on the current state information and the target relative pose relationship, that an initial target position of the virtual guide falls within a position range corresponding to the target obstacle; and
    determining, according to the determined initial target position and the obstacle position information of the target obstacle, the target position in the target state information of the virtual guide as a position that is outside the position range corresponding to the target obstacle and closest to the initial target position.
  10. A guidance apparatus in an AR scene, comprising:
    an obtaining part configured to acquire current state information of an AR device;
    a determining part configured to determine target state information of a virtual guide based on the current state information and a target relative pose relationship between the virtual guide displayed in the AR device and the AR device; and
    a display part configured to display an AR special effect of the virtual guide in the AR device based on the target state information.
  11. A computer device, comprising a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the processor performs the steps of the guidance method according to any one of claims 1 to 9.
  12. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is run by a computer device, the computer device performs the steps of the guidance method according to any one of claims 1 to 9.
  13. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device performs steps for implementing the guidance method according to any one of claims 1 to 9.
PCT/CN2021/095853 2020-09-23 2021-05-25 Guidance method and apparatus in AR scene, computer device, and storage medium WO2022062442A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011012419.0A CN112212865B (zh) 2020-09-23 2020-09-23 Guidance method and apparatus in AR scene, computer device, and storage medium
CN202011012419.0 2020-09-23

Publications (1)

Publication Number Publication Date
WO2022062442A1 true WO2022062442A1 (zh) 2022-03-31

Family

ID=74051071

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095853 WO2022062442A1 (zh) 2020-09-23 2021-05-25 Guidance method and apparatus in AR scene, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN112212865B (zh)
WO (1) WO2022062442A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112212865B (zh) 2020-09-23 2023-07-25 北京市商汤科技开发有限公司 Guidance method and apparatus in AR scene, computer device, and storage medium
CN112987934B (zh) 2021-04-20 2021-08-03 杭州宇泛智能科技有限公司 Portrait recognition interaction method and apparatus, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013190255A (ja) 2012-03-13 2013-09-26 Alpine Electronics Inc Augmented reality system
CN104596523A (zh) 2014-06-05 2015-05-06 腾讯科技(深圳)有限公司 Street view destination guidance method and device
CN105005970A (zh) 2015-06-26 2015-10-28 广东欧珀移动通信有限公司 Augmented reality implementation method and apparatus
CN111595346A (zh) 2020-06-02 2020-08-28 浙江商汤科技开发有限公司 Navigation reminder method and apparatus, electronic device, and storage medium
CN111650953A (zh) 2020-06-09 2020-09-11 浙江商汤科技开发有限公司 Aircraft obstacle avoidance processing method and apparatus, electronic device, and storage medium
CN111693063A (zh) 2020-06-12 2020-09-22 浙江商汤科技开发有限公司 Navigation interactive display method and apparatus, electronic device, and storage medium
CN112212865A (zh) 2020-09-23 2021-01-12 北京市商汤科技开发有限公司 Guidance method and apparatus in AR scene, computer device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018207440A1 (de) 2018-05-14 2019-11-14 Volkswagen Aktiengesellschaft Method for calculating an augmented-reality overlay for displaying a navigation route on an AR display unit, device for carrying out the method, motor vehicle, and computer program
CN111311758A (zh) 2020-02-24 2020-06-19 Oppo广东移动通信有限公司 Augmented reality processing method and apparatus, storage medium, and electronic device
CN111698646B (zh) 2020-06-08 2022-10-18 浙江商汤科技开发有限公司 Positioning method and apparatus
CN111653175B (zh) 2020-06-09 2022-08-16 浙江商汤科技开发有限公司 Virtual sandbox display method and apparatus
CN111595349A (zh) 2020-06-28 2020-08-28 浙江商汤科技开发有限公司 Navigation method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN112212865B (zh) 2023-07-25
CN112212865A (zh) 2021-01-12

Similar Documents

Publication Publication Date Title
CN109643127B (zh) 构建地图、定位、导航、控制方法及系统、移动机器人
US11532102B1 (en) Scene interactions in a previsualization environment
CN112146649B (zh) Ar场景下的导航方法、装置、计算机设备及存储介质
KR102444658B1 (ko) 훈련된 경로를 자율주행하도록 로봇을 초기화하기 위한 시스템 및 방법
WO2022183775A1 (zh) 一种混合增强教学场景中多移动机制融合方法
US11494995B2 (en) Systems and methods for virtual and augmented reality
US11024079B1 (en) Three-dimensional room model generation using panorama paths and photogrammetry
US10937247B1 (en) Three-dimensional room model generation using ring paths and photogrammetry
TWI442311B (zh) 在遊戲中使用三維環境模型
US10185463B2 (en) Method and apparatus for providing model-centered rotation in a three-dimensional user interface
WO2022062442A1 (zh) Ar场景下的引导方法、装置、计算机设备及存储介质
US9578076B2 (en) Visual communication using a robotic device
CN104035760A (zh) 跨移动平台实现沉浸式虚拟现实的系统
KR20180118219A (ko) 이동형 원격현전 로봇과의 인터페이싱
CN104915979A (zh) 跨移动平台实现沉浸式虚拟现实的系统
JPH0785312A (ja) 3次元動画作成装置
JP7345042B2 (ja) 移動ロボットのナビゲーション
US10706624B1 (en) Three-dimensional room model generation using panorama paths with augmented reality guidance
Lan et al. XPose: Reinventing User Interaction with Flying Cameras.
KR101819589B1 (ko) 이동형 프로젝션 기술을 이용한 증강현실 시스템 및 그 운영 방법
Nescher et al. Simultaneous mapping and redirected walking for ad hoc free walking in virtual environments
US10645275B1 (en) Three-dimensional room measurement process with augmented reality guidance
Angelopoulos et al. Drone brush: Mixed reality drone path planning
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
US10643344B1 (en) Three-dimensional room measurement process

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21870829

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21870829

Country of ref document: EP

Kind code of ref document: A1