WO2021031755A1 - Interaction method and system based on augmented reality device, electronic device, and computer-readable medium - Google Patents

Interaction method and system based on augmented reality device, electronic device, and computer-readable medium

Info

Publication number
WO2021031755A1
Authority
WO
WIPO (PCT)
Prior art keywords
augmented reality
reality device
scene
interaction
range
Prior art date
Application number
PCT/CN2020/102478
Other languages
English (en)
French (fr)
Inventor
刘幕俊
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to EP20853764.7A (published as EP3978089A4)
Publication of WO2021031755A1
Priority to US17/563,144 (published as US20220122331A1)

Links

Images

Classifications

    • A63F13/213: Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F13/216: Input arrangements for video game devices characterised by their sensors, purposes or types, using geographical information, e.g. location of the game device or player using GPS
    • A63F13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F13/5255: Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • A63F13/577: Simulating properties, behaviour or motion of objects in the game world, using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A63F13/655: Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, e.g. by importing photos of the player
    • A63F13/92: Video game devices specially adapted to be hand-held while playing
    • G06F3/011: Input arrangements or combined input and output arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T19/003: Manipulating 3D models or images for computer graphics; navigation within 3D models or images
    • G06T19/006: Manipulating 3D models or images for computer graphics; mixed reality
    • G06T7/292: Image analysis; analysis of motion; multi-camera tracking
    • G06T7/536: Image analysis; depth or shape recovery from perspective effects, e.g. by using vanishing points
    • G06T7/579: Image analysis; depth or shape recovery from multiple images, from motion
    • G06T7/593: Image analysis; depth or shape recovery from multiple images, from stereo images
    • A63F2300/207: Game information storage, e.g. cartridges, CD ROMs, DVDs, smart cards, for accessing game resources from local storage, e.g. streaming content from DVD
    • G06T2210/08: Indexing scheme for image generation or computer graphics; bandwidth reduction

Definitions

  • the embodiments of the present application relate to the field of augmented reality, and more specifically, to an interactive method based on an augmented reality device, an interactive system based on an augmented reality device, an electronic device, and a computer-readable medium.
  • the virtual game scene can be superimposed on the real scene screen, so that the virtual game scene and the real scene interact.
  • during play, however, inaccurate user positioning means the augmented reality game scene may not be loaded accurately and the user may be unable to interact precisely with virtual objects, leading to a poor user experience.
  • the embodiments of the present application provide an interaction method based on an augmented reality device, an interaction system based on an augmented reality device, an electronic device, and a computer-readable medium, which help provide users with accurate positioning in augmented reality game scenes.
  • an interaction method based on an augmented reality device includes: acquiring current location information of the augmented reality device, and confirming whether a loadable scene is included within a preset range of the current location; when a loadable scene is included within the preset range of the current location, obtaining the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene; and when the augmented reality device enters the loading range of the target scene, loading the target scene model for displaying the target scene in the augmented reality device.
  • an interaction system based on an augmented reality device includes: a loadable scene judgment module for acquiring current location information of the augmented reality device and confirming whether a loadable scene is included within a preset range of the current location; a target scene judgment module for obtaining, when a loadable scene is included within the preset range of the current location, the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene;
  • and a target scene loading module configured to load, when the augmented reality device enters the loading range of the target scene, the target scene model for displaying the target scene in the augmented reality device.
  • an electronic device includes: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to execute the method of the foregoing first aspect.
  • a computer-readable medium stores the computer software instructions used to execute the method of the above-mentioned first aspect, containing the programs designed for the foregoing aspects.
  • Fig. 1 shows a schematic diagram of an augmented reality device-based interaction method according to an embodiment of the present application.
  • Fig. 2 shows a schematic diagram of the positional relationship between a virtual scene and an augmented reality device in an embodiment of the present application.
  • FIG. 3 shows a schematic diagram of the relationship between the target scene coordinate system and the actual environment coordinate system in an embodiment of the present application.
  • FIG. 4 shows a schematic diagram of position interaction between the augmented reality device and the virtual interactive object in the target scene coordinate system according to an embodiment of the present application.
  • FIG. 5 shows a schematic block diagram of an interactive system based on an augmented reality device according to an embodiment of the present application.
  • Fig. 6 shows a schematic block diagram of a computer system of an electronic device according to an embodiment of the present application.
  • in existing role-playing games, users generally watch the game screen on a display; for virtual reality games, users watch virtual pictures immersively through a helmet.
  • the above-mentioned game forms, however, can only be realized at fixed locations and cannot be combined with real scenes or objects.
  • at present, there are more and more augmented reality games.
  • the characteristic of augmented reality games is to superimpose game scenes (i.e., virtual scenes) on real scene pictures, so that the game scenes interact with the real scenes.
  • Fig. 1 shows a schematic diagram of an augmented reality device-based interaction method according to an embodiment of the present application.
  • the interactive method includes some or all of the following:
  • S11 Acquire current location information of the augmented reality device, and confirm whether a loadable scene is included in a preset range of the current location;
  • S12 When a loadable scene is included in the preset range of the current location, obtain the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene;
  • S13 When the augmented reality device enters the loading range of the target scene, load the target scene model for displaying the target scene in the augmented reality device.
  • the aforementioned augmented reality device may be a smart terminal device such as AR glasses or an AR helmet.
  • taking AR glasses as an example, a binocular or monocular see-through optical engine can be arranged on the glasses frame; through this optical engine, dynamic data such as videos, charts, instruction information, and control information can be displayed to the user without affecting observation of the surrounding environment.
  • the AR glasses can also be equipped with camera components, which can include high-definition cameras, depth cameras, and so on.
  • the AR glasses can also be equipped with sensors, such as gyroscopes, acceleration sensors, magnetometers, and light sensors, or with a nine-axis sensor, for example a combination of a three-axis gyroscope, a three-axis accelerometer, and a three-axis geomagnetometer.
  • AR glasses can also be equipped with GPS components, Bluetooth components, power components and input devices.
  • AR glasses can also be connected to a controller, on which the aforementioned GPS component, Bluetooth component, WiFi component, power supply component, input device, processor, memory and other modules or units can be assembled.
  • a data interface can also be provided on the AR glasses body or the controller to facilitate data transmission and connection with external devices.
  • the present disclosure does not specifically limit the specific structure and form of AR glasses.
  • the augmented reality device can also be a smart terminal device such as a mobile phone or a tablet computer equipped with a rear camera, a sensor component, and an augmented reality application.
  • for example, after an augmented reality application is installed on a mobile phone, the phone screen can serve as the display, showing the real environment together with virtual controls, and so on.
  • in the following embodiments, AR glasses are taken as an example of the augmented reality device.
  • the above-mentioned loadable scene may be a virtual scene of a game containing different contents.
  • Each virtual scene may include boundaries and display ranges of different shapes, and a corresponding coordinate range may be pre-configured according to the display range of each virtual scene.
  • in an augmented reality game, the GPS component mounted on the AR glasses can be used to obtain the current location information, and the map can be used to judge whether a loadable virtual scene exists near the current location.
  • for example, as shown in Fig. 2, the current game map around the user 201 includes loadable virtual scene 211, virtual scene 212, virtual scene 213, virtual scene 214, and virtual scene 215.
  • alternatively, a circle may be drawn with the current position of the user 201 as the center and a preset distance as the radius, and it is determined whether a loadable scene exists within this range.
  • for example, as shown in Fig. 2, virtual scene 212 and virtual scene 214 lie within the preset range of the current position of user 201, so they are loadable scenes.
  • the loading range of each virtual scene can be preset.
  • the loading range corresponding to each virtual scene can be configured according to its actual position in the real scene and the surrounding environment. For example, if a virtual scene is displayed on a relatively open square with no obstacles blocking the line of sight, the scene should be loaded as soon as a user with normal vision could see it.
  • in that case the virtual scene can be configured with a larger loading range; if the virtual scene is located indoors or in a relatively small space, such as under a tree or at the corner of a wall, it can be configured with a smaller loading range.
  • when the current field of view of the augmented reality device faces the scene, the corresponding target virtual scene is loaded, making the user's viewing experience closer to reality and avoiding the situation where the target scene is displayed only after the user has already entered its effective range.
  • for a virtual scene, the loading range and the display range may be different coordinate ranges.
  • for example, as shown in Fig. 2, the loading ranges of virtual scene 211, virtual scene 212, virtual scene 213, and virtual scene 214 are all larger than their actual display ranges.
  • the loading range of the virtual scene can also be the same as the display range, for example, as shown in the virtual scene 215 in FIG. 2.
  • the distance between the user 201 and each loadable scene can be calculated.
  • for example, the current distance between the user's position and the center coordinates of each loadable scene can be calculated from the coordinate data. If this distance is less than or equal to the radius from the scene's center coordinates to the edge of its loading range, the device is considered to have entered the loading range of that scene; otherwise, it has not.
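  • as a concrete illustration, this check reduces to a point-in-circle test. The following is a minimal Python sketch under that reading; the names `Scene` and `in_loading_range` and the coordinate layout are illustrative assumptions, not identifiers from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class Scene:
    name: str
    center: tuple        # (x, y) scene center coordinates in the game map (meters)
    load_radius: float   # radius of the scene's loading range (meters)

def in_loading_range(position, scene):
    """Return True if the device position lies within the scene's loading range."""
    dx = position[0] - scene.center[0]
    dy = position[1] - scene.center[1]
    return math.hypot(dx, dy) <= scene.load_radius

# Example: the user stands 40 m from a scene whose loading radius is 50 m.
scene_212 = Scene("virtual scene 212", center=(100.0, 0.0), load_radius=50.0)
print(in_loading_range((60.0, 0.0), scene_212))  # True -> start loading the model
```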
  • when it is judged that the augmented reality device has entered the loading range of the target scene, the target scene model can be loaded and the target scene displayed on the interface of the augmented reality device.
  • the model of each target scene can be stored locally or on a network server; the target scene is then shown, for example, in the AR glasses.
  • when the target scene model is loaded for display, the task list corresponding to the target scene can also be read for display in the augmented reality device.
  • the task list may include data such as introduction information and task information of the target scene.
  • the foregoing method may further include:
  • S131 Generate a trigger instruction to activate the camera component and the sensor component of the augmented reality device;
  • S132 Use the camera component to obtain image data corresponding to the current field of view of the augmented reality device, and use the sensor component to obtain motion data of the augmented reality device;
  • S133 Acquire position information of the augmented reality device in the target scene coordinate system in combination with the image data and the motion data.
  • the coordinate system of the target scene in the aforementioned augmented reality environment may be established based on the real environment. As shown in FIG. 3, the coordinate system of the target scene may adopt the same scale as the real environment.
  • a trigger instruction can be generated, and the augmented reality device can activate the camera component and the sensor component to start collecting data according to the trigger instruction.
  • the position information of the augmented reality device in the target scene coordinate system is then acquired from the collected data.
  • combining the image data and the motion data to obtain the position information of the augmented reality device in the target scene coordinate system, where the image data includes a depth image, may include the following steps:
  • S1331 Recognizing the depth image to obtain depth data of a target object, so as to obtain the distance between the augmented reality device and the target object according to the depth data;
  • S1332 Read sensor data of the augmented reality device, and obtain an action recognition result of the augmented reality device according to the sensor data;
  • S1333 Determine the scene position information of the augmented reality device in the target scene coordinate system in combination with the action recognition result and the distance between the augmented reality device and the target object.
  • one or more target objects can be pre-configured in each augmented reality scene to be displayed, including the target scene.
  • a target object can be an existing object in the real scene, such as a marked telephone pole, street sign, or trash can; of course, it can also be an object with marker information configured specifically for each virtual scene.
  • the coordinates of each target object in the target scene coordinate system can be determined in advance.
  • the camera component that can be assembled on AR glasses includes at least one depth camera, for example, a ToF module.
  • the depth camera can be used to shoot the depth image corresponding to the real scene in the current field of view of the augmented reality device.
  • the target object in the depth image is recognized, the depth information is obtained, and the distance is used as the distance recognition result.
  • the distance between the AR glasses and the at least one target object is obtained according to the depth data.
  • specifically, when the depth image captured for the current field of view is recognized and two different target objects A and B are identified, and A and B lie in the same plane, circles can be drawn in the target scene coordinate system with A and B as centers and the corresponding recognized distances as radii; the intersection of the two circles is the current position of the AR glasses in the target scene coordinate system.
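  • for illustration, this two-circle construction can be written out directly. The sketch below is a hedged rendering of the geometry, not code from the patent; it returns both intersection points, since two circles generally cross twice, and a real system would disambiguate using a third target object or the gaze angles discussed below:

```python
import math

def locate_from_two_landmarks(a, b, ra, rb):
    """Intersect circles centered at landmarks a and b (known scene coordinates)
    with radii ra and rb (distances measured from the depth image).
    Returns 0, 1, or 2 candidate device positions."""
    ax, ay = a
    bx, by = b
    d = math.hypot(bx - ax, by - ay)
    if d == 0 or d > ra + rb or d < abs(ra - rb):
        return []  # circles do not intersect: the measurements are inconsistent
    # Distance from a to the foot of the chord joining the two intersections.
    t = (ra ** 2 - rb ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(ra ** 2 - t ** 2, 0.0))
    mx = ax + t * (bx - ax) / d
    my = ay + t * (by - ay) / d
    p1 = (mx + h * (by - ay) / d, my - h * (bx - ax) / d)
    p2 = (mx - h * (by - ay) / d, my + h * (bx - ax) / d)
    return [p1] if h == 0 else [p1, p2]

# Landmarks A and B at known scene coordinates; distances from the depth camera:
print(locate_from_two_landmarks((0, 0), (10, 0), ra=6.0, rb=8.0))
# -> [(3.6, -4.8), (3.6, 4.8)]
```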
  • when calculating the user's scene coordinates, the sensor data of the augmented reality device can also be read, and an action recognition result of the augmented reality device obtained from it; combining this action recognition result with the distance data between the augmented reality device and the target object in the target scene coordinate system determines more accurate coordinate information of the augmented reality device in the target scene coordinate system.
  • specifically, the horizontal and vertical angles of the user's line of sight can be calculated from the data collected by the nine-axis sensor, from which the horizontal and vertical angles between the AR glasses and the matched target object are obtained. In the target scene coordinate system, combining the distance between the AR glasses and the target object with this horizontal and vertical angle information therefore allows the user's precise current coordinates in the target scene coordinate system to be calculated more accurately. For example, when a user stands on the ground and looks at an object suspended in mid-air, the line of sight forms an angle with both the horizontal and vertical directions.
  • the nine-axis sensor can recognize the user's head-up movement and the specific angle, so that the user's position can be more accurately located based on the angle data combined with the coordinates of the target and the recognized distance.
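  • to illustrate how this angle data can sharpen a fix from a single target object, here is a hedged sketch: given the target object's known scene coordinates, the measured line-of-sight distance, and the yaw (horizontal) and pitch (vertical) angles derived from the nine-axis sensor, the device position follows by stepping back from the target object along the line of sight. The function name and frame conventions are assumptions for illustration, not the patent's formulation:

```python
import math

def locate_from_landmark(landmark, distance, yaw_deg, pitch_deg):
    """Estimate the device position in the target scene coordinate system.

    landmark:  (x, y, z) scene coordinates of the recognized target object
    distance:  line-of-sight distance from the depth camera (meters)
    yaw_deg:   horizontal angle of the line of sight (0 degrees = +x axis)
    pitch_deg: elevation of the line of sight above the horizontal plane
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    # Unit vector pointing from the device toward the target object.
    vx = math.cos(pitch) * math.cos(yaw)
    vy = math.cos(pitch) * math.sin(yaw)
    vz = math.sin(pitch)
    lx, ly, lz = landmark
    return (lx - distance * vx, ly - distance * vy, lz - distance * vz)

# A user looks up 30 degrees at a marker 5 m away suspended at (4.33, 0.0, 2.5):
print(locate_from_landmark((4.33, 0.0, 2.5), 5.0, yaw_deg=0.0, pitch_deg=30.0))
# -> approximately (0.0, 0.0, 0.0), the user's standing position
```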
  • as an alternative embodiment, there may be multiple target objects; the interaction method may further include: performing the calculation with each target object to obtain a plurality of corresponding pieces of scene position information, and performing position verification based on these multiple pieces of scene position information to obtain accurate scene position information.
  • two or more target objects identified in the depth image may be used to calculate the user's precise coordinates using the above-mentioned method, and then the multiple precise coordinates may be checked against each other to obtain the final precise coordinates.
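  • the patent does not fix a particular verification rule, so the sketch below shows one plausible mutual check, stated as an assumption: estimates that agree within a tolerance are kept and averaged, and outliers are discarded:

```python
import math

def verify_positions(estimates, tolerance=0.5):
    """Cross-check position estimates computed from several target objects.

    Keep each estimate that lies within `tolerance` meters of at least one
    other estimate, then average the survivors. Returns None if none agree.
    """
    survivors = [p for p in estimates
                 if any(math.dist(p, q) <= tolerance
                        for q in estimates if q is not p)]
    if not survivors:
        return None
    n = len(survivors)
    return tuple(sum(c) / n for c in zip(*survivors))

# Three landmark-based estimates agree to within 0.5 m; the outlier is dropped.
print(verify_positions([(1.0, 2.0), (1.1, 2.1), (0.9, 1.9), (7.0, 7.0)]))
# -> (1.0, 2.0)
```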
  • optionally, it is determined in the target scene coordinate system whether the augmented reality device enters the effective interaction range of a virtual interactive object, so that when the augmented reality device enters the effective interaction range of the virtual object, interaction with that virtual interactive object is triggered.
  • the target scene may include mobile and fixed virtual interactive objects.
  • for a mobile virtual interactive object, the above-mentioned interaction method may further include:
  • S211 Acquire current scene position information of the mobile virtual object in the target scene coordinate system to determine the current effective interaction range of the mobile virtual object;
  • S212 If the current user interaction range of the augmented reality device overlaps the effective interaction range of the mobile virtual interactive object, determine that the augmented reality device enters the effective interaction range of the mobile virtual interactive object.
  • as shown in Fig. 4, the target scene may include virtual objects such as NPCs (non-player characters) and stores.
  • Each virtual object can be pre-configured with a certain effective interaction range.
  • the virtual object 411 has a larger effective interaction range
  • the virtual object 412 has a smaller effective interaction range.
  • the effective interaction range of each virtual object can be determined according to specific needs or according to the characteristics of the character.
  • For a mobile virtual interactive object, its current coordinates can be determined first, and the coordinates corresponding to its effective interaction range at the current moment are then calculated from its preset interaction range.
  • for the augmented reality device 401, an interaction range may likewise be configured in advance.
  • the current user interaction range of the augmented reality device is calculated and obtained according to the current scene position information of the augmented reality device in the target scene coordinate system and a preset interaction range.
  • the current user interaction range can also be calculated according to the current coordinates in the target scene coordinate system and the preset interaction range.
  • when the current user interaction range of the augmented reality device overlaps the effective interaction range of the mobile virtual interactive object, it is determined that the augmented reality device has entered the effective interaction range of that object, and interaction with it can begin, for example dialogue or receiving task data.
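  • modeling both ranges as circles around the device and the object, the overlap test reduces to comparing the center distance with the sum of the radii. A minimal sketch under that assumption (the patent does not prescribe circular ranges):

```python
import math

def ranges_overlap(device_pos, device_radius, object_pos, object_radius):
    """True if the user's interaction circle overlaps the object's circle."""
    return math.dist(device_pos, object_pos) <= device_radius + object_radius

# Device 401 with a 2 m user interaction range; NPC 411 with a 6 m effective range.
if ranges_overlap((0.0, 0.0), 2.0, (7.0, 0.0), 6.0):
    print("trigger interaction: start dialogue / receive task data")
```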
  • for a fixed virtual interactive object, the above-mentioned interaction method may include: acquiring the current scene position information of the augmented reality device in the target scene coordinate system, and, when the current position lies within the effective interaction range corresponding to the fixed virtual interactive object, determining that the augmented reality device enters the effective interaction range of the fixed virtual interactive object.
  • for a fixed virtual interactive object, the effective interaction range is a fixed coordinate range.
  • when the current coordinates of the augmented reality device fall within the fixed effective interaction range of a fixed virtual interactive object, interaction with that object is triggered.
  • alternatively, when the current user interaction range of the augmented reality device overlaps the effective interaction range of the fixed virtual interactive object, the interaction with the fixed virtual interactive object is triggered.
  • therefore, the interaction method based on an augmented reality device of the embodiments of the present application pre-judges which augmented reality scenes around the user can be loaded, and pre-loads one or more scenes to be loaded once they come within a specific range, so that the augmented reality game area scene can be triggered and loaded in time, improving the user experience.
  • in addition, after the user enters the target scene, a more accurate position of the user in the target scene coordinate system can be located by recognizing images of the user's current field of view combined with the action recognition result. This enables the user to interact more accurately with virtual objects in augmented reality scenes, realizing accurate positioning of the augmented reality scene and accurate positioning within the augmented reality scene coordinate system, effectively improving the user experience.
  • it should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
  • Fig. 5 shows a schematic block diagram of an augmented reality device-based interactive system 50 according to an embodiment of the present application.
  • the interactive system 50 includes:
  • the loadable scene judging module 501 is configured to obtain the current position information of the augmented reality device, and confirm whether the loadable scene is included in the preset range of the current position.
  • the target scene judgment module 502 is configured to obtain the distance between the current position and the loadable scene when the loadable scene is included in the preset range of the current position to determine whether it enters the loading range of a target scene.
  • the target scene loading module 503 is configured to load the target scene model for displaying the target scene in the augmented reality device when the augmented reality device enters the loading range of the target scene.
  • therefore, the interactive system of the embodiments of the present application enables users to interact more accurately with virtual objects in augmented reality scenes, realizing accurate positioning of the augmented reality scene and accurate positioning within the augmented reality scene coordinate system, effectively enhancing the user experience.
  • the interactive system 50 further includes:
  • the component activation module is used to generate a trigger instruction for activating the camera component and the sensor component of the augmented reality device.
  • the data collection module is configured to use the camera component to obtain image data corresponding to the current field of view of the augmented reality device, and use the sensor component to obtain motion data of the augmented reality device.
  • the position information calculation module is used to obtain the position information of the augmented reality device in the target scene coordinate system in combination with the image data and the motion data.
  • the location information calculation module includes:
  • the image processing unit is configured to identify the depth image to obtain depth data of the target object, so as to obtain the distance between the augmented reality device and the target object according to the depth data.
  • the sensor data processing unit is configured to read sensor data of the augmented reality device, and obtain an action recognition result of the augmented reality device according to the sensor data.
  • the result calculation unit is configured to determine the scene position information of the augmented reality device in the target scene coordinate system in combination with the action recognition result and the distance between the augmented reality device and the target object.
  • the interactive system 50 further includes:
  • the virtual interactive object recognition module is used to determine, in the target scene coordinate system, whether the augmented reality device enters the effective interaction range of a virtual interactive object, so that when the augmented reality device enters the effective interaction range of a virtual object, interaction with that virtual interactive object is triggered.
  • the virtual interactive object is a mobile virtual interactive object;
  • the virtual interactive object recognition module includes:
  • the mobile object interaction range calculation unit is configured to obtain, when the virtual interactive object is a mobile virtual interactive object, the current scene position information of the mobile virtual object in the target scene coordinate system, so as to determine the current effective interaction range of the mobile virtual object;
  • the first interaction judgment unit is configured to determine that the augmented reality device enters the effective interaction range of the mobile virtual interactive object if the current user interaction range of the augmented reality device overlaps with the effective interaction range of the mobile virtual interactive object .
  • the virtual interactive object is a fixed virtual interactive object; the virtual interactive object recognition module further includes:
  • the second interaction judgment unit is configured to obtain, when the virtual interactive object is a fixed virtual interactive object, the current scene position information of the augmented reality device in the target scene coordinate system, and, when the current position lies within the effective interaction range corresponding to the fixed virtual interactive object, to determine that the augmented reality device enters the effective interaction range of the fixed virtual interactive object.
  • the current user interaction range of the augmented reality device is calculated and obtained according to the current scene position information of the augmented reality device in the target scene coordinate system and a preset interaction range.
  • optionally, there are multiple target objects; the interactive system 50 further includes:
  • the position information verification module, configured to perform the calculation with each target object to obtain a plurality of corresponding pieces of scene position information, and to perform position verification based on these to obtain accurate scene position information.
  • the interactive system further includes:
  • the task list acquisition module is used to read, when the target scene model is loaded for displaying the target scene in the augmented reality device, the task list corresponding to the target scene for display in the augmented reality device.
  • FIG. 6 shows a computer system 600 of an electronic device implementing an embodiment of the present application; the electronic device may be an augmented reality device such as AR glasses or an AR helmet.
  • the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage part 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for system operation.
  • the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (Input/Output, I/O) interface 605 is also connected to the bus 604.
  • the following components are connected to the I/O interface 605: an input part 606 including a keyboard, a mouse, etc.; an output part 607 including a cathode ray tube (CRT) or liquid crystal display (LCD) and speakers; a storage part 608 including a hard disk; and a communication part 609 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication part 609 performs communication processing via a network such as the Internet.
  • the driver 610 is also connected to the I/O interface 605 as needed.
  • a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 610 as needed, so that the computer program read from it is installed into the storage part 608 as needed.
  • an embodiment of the present invention includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 609, and/or installed from the removable medium 611.
  • when the computer program is executed by the central processing unit (CPU) 601, the various functions defined in the system of the present application are executed.
  • the computer system 600 of the embodiment of the present application can realize accurate positioning of the target scene and display it in time; and can effectively deepen the user's sense of immersion and improve the user experience.
  • the computer-readable medium shown in the embodiment of the present application has a computer program stored thereon, and the computer program is executed by a processor to realize the interactive method based on the augmented reality device of the present invention.
  • the computer-readable medium shown in the embodiment of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of units is only a logical functional division; in actual implementation there may be other division methods, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if this function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of this application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Environmental & Geological Engineering (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An interaction method and system based on an augmented reality device, an electronic device, and a computer-readable medium. The method includes: acquiring current location information of the augmented reality device, and confirming whether a loadable scene is included within a preset range of the current location (S11); when a loadable scene is included within the preset range of the current location, obtaining the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene (S12); and when the augmented reality device enters the loading range of the target scene, loading the target scene model to display the target scene in the augmented reality device (S13). In this way, the target scene can be accurately located and displayed in time, effectively improving the user experience.

Description

Interaction Method and System Based on Augmented Reality Device, Electronic Device, and Computer-Readable Medium
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201910765900.8, titled "Interaction method and system based on augmented reality device" and filed with the Chinese Patent Office on August 19, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of augmented reality, and more specifically to an interaction method based on an augmented reality device, an interaction system based on an augmented reality device, an electronic device, and a computer-readable medium.
Background
In existing role-playing augmented reality games, a virtual game scene can be superimposed on the picture of the real scene so that the virtual game scene and the real scene interact. During play, however, inaccurate user positioning means the augmented reality game scene may not be loaded accurately and the user may be unable to interact precisely with virtual objects, leading to a poor user experience.
Summary
In view of this, the embodiments of the present application provide an interaction method based on an augmented reality device, an interaction system based on an augmented reality device, an electronic device, and a computer-readable medium, which help provide users with accurate positioning in augmented reality game scenes.
In a first aspect, an interaction method based on an augmented reality device is provided. The method includes: acquiring current location information of the augmented reality device, and confirming whether a loadable scene is included within a preset range of the current location; when a loadable scene is included within the preset range of the current location, obtaining the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene; and when the augmented reality device enters the loading range of the target scene, loading the target scene model to display the target scene in the augmented reality device.
In a second aspect, an interaction system based on an augmented reality device is provided. The system includes: a loadable scene judgment module for acquiring current location information of the augmented reality device and confirming whether a loadable scene is included within a preset range of the current location; a target scene judgment module for obtaining, when a loadable scene is included within the preset range of the current location, the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene; and a target scene loading module for loading, when the augmented reality device enters the loading range of the target scene, the target scene model to display the target scene in the augmented reality device.
In a third aspect, an electronic device is provided, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to execute the method of the first aspect.
In a fourth aspect, a computer-readable medium is provided for storing the computer software instructions used to execute the method of the first aspect, containing the programs designed for the foregoing aspects.
In the present application, the names of the electronic device, the interaction system, and the like do not limit the devices themselves; in actual implementation these devices may appear under other names. As long as the functions of each device are similar to those of the present application, they fall within the scope of the claims of the present application and their equivalents.
These and other aspects of the present application will be more clearly understood from the description of the following embodiments.
Brief Description of the Drawings
Fig. 1 shows a schematic diagram of an interaction method based on an augmented reality device according to an embodiment of the present application.
Fig. 2 shows a schematic diagram of the positional relationship between virtual scenes and the augmented reality device according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of the relationship between the target scene coordinate system and the real environment coordinate system according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of the position interaction between the augmented reality device and virtual interactive objects in the target scene coordinate system according to an embodiment of the present application.
Fig. 5 shows a schematic block diagram of an interaction system based on an augmented reality device according to an embodiment of the present application.
Fig. 6 shows a schematic block diagram of a computer system of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings.
It should be understood that the technical solutions of the embodiments of the present application can be applied to various augmented reality devices, for example AR (Augmented Reality) glasses or AR helmets, or to smart terminal devices such as mobile phones and tablet computers equipped with a rear camera.
In existing role-playing games, users generally watch the game screen on a display; in virtual reality games, users watch virtual pictures immersively through a helmet. These game forms, however, can only be realized at fixed locations and cannot be combined with real scenes or objects. At present, more and more augmented reality games have appeared. The characteristic of augmented reality games is to superimpose the game scene (i.e., a virtual scene) on the picture of the real scene so that the game scene and the real scene interact.
In existing augmented reality games, however, inaccurate user positioning means the augmented reality game scene may not be loaded accurately. Moreover, after entering the game scene, when the user interacts with virtual objects in the augmented reality game scene, inaccurate user positioning may prevent precise interaction with the virtual objects, leading to a poor user experience. A method is therefore needed that can improve the positioning accuracy of augmented reality devices.
Fig. 1 shows a schematic diagram of an interaction method based on an augmented reality device according to an embodiment of the present application. As shown in Fig. 1, the interaction method includes some or all of the following:
S11: acquiring current location information of the augmented reality device, and confirming whether a loadable scene is included within a preset range of the current location;
S12: when a loadable scene is included within the preset range of the current location, obtaining the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene;
S13: when the augmented reality device enters the loading range of the target scene, loading the target scene model to display the target scene in the augmented reality device.
Specifically, the above augmented reality device may be a smart terminal device such as AR glasses or an AR helmet. Taking AR glasses as an example, a binocular or monocular see-through optical engine can be arranged on the glasses frame; through this optical engine, dynamic data such as videos, charts, instruction information, and control information can be displayed to the user without affecting observation of the surrounding environment. In addition, the AR glasses can be equipped with a camera component, which may include a high-definition camera, a depth camera, and so on. The AR glasses may also carry sensors such as a gyroscope, an acceleration sensor, a magnetometer, and a light sensor, or a nine-axis sensor, for example a combination of a three-axis gyroscope, a three-axis accelerometer, and a three-axis geomagnetometer, or a combination of a six-axis acceleration sensor and a three-axis gyroscope, or a combination of a six-axis gyroscope and a three-axis accelerometer. The AR glasses may further be configured with a GPS component, a Bluetooth component, a power component, and an input device. The AR glasses may also be connected to a controller on which the aforementioned GPS component, Bluetooth component, WiFi component, power component, input device, processor, memory, and other modules or units can be assembled. Of course, a data interface may also be provided on the AR glasses body or the controller to facilitate data transmission and connection with external devices. The present disclosure does not specifically limit the structure and form of the AR glasses.
Alternatively, the augmented reality device may be a smart terminal device such as a mobile phone or tablet computer configured with a rear camera, a sensor component, and an augmented reality application. For example, after an augmented reality application is installed on a mobile phone, the phone screen can serve as the display, showing the real environment together with virtual controls, and so on. In the following embodiments, AR glasses are taken as an example of the augmented reality device.
Optionally, in the embodiments of the present application, the above loadable scene may be a virtual game scene containing different content. Each virtual scene may have boundaries and display ranges of different shapes, and a corresponding coordinate range may be pre-configured according to the display range of each virtual scene. For example, in an augmented reality game, the GPS component mounted on the AR glasses can be used to obtain the current location information, and the map can be used to judge whether a loadable virtual scene exists near the current location. For example, as shown in Fig. 2, in the current scene the game map around the user 201 includes loadable virtual scene 211, virtual scene 212, virtual scene 213, virtual scene 214, and virtual scene 215. Alternatively, a circle may be drawn with the current position of the user 201 as the center and a preset distance as the radius, and it is determined whether a loadable scene exists within this range. For example, as shown in Fig. 2, virtual scene 212 and virtual scene 214 lie within the preset range of the current position of the user 201, so virtual scene 212 and virtual scene 214 are loadable scenes.
To improve the user experience and the sense of immersion in the game, the loading range of each virtual scene can be preset. For example, the loading range corresponding to each virtual scene can be configured according to its actual position in the real scene and the surrounding environment. If a virtual scene is displayed on a relatively open square with no obstacles blocking the line of sight, the scene should be loaded as soon as a user with normal vision could see it, so it can be configured with a larger loading range. If the virtual scene is located indoors or in a relatively small space, such as under a tree or at the corner of a wall, it can be configured with a smaller loading range. When the current field of view of the augmented reality device faces the scene, the corresponding target virtual scene is loaded, making the viewing experience closer to reality and avoiding the situation where the target scene is displayed only after the user has already entered its effective range. Furthermore, for a virtual scene, the loading range and the display range may be different coordinate ranges; for example, as shown in Fig. 2, the loading ranges of virtual scene 211, virtual scene 212, virtual scene 213, and virtual scene 214 are all larger than their actual display ranges. The loading range of a virtual scene may also be the same as its display range, as shown for virtual scene 215 in Fig. 2.
After the loadable scenes around the current position of the user 201 are obtained, the distance between the user 201 and each loadable scene can be calculated. For example, the current distance between the user's position and the center coordinates of each loadable scene can be calculated from the coordinate data. If this distance is less than or equal to the radius from the scene's center coordinates to the edge of its loading range, the device is considered to have entered the loading range of that scene; otherwise, it has not.
When it is judged that the augmented reality device has entered the loading range of the target scene, the target scene model can be loaded and the target scene displayed on the interface of the augmented reality device. The model of each target scene can be stored locally or on a network server. The target scene is then shown, for example, in the AR glasses. Pre-loading the map model of a virtual scene as soon as the user is judged to have entered its loading range can effectively improve the user's viewing experience and sense of immersion.
In addition, when the target scene model is loaded for displaying the target scene in the augmented reality device, the task list corresponding to the target scene can also be read for display in the augmented reality device. The task list may include data such as introduction information and task information of the target scene.
Optionally, in the embodiments of the present application, when the augmented reality device enters the loading range of the target scene, the above method may further include:
S131: generating a trigger instruction to activate the camera component and the sensor component of the augmented reality device;
S132: using the camera component to obtain image data corresponding to the current field of view of the augmented reality device, and using the sensor component to obtain motion data of the augmented reality device;
S133: combining the image data and the motion data to obtain the position information of the augmented reality device in the target scene coordinate system.
Specifically, the coordinate system of the target scene in the above augmented reality environment may be established based on the real environment; as shown in Fig. 3, it may adopt the same scale as the real environment. When the augmented reality device enters the loading range of the target scene and starts loading the corresponding model, a trigger instruction can be generated; according to this instruction the augmented reality device activates the camera component and the sensor component to start collecting data, and the position information of the augmented reality device in the target scene coordinate system is acquired.
Optionally, in the embodiments of the present application, combining the image data and the motion data to obtain the position information of the augmented reality device in the target scene coordinate system may include the following steps:
S1331: recognizing the depth image to obtain depth data of a target object, so as to obtain the distance between the augmented reality device and the target object according to the depth data; and
S1332: reading sensor data of the augmented reality device, and obtaining an action recognition result of the augmented reality device according to the sensor data;
S1333: combining the action recognition result and the distance between the augmented reality device and the target object to determine the scene position information of the augmented reality device in the target scene coordinate system.
Specifically, one or more target objects can be pre-configured in each augmented reality scene to be displayed, including the target scene. A target object can be an existing object in the real scene, such as a marked telephone pole, street sign, or trash can; of course, it can also be an object with marker information configured specifically for each virtual scene. The coordinates of each target object in the target scene coordinate system can be determined in advance.
The camera component assembled on the AR glasses includes at least one depth camera, for example a ToF module. The depth camera can capture the depth image corresponding to the real scene in the current field of view of the augmented reality device. The target object in the depth image is recognized, the depth information is obtained, and the distance is taken as the distance recognition result, so that the distance between the AR glasses and at least one target object is obtained from the depth data.
Specifically, when the depth image captured for the current field of view of a user wearing AR glasses is recognized and two different target objects A and B are identified, and A and B lie in the same plane, circles can be drawn in the target scene coordinate system with A and B as centers and the corresponding recognized distances as radii; the intersection of the two circles is the current position of the AR glasses in the target scene coordinate system.
Specifically, when calculating the user's scene coordinates in the target scene coordinate system, the sensor data of the augmented reality device can also be read, and an action recognition result of the augmented reality device obtained from it; combining this action recognition result with the distance data between the augmented reality device and the target object in the target scene coordinate system determines more accurate coordinate information of the augmented reality device in the target scene coordinate system.
Specifically, the horizontal and vertical angles of the user's line of sight can be calculated from the data collected by the nine-axis sensor, from which the horizontal and vertical angles between the AR glasses and the matched target object are obtained. In the target scene coordinate system, combining the distance between the AR glasses and the target object with this horizontal and vertical angle information allows the user's precise current coordinates in the target scene coordinate system to be calculated more accurately. For example, when a user stands on the ground and looks at a target object suspended in mid-air, the line of sight forms an angle with both the horizontal and vertical directions. The nine-axis sensor can recognize the user's head-raising movement and the specific angle, so that the user's position can be located more accurately from this angle data combined with the coordinates of the target object and the recognized distance.
As an alternative embodiment, there may be multiple target objects, and the interaction method may further include: performing the calculation with each target object to obtain a plurality of corresponding pieces of scene position information, and performing position verification based on these to obtain accurate scene position information.
Specifically, two or more target objects identified in the depth image can each be used to calculate the user's precise coordinates with the above method, and the multiple sets of precise coordinates can then be checked against each other to obtain the final precise coordinates.
Optionally, in the embodiments of the present application, it is determined in the target scene coordinate system whether the augmented reality device enters the effective interaction range of a virtual interactive object, so that when the augmented reality device enters the effective interaction range of the virtual object, interaction with that virtual interactive object is triggered.
Specifically, the target scene may include mobile and fixed virtual interactive objects.
As an alternative embodiment, for a mobile virtual interactive object, the above interaction method may further include:
S211: acquiring the current scene position information of the mobile virtual object in the target scene coordinate system to determine the current effective interaction range of the mobile virtual object;
S212: if the current user interaction range of the augmented reality device overlaps the effective interaction range of the mobile virtual interactive object, determining that the augmented reality device enters the effective interaction range of the mobile virtual interactive object.
Specifically, as shown in Fig. 4, the target scene may include virtual objects such as NPCs (non-player characters) and stores. Each virtual object can be pre-configured with a certain effective interaction range. For example, as shown in Fig. 4, virtual object 411 has a larger effective interaction range and virtual object 412 has a smaller one. The size of each virtual object's effective interaction range can be determined according to specific needs or the characteristics of the character. For a mobile virtual interactive object, its current coordinates can be determined first, and the coordinates corresponding to its effective interaction range at the current moment are then calculated from its preset interaction range. In addition, for the augmented reality device 401, an interaction range can likewise be configured in advance.
Optionally, in the embodiments of the present application, the current user interaction range of the augmented reality device is calculated from the current scene position information of the augmented reality device in the target scene coordinate system and a preset interaction range.
Specifically, for the augmented reality device, the current user interaction range can also be calculated from the current coordinates in the target scene coordinate system and the preset interaction range. When the current user interaction range of the augmented reality device overlaps the effective interaction range of a mobile virtual interactive object, it is determined that the augmented reality device has entered the effective interaction range of that object, and interaction with it can begin, for example dialogue or receiving task data.
As an alternative embodiment, for a fixed virtual interactive object, the above interaction method may include: acquiring the current scene position information of the augmented reality device in the target scene coordinate system, and, when the current position lies within the effective interaction range corresponding to the fixed virtual interactive object, determining that the augmented reality device enters the effective interaction range of the fixed virtual interactive object.
Specifically, for a fixed virtual interactive object, the effective interaction range is a fixed coordinate range. When the current coordinates of the augmented reality device fall within the fixed effective interaction range of a fixed virtual interactive object, interaction with that object is triggered; alternatively, when the current user interaction range of the augmented reality device overlaps the effective interaction range of the fixed virtual interactive object, interaction with it is triggered.
Therefore, the interaction method based on an augmented reality device of the embodiments of the present application pre-judges which augmented reality scenes around the user can be loaded and pre-loads one or more scenes to be loaded once they come within a specific range, so that the augmented reality game area scene can be triggered and loaded in time, improving the user experience. In addition, after the user enters the target scene, a more accurate position of the user in the target scene coordinate system can be located by recognizing images captured of the user's current field of view combined with the action recognition result. This enables the user to interact more accurately with virtual objects in the augmented reality scene, realizing accurate positioning of the augmented reality scene and accurate positioning within its coordinate system, effectively improving the user experience.
It should be understood that the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
The interaction method based on an augmented reality device according to the embodiments of the present application has been described in detail above. The interaction system based on an augmented reality device according to the embodiments of the present application is described below with reference to the drawings; the technical features described in the method embodiments apply to the following system embodiments.
Fig. 5 shows a schematic block diagram of an interaction system 50 based on an augmented reality device according to an embodiment of the present application. As shown in Fig. 5, the interaction system 50 includes:
a loadable scene judgment module 501 for acquiring current location information of the augmented reality device and confirming whether a loadable scene is included within a preset range of the current location;
a target scene judgment module 502 for obtaining, when a loadable scene is included within the preset range of the current location, the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene;
a target scene loading module 503 for loading, when the augmented reality device enters the loading range of the target scene, the target scene model to display the target scene in the augmented reality device.
Therefore, the interaction system of the embodiments of the present application enables users to interact more accurately with virtual objects in augmented reality scenes, realizing accurate positioning of the augmented reality scene and accurate positioning within its coordinate system, effectively improving the user experience.
Optionally, in the embodiments of the present application, the interaction system 50 further includes:
a component activation module for generating a trigger instruction for activating the camera component and the sensor component of the augmented reality device;
a data collection module for using the camera component to obtain image data corresponding to the current field of view of the augmented reality device, and using the sensor component to obtain motion data of the augmented reality device;
a position information calculation module for combining the image data and the motion data to obtain the position information of the augmented reality device in the target scene coordinate system.
Optionally, in the embodiments of the present application, the position information calculation module includes:
an image processing unit for recognizing the depth image to obtain depth data of the target object, so as to obtain the distance between the augmented reality device and the target object according to the depth data;
a sensor data processing unit for reading sensor data of the augmented reality device and obtaining an action recognition result of the augmented reality device according to the sensor data;
a result calculation unit for combining the action recognition result and the distance between the augmented reality device and the target object to determine the scene position information of the augmented reality device in the target scene coordinate system.
Optionally, in the embodiments of the present application, the interaction system 50 further includes:
a virtual interactive object recognition module for determining in the target scene coordinate system whether the augmented reality device enters the effective interaction range of a virtual interactive object, so that when the augmented reality device enters the effective interaction range of a virtual object, interaction with that virtual interactive object is triggered.
Optionally, in the embodiments of the present application, the virtual interactive object is a mobile virtual interactive object, and the virtual interactive object recognition module includes:
a mobile object interaction range calculation unit for acquiring, when the virtual interactive object is a mobile virtual interactive object, the current scene position information of the mobile virtual object in the target scene coordinate system to determine the current effective interaction range of the mobile virtual object;
a first interaction judgment unit for determining, if the current user interaction range of the augmented reality device overlaps the effective interaction range of the mobile virtual interactive object, that the augmented reality device enters the effective interaction range of the mobile virtual interactive object.
Optionally, in the embodiments of the present application, the virtual interactive object is a fixed virtual interactive object, and the virtual interactive object recognition module further includes:
a second interaction judgment unit for acquiring, when the virtual interactive object is a fixed virtual interactive object, the current scene position information of the augmented reality device in the target scene coordinate system, and determining, when the current position lies within the effective interaction range corresponding to the fixed virtual interactive object, that the augmented reality device enters the effective interaction range of the fixed virtual interactive object.
Optionally, in the embodiments of the present application, the current user interaction range of the augmented reality device is calculated from the current scene position information of the augmented reality device in the target scene coordinate system and a preset interaction range.
Optionally, in the embodiments of the present application, there are multiple target objects, and the interaction system 50 further includes:
a position information verification module for performing the calculation with each target object to obtain a plurality of corresponding pieces of scene position information, and performing position verification based on these to obtain accurate scene position information.
Optionally, in the embodiments of the present application, the interaction system further includes:
a task list acquisition module for reading, when the target scene model is loaded for displaying the target scene in the augmented reality device, the task list corresponding to the target scene for display in the augmented reality device.
It should be understood that the above and other operations and/or functions of the units in the interaction system 50 according to the embodiments of the present application implement the corresponding flows of the method in Fig. 1; for brevity, they are not repeated here.
Fig. 6 shows a computer system 600 of an electronic device implementing an embodiment of the present application; the electronic device may be an augmented reality device such as AR glasses or an AR helmet.
The computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage part 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for system operation. The CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input part 606 including a keyboard, a mouse, etc.; an output part 607 including a cathode ray tube (CRT) or liquid crystal display (LCD) and speakers; a storage part 608 including a hard disk; and a communication part 609 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication part 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, optical disc, magneto-optical disk, or semiconductor memory, is installed on the drive 610 as needed so that the computer program read from it can be installed into the storage part 608 as needed.
In particular, according to the embodiments of the present application, the processes described with reference to the flowchart can be implemented as computer software programs. For example, an embodiment of the present application includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication part 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the various functions defined in the system of the present application are executed.
Therefore, the computer system 600 of the embodiments of the present application can realize accurate positioning of the target scene and display it in time, effectively deepening the user's sense of immersion and improving the user experience.
It should be noted that the computer-readable medium shown in the embodiments of the present application stores a computer program which, when executed by a processor, implements the interaction method based on an augmented reality device of the present application.
Specifically, the computer-readable medium shown in the embodiments of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in combination with an instruction execution system, apparatus, or device. A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, or any suitable combination of the above.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above can refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of units is only a logical functional division, and in actual implementation there may be other division methods, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage media include: a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in the present application, which should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. An interaction method based on an augmented reality device, comprising:
    acquiring current location information of the augmented reality device, and confirming whether a loadable scene is included within a preset range of the current location;
    when a loadable scene is included within the preset range of the current location, obtaining the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene;
    when the augmented reality device enters the loading range of the target scene, loading the target scene model for displaying the target scene in the augmented reality device.
  2. The interaction method according to claim 1, wherein when the augmented reality device enters the loading range of the target scene, the method further comprises:
    generating a trigger instruction to activate a camera component and a sensor component of the augmented reality device;
    using the camera component to obtain image data corresponding to the current field of view of the augmented reality device, and using the sensor component to obtain motion data of the augmented reality device;
    combining the image data and the motion data to obtain position information of the augmented reality device in the target scene coordinate system.
  3. The interaction method according to claim 2, wherein the image data comprises a depth image;
    the combining the image data and the motion data to obtain the position information of the augmented reality device in the target scene coordinate system comprises:
    recognizing the depth image to obtain depth data of a target object, so as to obtain the distance between the augmented reality device and the target object according to the depth data; and
    reading sensor data of the augmented reality device, and obtaining an action recognition result of the augmented reality device according to the sensor data;
    combining the action recognition result and the distance between the augmented reality device and the target object to determine scene position information of the augmented reality device in the target scene coordinate system.
  4. The interaction method according to claim 3, wherein the method further comprises:
    determining, in the target scene coordinate system, whether the augmented reality device enters an effective interaction range of a virtual interactive object, so that when the augmented reality device enters the effective interaction range of the virtual object, interaction with the virtual interactive object is triggered.
  5. The interaction method according to claim 4, wherein the virtual interactive object is a mobile virtual interactive object;
    the determining whether the augmented reality device enters an effective interaction range of a virtual interactive object comprises:
    acquiring current scene position information of the mobile virtual object in the target scene coordinate system to determine the current effective interaction range of the mobile virtual object;
    if the current user interaction range of the augmented reality device overlaps the effective interaction range of the mobile virtual interactive object, determining that the augmented reality device enters the effective interaction range of the mobile virtual interactive object.
  6. The interaction method according to claim 4, wherein the virtual interactive object is a fixed virtual interactive object;
    the determining whether the augmented reality device enters an effective interaction range of a virtual interactive object comprises:
    acquiring current scene position information of the augmented reality device in the target scene coordinate system, and when the current position lies within the effective interaction range corresponding to the fixed virtual interactive object, determining that the augmented reality device enters the effective interaction range of the fixed virtual interactive object.
  7. The interaction method according to claim 5 or 6, wherein the current user interaction range of the augmented reality device is calculated from the current scene position information of the augmented reality device in the target scene coordinate system and a preset interaction range.
  8. The interaction method according to claim 3, wherein there are a plurality of the target objects; the method further comprises:
    performing the calculation with each of the target objects to obtain a corresponding plurality of pieces of the scene position information;
    performing position verification based on the plurality of pieces of the scene position information to obtain accurate scene position information.
  9. The interaction method according to claim 1, wherein when the target scene model is loaded for displaying the target scene in the augmented reality device, the method further comprises:
    reading a task list corresponding to the target scene for displaying the task list in the augmented reality device.
  10. An interaction system based on an augmented reality device, comprising:
    a loadable scene judgment module, configured to acquire current location information of the augmented reality device and confirm whether a loadable scene is included within a preset range of the current location;
    a target scene judgment module, configured to obtain, when a loadable scene is included within the preset range of the current location, the distance between the current location and the loadable scene to determine whether the device enters the loading range of a target scene;
    a target scene loading module, configured to load, when the augmented reality device enters the loading range of the target scene, the target scene model for displaying the target scene in the augmented reality device.
  11. The interaction system according to claim 10, wherein the interaction system further comprises:
    a component activation module, configured to generate a trigger instruction for activating a camera component and a sensor component of the augmented reality device;
    a data collection module, configured to use the camera component to obtain image data corresponding to the current field of view of the augmented reality device, and use the sensor component to obtain motion data of the augmented reality device;
    a position information calculation module, configured to combine the image data and the motion data to obtain position information of the augmented reality device in the target scene coordinate system.
  12. The interaction system according to claim 11, wherein the image data comprises a depth image; the position information calculation module comprises:
    an image processing unit, configured to recognize the depth image to obtain depth data of a target object, so as to obtain the distance between the augmented reality device and the target object according to the depth data;
    a sensor data processing unit, configured to read sensor data of the augmented reality device and obtain an action recognition result of the augmented reality device according to the sensor data;
    a result calculation unit, configured to combine the action recognition result and the distance between the augmented reality device and the target object to determine scene position information of the augmented reality device in the target scene coordinate system.
  13. The interaction system according to claim 12, wherein the interaction system further comprises:
    a virtual interactive object recognition module, configured to determine, in the target scene coordinate system, whether the augmented reality device enters an effective interaction range of a virtual interactive object, so that when the augmented reality device enters the effective interaction range of the virtual object, interaction with the virtual interactive object is triggered.
  14. The interaction system according to claim 13, wherein the virtual interactive object is a mobile virtual interactive object; the virtual interactive object recognition module comprises:
    a mobile object interaction range calculation unit, configured to acquire, when the virtual interactive object is a mobile virtual interactive object, current scene position information of the mobile virtual object in the target scene coordinate system to determine the current effective interaction range of the mobile virtual object;
    a first interaction judgment unit, configured to determine, if the current user interaction range of the augmented reality device overlaps the effective interaction range of the mobile virtual interactive object, that the augmented reality device enters the effective interaction range of the mobile virtual interactive object.
  15. The interaction system according to claim 13, wherein the virtual interactive object is a fixed virtual interactive object; the virtual interactive object recognition module further comprises:
    a second interaction judgment unit, configured to acquire, when the virtual interactive object is a fixed virtual interactive object, current scene position information of the augmented reality device in the target scene coordinate system, and determine, when the current position lies within the effective interaction range corresponding to the fixed virtual interactive object, that the augmented reality device enters the effective interaction range of the fixed virtual interactive object.
  16. The interaction system according to claim 14 or 15, wherein the current user interaction range of the augmented reality device is calculated from the current scene position information of the augmented reality device in the coordinate system of the target scene and a preset interaction range.
  17. The interaction system according to claim 12, wherein there are a plurality of the target objects; the interaction system further comprises:
    a position information verification module, configured to perform the calculation with each of the target objects to acquire a plurality of corresponding pieces of the scene position information, and perform position verification based on the plurality of pieces of the scene position information to acquire accurate scene position information.
  18. The interaction system according to claim 10, wherein the interaction system further comprises:
    a task list acquisition module, configured to, when the model of the target scene is loaded for displaying the target scene in the augmented reality device, read a task list corresponding to the target scene for displaying the task list in the augmented reality device.
  19. An electronic device, wherein the electronic device comprises:
    one or more processors; and
    a storage apparatus, configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the interaction method based on an augmented reality device according to any one of claims 1 to 9.
  20. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the interaction method based on an augmented reality device according to any one of claims 1 to 9.
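
Illustrative code sketches (editorial)

The short Python sketches below are editorial additions, not part of the claims or the disclosed embodiments. They assume two-dimensional scene coordinates, circular ranges, and a device position already expressed as a coordinate pair; every identifier in them (Scene, find_target_scene, and so on) is hypothetical. The proximity-gated scene loading of claim 1 might look like this:

    import math
    from dataclasses import dataclass

    @dataclass
    class Scene:
        name: str
        anchor: tuple        # (x, y) scene anchor in world coordinates
        load_radius: float   # entering this radius triggers model loading

    def find_target_scene(device_pos, scenes, preset_range):
        # Step 1: keep only the loadable scenes inside the preset search
        # range around the device's current position.
        nearby = [s for s in scenes
                  if math.dist(device_pos, s.anchor) <= preset_range]
        # Step 2: a nearby scene whose load radius the device has entered
        # becomes the target scene; its model would then be loaded.
        for scene in nearby:
            if math.dist(device_pos, scene.anchor) <= scene.load_radius:
                return scene
        return None

For example, with a scene anchored at (3.0, 4.0) and a 6-metre load radius, a device at the origin is 5 metres away, so that scene would be returned as the target scene and its model loaded for display.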
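
Claims 2 and 3 fuse depth-image data with a sensor-derived motion result to place the device in the target scene's coordinate system. The sketch below is a minimal reading under the strong simplification that one recognized landmark with known scene coordinates, one depth-derived distance, and one sensor-derived viewing direction suffice; distance_from_depth and scene_position are invented names, and a real implementation would involve full visual-inertial tracking:

    import numpy as np

    def distance_from_depth(depth_image, target_mask):
        # Depth data of the recognized target object: the median over its
        # pixels is a simple, outlier-tolerant distance estimate.
        return float(np.median(depth_image[target_mask]))

    def scene_position(landmark_pos, distance, view_dir):
        # Given the landmark's known scene coordinates, the depth-derived
        # distance, and a viewing direction taken from the sensor-based
        # motion result, the device sits `distance` metres from the
        # landmark, opposite the viewing direction.
        d = np.asarray(view_dir, dtype=float)
        d = d / np.linalg.norm(d)
        return np.asarray(landmark_pos, dtype=float) - distance * d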
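
Under the assumption of circular ranges, claims 5 and 7 reduce to an overlap test between the device's current user interaction range (its scene position plus a preset radius, per claim 7) and the moving object's current range. This is one plausible reading, not the patent's prescribed computation:

    import math

    def user_interaction_range(scene_pos, preset_radius):
        # Claim 7: the device's current user interaction range is derived
        # from its scene position and a preset interaction radius.
        return (scene_pos, preset_radius)

    def enters_moving_object_range(device_range, obj_pos, obj_radius):
        # Two circular ranges overlap when the distance between their
        # centres does not exceed the sum of their radii.
        dev_pos, dev_radius = device_range
        return math.dist(dev_pos, obj_pos) <= dev_radius + obj_radius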
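
For the fixed virtual interaction object of claim 6, the same test degenerates to containment of the device's current scene position in a static range; again a hypothetical sketch:

    import math

    def enters_fixed_object_range(device_scene_pos, obj_pos, obj_radius):
        # The object's effective interaction range is static, so it
        # suffices to test whether the device's current scene position
        # lies inside that range.
        return math.dist(device_scene_pos, obj_pos) <= obj_radius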
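
Claim 8's position verification across multiple target objects can be read, for instance, as robust aggregation of the per-landmark position estimates. The median-gated average below is one possible interpretation, with verified_scene_position and the 0.5-metre tolerance chosen purely for illustration:

    import numpy as np

    def verified_scene_position(estimates, tolerance=0.5):
        # Discard any per-landmark estimate farther than `tolerance`
        # metres from the coordinate-wise median, then average the
        # survivors into a refined scene position.
        pts = np.asarray(estimates, dtype=float)
        med = np.median(pts, axis=0)
        keep = np.linalg.norm(pts - med, axis=1) <= tolerance
        if not keep.any():
            return med          # fall back to the median if all diverge
        return pts[keep].mean(axis=0)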
PCT/CN2020/102478 2019-08-19 2020-07-16 Interactive method and system based on augmented reality device, electronic device, and computer readable medium WO2021031755A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20853764.7A EP3978089A4 (en) 2019-08-19 2020-07-16 INTERACTIVE METHOD AND SYSTEM BASED ON AN AUGMENTED REALITY DEVICE, ELECTRONIC DEVICE AND COMPUTER READABLE MEDIA
US17/563,144 US20220122331A1 (en) 2019-08-19 2021-12-28 Interactive method and system based on augmented reality device, electronic device, and computer readable medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910765900.8 2019-08-19
CN201910765900.8A CN110478901B (zh) 2019-08-19 Interaction method and system based on augmented reality device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/563,144 Continuation US20220122331A1 (en) 2019-08-19 2021-12-28 Interactive method and system based on augmented reality device, electronic device, and computer readable medium

Publications (1)

Publication Number Publication Date
WO2021031755A1 (zh)

Family

ID=68552079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102478 WO2021031755A1 (zh) 2019-08-19 2020-07-16 Interactive method and system based on augmented reality device, electronic device, and computer readable medium

Country Status (4)

Country Link
US (1) US20220122331A1 (zh)
EP (1) EP3978089A4 (zh)
CN (1) CN110478901B (zh)
WO (1) WO2021031755A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11538199B2 (en) * 2020-02-07 2022-12-27 Lenovo (Singapore) Pte. Ltd. Displaying a window in an augmented reality view
CN113262478B (zh) * 2020-02-17 2023-08-25 Oppo广东移动通信有限公司 Augmented reality processing method and apparatus, storage medium, and electronic device
CN113516989A (zh) * 2020-03-27 2021-10-19 浙江宇视科技有限公司 Sound source audio management method, apparatus, device, and storage medium
CN111790151A (zh) * 2020-06-28 2020-10-20 上海米哈游天命科技有限公司 Method and apparatus for loading objects in a scene, storage medium, and electronic device
CN112330820A (zh) * 2020-11-12 2021-02-05 北京市商汤科技开发有限公司 Information display method and apparatus, electronic device, and storage medium
CN113791846A (zh) * 2020-11-13 2021-12-14 北京沃东天骏信息技术有限公司 Information display method and apparatus, electronic device, and storage medium
CN117354568A (zh) * 2022-06-27 2024-01-05 华为技术有限公司 Display method, device, and system
CN115268655A (zh) * 2022-08-22 2022-11-01 江苏泽景汽车电子股份有限公司 Interaction method and system based on augmented reality, vehicle, and storage medium
CN115082648B (zh) * 2022-08-23 2023-03-24 海看网络科技(山东)股份有限公司 AR scene arrangement method and system based on marker model binding
CN117687718A (zh) * 2024-01-08 2024-03-12 元年科技(珠海)有限责任公司 Virtual digital scene display method and related apparatus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9398287B2 (en) * 2013-02-28 2016-07-19 Google Technology Holdings LLC Context-based depth sensor control
CN103257876B (zh) * 2013-04-28 2016-04-13 福建天晴数码有限公司 Method for dynamically loading C3 game maps
CN107767459A (zh) * 2016-08-18 2018-03-06 深圳市劲嘉数媒科技有限公司 Display method, apparatus, and system based on augmented reality
CN106547599B (zh) * 2016-11-24 2020-05-05 腾讯科技(深圳)有限公司 Method and terminal for dynamically loading resources
IT201700058961A1 (it) * 2017-05-30 2018-11-30 Artglass S R L Method and system for enjoying editorial content at a site, preferably a cultural, artistic, landscape, naturalistic, fair, or exhibition site
CN107198876B (zh) * 2017-06-07 2021-02-05 北京小鸟看看科技有限公司 Method and apparatus for loading game scenes
CN113975808A (zh) * 2017-10-31 2022-01-28 多玩国株式会社 Input interface system, control method of input interface, and storage medium storing control program
CN108434739B (zh) * 2018-01-30 2019-03-19 网易(杭州)网络有限公司 Method and apparatus for processing virtual resources in game scenes
CN108568112A (zh) * 2018-04-20 2018-09-25 网易(杭州)网络有限公司 Method, apparatus, and electronic device for generating game scenes
CN108854070A (zh) * 2018-06-15 2018-11-23 网易(杭州)网络有限公司 In-game information prompting method and apparatus, and storage medium
CN109685909B (zh) * 2018-11-12 2022-12-20 腾讯科技(深圳)有限公司 Image display method and apparatus, storage medium, and electronic apparatus
CN109782901A (zh) * 2018-12-06 2019-05-21 网易(杭州)网络有限公司 Augmented reality interaction method and apparatus, computer device, and storage medium
CN109656441B (zh) * 2018-12-21 2020-11-06 广州励丰文化科技股份有限公司 Guided tour method and system based on virtual reality
CN109886191A (zh) * 2019-02-20 2019-06-14 上海昊沧系统控制技术有限责任公司 AR-based identification object management method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6166744A (en) * 1997-11-26 2000-12-26 Pathfinder Systems, Inc. System for combining virtual images with real-world scenes
CN106020493A (zh) * 2016-03-13 2016-10-12 成都市微辣科技有限公司 Product display apparatus and method based on virtual reality
CN108415570A (zh) * 2018-03-07 2018-08-17 网易(杭州)网络有限公司 Control selection method and apparatus based on augmented reality
CN108499103A (zh) * 2018-04-16 2018-09-07 网易(杭州)网络有限公司 Display method and apparatus for scene elements

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113577766A (zh) * 2021-08-05 2021-11-02 百度在线网络技术(北京)有限公司 Object processing method and apparatus
CN113577766B (zh) * 2021-08-05 2024-04-02 百度在线网络技术(北京)有限公司 Object processing method and apparatus
CN115268749A (zh) * 2022-07-20 2022-11-01 广州视享科技有限公司 Control method of augmented reality device, mobile terminal, and anti-occlusion system
CN115268749B (zh) * 2022-07-20 2024-04-09 广州视享科技有限公司 Control method of augmented reality device, mobile terminal, and anti-occlusion system

Also Published As

Publication number Publication date
US20220122331A1 (en) 2022-04-21
CN110478901B (zh) 2023-09-22
EP3978089A4 (en) 2022-08-10
CN110478901A (zh) 2019-11-22
EP3978089A1 (en) 2022-04-06

Similar Documents

Publication Publication Date Title
WO2021031755A1 (zh) Interactive method and system based on augmented reality device, electronic device, and computer readable medium
US11127210B2 (en) Touch and social cues as inputs into a computer
US10488659B2 (en) Apparatus, systems and methods for providing motion tracking using a personal viewing device
EP3137976B1 (en) World-locked display quality feedback
US9401050B2 (en) Recalibration of a flexible mixed reality device
EP3011418B1 (en) Virtual object orientation and visualization
US20130174213A1 (en) Implicit sharing and privacy control through physical behaviors using sensor-rich devices
KR20110136012A (ko) Augmented reality device for tracking position and gaze direction
CN111290580B (zh) Gaze-tracking-based calibration method and related apparatus
US11151804B2 (en) Information processing device, information processing method, and program
KR102190743B1 (ko) Apparatus and method for providing an augmented reality service that interacts with a robot
US20180160093A1 (en) Portable device and operation method thereof
KR101914660B1 (ko) Method and apparatus for controlling display of augmented reality content based on a gyro sensor
CN107848460A (zh) System, method, and device for a vehicle, and computer-readable medium
KR101939530B1 (ko) Method and apparatus for displaying augmented reality objects based on recognition of terrain information
CN115686233A (zh) Interaction method, apparatus, and interaction system between an active stylus and a display device
US20240078734A1 (en) Information interaction method and apparatus, electronic device and storage medium
US11176375B2 (en) Smart glasses lost object assistance
KR20190006584A (ko) Method and apparatus for displaying augmented reality objects based on recognition of terrain information
CN117173252A (zh) AR-HUD driver eyebox calibration method, system, device, and medium
CN116193246A (zh) Prompting method and apparatus for shooting video, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20853764

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020853764

Country of ref document: EP

Effective date: 20211229

NENP Non-entry into the national phase

Ref country code: DE