WO2020048441A1 - Communication connection method, terminal device, and wireless communication system - Google Patents

Communication connection method, terminal device, and wireless communication system

Info

Publication number
WO2020048441A1
Authority
WO
WIPO (PCT)
Prior art keywords
marker
controller
scene
terminal device
identification code
Prior art date
Application number
PCT/CN2019/104161
Other languages
English (en)
French (fr)
Inventor
王国泰
戴景文
贺杰
吴宜群
蔡丽妮
Original Assignee
广东虚拟现实科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN201811021765.8A external-priority patent/CN110875944B/zh
Priority claimed from CN201811023511.XA external-priority patent/CN110873963B/zh
Priority claimed from CN201811368617.3A external-priority patent/CN111198608B/zh
Application filed by 广东虚拟现实科技有限公司
Priority to US16/727,976 (US11375559B2)
Publication of WO2020048441A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00 Network data management
    • H04W 8/005 Discovery of network devices, e.g. terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 76/00 Connection management
    • H04W 76/10 Connection setup
    • H04W 76/14 Direct-mode setup
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/023 Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/029 Location-based management or tracking services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 84/00 Network topologies
    • H04W 84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks

Definitions

  • the present application relates to the field of computer technology, and in particular, to a communication connection method, a terminal device, and a wireless communication system.
  • Virtual Reality (VR) and Augmented Reality (AR) terminal devices have gradually come into people's lives and work. Users can observe a variety of three-dimensional virtual content through the VR/AR equipment they wear, and can also interact with the displayed three-dimensional virtual content through a controller or the like. Before using the controller for interaction, manual operations are usually required to establish a communication connection between the VR/AR device and the controller, which is cumbersome to operate.
  • a communication connection method includes: collecting an image containing a marker and identifying the marker in the image; when the marker is a marker of a controller, obtaining the identification code of the controller corresponding to the marker, where the identification code is used for pairing when the controller establishes a communication connection; and establishing a communication connection with the controller based on the identification code.
  • a wireless communication system includes: at least one marker; at least one controller, the marker being provided on the at least one controller; and at least one terminal device configured to identify the marker set on the at least one controller, obtain an identification code of the at least one controller, and establish a communication connection with the at least one controller based on the identification code.
  • a method for displaying virtual content includes: identifying a scene marker to determine the current scene where a terminal device is located; obtaining, from the server corresponding to the current scene, scene data matching the current scene; and displaying virtual content based on the scene data.
  • a system for displaying virtual content includes: at least one scene marker set in at least one scene; at least one server for storing scene data of the at least one scene; and at least one terminal device configured to establish a communication connection with the at least one server, identify the scene marker, determine the current scene according to the scene marker, obtain scene data matching the current scene from the connected server, and display virtual content according to the scene data.
  • an information prompting method includes: acquiring a target image collected by a camera, where the target image includes a marker; acquiring the relative spatial position relationship between the terminal device and the marker; and generating prompt information when the relative spatial position relationship satisfies a preset condition, where the preset condition is a condition on at least one of the position and the attitude of the marker.
  • a terminal device includes a memory and a processor, where the memory is coupled to the processor; the memory stores a computer program that, when executed by the processor, causes the processor to execute the method described above.
  • a computer-readable medium stores program code, and the program code can be called by a processor to perform the method as described above.
  • FIG. 1 is an application scenario diagram of a communication connection method in an embodiment
  • FIG. 2 is a structural block diagram of a terminal device in an embodiment
  • FIG. 3 is a schematic diagram of a communication connection between a terminal device and a server in an embodiment
  • FIG. 6 is a schematic diagram of a wireless mesh network in an embodiment
  • FIG. 7 is a schematic diagram of a wireless communication system in an embodiment
  • FIG. 8 is a flowchart of a method for displaying virtual content in an embodiment
  • FIG. 9 is a flowchart of a method for displaying virtual content in another embodiment
  • FIG. 10 is a flowchart of displaying a scene icon in an embodiment
  • FIG. 11a is a schematic diagram of a screen displaying a scene icon in an embodiment
  • FIG. 11b is a schematic diagram of a screen displaying a scene icon in another embodiment
  • FIG. 11c is a schematic diagram of a screen displaying a scene icon in still another embodiment
  • FIG. 11d is a schematic diagram of a screen displaying scene description information in an embodiment
  • FIG. 12a is a schematic diagram of a distance between a marker and a terminal device in an embodiment
  • FIG. 12b is a schematic diagram of a positional relationship between a marker and a boundary of a visual range of a camera in an embodiment
  • FIG. 12c is a schematic diagram of a distance between a marker and a boundary of a field of view of a camera provided in an embodiment
  • FIG. 12d is a schematic diagram of posture information of a marker relative to a terminal device in one embodiment
  • FIG. 13a is an interface diagram for prompting in an embodiment
  • FIG. 13b is an interface diagram for prompting in another embodiment.
  • an interaction system 10 provided in an embodiment of the present application includes a terminal device 20 and a controller 50.
  • the controller 50 is provided with a marker 30, and the terminal device 20 is provided with a camera.
  • the camera may collect an image containing the marker 30, and the terminal device 20 may identify the marker 30 in the image.
  • the terminal device 20 may determine the marker 30 as a marker on the controller 50, and obtain an identification code of the controller 50 corresponding to the marker 30.
  • the identification code may be used for pairing when the terminal device 20 establishes a communication connection with the controller 50; based on the identification code, the terminal device 20 can establish a communication connection with the controller 50.
  • the marker 30 has a pattern with a topological structure, where the topological structure refers to the connection relationships between the sub-markers and feature points in the marker 30, and represents the identity information of the marker 30.
  • the marker 30 may also be other patterns, which is not limited herein, as long as it can be identified and tracked by the terminal device 20.
  • the terminal device 20 may be a head-mounted display device or a mobile device such as a mobile phone or a tablet computer.
  • the head-mounted display device may be an integrated head-mounted display device. It may also be a head-mounted display device connected with an external electronic device.
  • the terminal device 20 may also be a smart terminal such as a mobile phone connected to an external or plug-in head-mounted display device; that is, the terminal device 20 serves as the processing and storage device of the head-mounted display device, which is plugged in or connected externally, and virtual objects are displayed on the head-mounted display device.
  • the terminal device 20 may include a processor 210 and a memory 220.
  • the memory 220 stores one or more computer programs, and may be configured to be executed by the processor 210 to implement the method described in the embodiment of the present application.
  • the processor 210 includes one or more processing cores.
  • the processor 210 uses various interfaces and lines to connect the various parts of the entire terminal device 20, and runs or executes instructions, programs, code sets, or instruction sets stored in the memory 220 and calls data stored in the memory 220 to execute the various functions of the terminal device 20 and process data.
  • the processor 210 may be implemented in at least one hardware form of digital signal processing (DSP), field programmable gate array (FPGA), and programmable logic array (PLA).
  • the processor 210 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), and a modem.
  • the CPU mainly handles the operating system, user interface, and application programs; the GPU is responsible for rendering and drawing the displayed content; the modem is used for wireless communication.
  • the modem may not be integrated into the processor 210, and may be implemented by a communication chip alone.
  • the memory 220 includes a random access memory (RAM) and a read-only memory (ROM).
  • the memory 220 may be used to store instructions, programs, codes, code sets, or instruction sets.
  • the memory 220 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the following method embodiments, and the like.
  • the data storage area may also store data created by the terminal device 20 during use.
  • the terminal device 20 is a head-mounted display device, which further includes one or more of the following components: a display module, an optical module, a communication module, and a power source.
  • the display module may include a display control unit.
  • the display control unit is configured to receive an image of the virtual content rendered by the processor, display and project the image onto the optical module, so that the user can view the virtual content through the optical module.
  • the display module may be a display screen or a projection device, and is used to display an image.
  • the optical module can adopt an off-axis optical system or a waveguide optical system. After the image of the display module passes through the optical module, it can be projected to the user's eyes. The user can see the image projected by the display module through the optical module.
  • the user can also observe the real environment through the optical module, and experience the visual effect of the virtual content superimposed on the real environment.
  • the terminal device is communicatively connected with the interactive device through the communication module to perform information and instruction interaction.
  • the power supply can supply power to the entire terminal equipment to ensure the normal operation of each component of the terminal equipment.
  • the camera 230 on the terminal device 20 is an infrared camera, and the marker 30 is covered with an infrared filter so that the marker pattern is invisible to the user; the marker 30 is illuminated by emitted infrared light so that the camera 230 can collect an image of the marker 30, which reduces the influence of visible light in the environment on the marker image and improves the accuracy of positioning and tracking.
  • the terminal device 20 may also communicate with the server 40 through a network.
  • the client terminal of the AR / VR application runs on the terminal device 20, and the server terminal of the AR / VR application corresponding to the client runs on the server 40.
  • the server 40 stores identity information of each marker, virtual image data bound to the marker corresponding to the identity information, and the like, and the terminal device 20 may perform data transmission with the server 40.
  • an embodiment of the present application further provides a communication connection method, which is applied to the terminal device 20.
  • the communication connection method may include steps S410 to S430.
  • Step S410 Collect an image containing a marker and identify the marker in the image.
  • the camera of the terminal device can collect images of the markers in the visual range.
  • the marker may include at least one sub-marker, where a sub-marker is a pattern having a certain shape; the distribution rules of the sub-markers differ between markers, so each marker has different identity information.
  • the terminal device obtains the identity information corresponding to the marker by identifying the sub-markers contained in the marker.
  • the identity information may be a code or the like that can uniquely identify the marker.
  • the markers included in the real scene may include, but are not limited to, scene markers, content display markers, controller markers, and the like. Scene markers can be identified by the terminal device to display the corresponding virtual scene; content display markers can be identified by the terminal device to display the corresponding virtual content images; controller markers can be identified by the terminal device to obtain information such as the position and attitude of the controller. Different types of markers correspond to different identity information.
  • Step S420 When the marker is a marker of the controller, obtain an identification code of the controller corresponding to the marker, where the identification code is used for pairing when the controller establishes a communication connection.
  • the terminal device obtains the identity information of the marker, and when it determines from the identity information that the marker is one set on a controller, the terminal device can obtain, from the image containing the marker, the identity of the controller on which the marker is set, as well as the position and attitude of the controller relative to the terminal device.
  • the terminal device may obtain an identification code of the controller on which the marker is set, where the identification code may be used for pairing when the controller establishes a communication connection; the communication connection may be a wireless communication connection such as Bluetooth, Wi-Fi, infrared, or radio frequency, or another wireless or wired communication connection, which is not limited herein.
  • the identification code may be radio frequency identification (RFID) information, and the controller can broadcast the RFID information to pair with the terminal device and establish a communication connection with it.
  • the identification code may be obtained by the terminal device by scanning the broadcast content of the controller in the environment; in other embodiments, the identification code may also be obtained by the terminal device by connecting to the wireless router of the current venue and, according to the controller marker, searching a background database.
  • Step S430 Establish a communication connection with the controller based on the identification code.
  • the identification code is a credential for authentication between the terminal device and the controller.
  • the identification code can be directly used as the encoding information for establishing a communication connection between the terminal device and the controller; in other embodiments, the identification code may be used only for pairing, and after the terminal device confirms the communication object (the controller) according to the identification code, it can establish a communication connection with that object through other means.
  • For example, multiple exhibition booths with controllers are usually set up in an exhibition hall. The terminal device collects, through the camera, an image of the marker on the controller at a booth to obtain the identity information of the marker, then obtains the corresponding controller identification code and establishes a communication connection with the controller.
  • the terminal device can perform data transmission with the controller, and the user can manipulate the controller to interact with the virtual content displayed in the display module of the terminal device.
  • data can be shared between multiple terminal devices through the router in the venue and the content updated in real time, so that multiple people can interact in the same virtual scene.
  • the terminal device scans the marker on the controller, and then it can automatically connect with the controller to interact with the virtual content.
  • the operation is simple, and the user's interaction with the virtual content is made more convenient.
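  • As a minimal sketch of steps S410 to S430, the following Python snippet wires the three stages together; identify_marker, scan_broadcasts, and open_link are hypothetical stand-ins for camera, radio, and link-layer code, not functions named in this application:

```python
# Minimal sketch of the S410-S430 flow; all helper callables are
# hypothetical placeholders for hardware- and stack-specific logic.

CONTROLLER_MARKER_CODES = {"7", "8", "9"}   # assumed marker codes of controllers

def connect_to_controller(identify_marker, scan_broadcasts, open_link, image):
    marker_code = identify_marker(image)          # S410: identify the marker
    if marker_code not in CONTROLLER_MARKER_CODES:
        return None                               # not a controller marker
    for ident_code in scan_broadcasts():          # S420: scan identification codes
        if ident_code.endswith(marker_code):      # code pairs with this marker
            return open_link(ident_code)          # S430: establish the connection
    return None

# Usage with toy stand-ins:
link = connect_to_controller(
    identify_marker=lambda img: "7",
    scan_broadcasts=lambda: ["0xF0008", "0xF0007"],
    open_link=lambda code: f"connected via {code}",
    image=None,
)
print(link)  # connected via 0xF0007
```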
  • FIG. 5 shows another communication connection method according to an embodiment of the present application.
  • the method includes steps S510 to S530.
  • Step S510 Collect an image containing a marker and identify the marker in the image.
  • the terminal device obtains the identity information of the marker in the image, and can search the identity information in the database to determine the category to which the marker belongs.
  • Step S520 When the marker is a marker of a controller, obtain an identification code of the controller corresponding to the marker.
  • step S520 may further include steps S520a, S520b, and S520c.
  • Step S520a Scan the identification code broadcasted by the controller.
  • the terminal device may scan for an identification code (which may be an RFID) broadcasted by the controller (for example, via Bluetooth).
  • the user can press the communication button on the controller to make the controller enter a connectable state and broadcast its identification code. Alternatively, the user need not operate the controller: the controller can broadcast the identification code in real time, and after the terminal device starts the scanning function, it can scan the identification code broadcast by the controller in real time. In other embodiments, the terminal device may also keep the scanning function always enabled to scan for identification codes.
  • connection prompt information may be displayed, or played by voice, prompting the user to operate the controller so that it enters a connectable state and broadcasts its identification code.
  • Step S520b Match the scanned identification code with the marker.
  • Step S520c When the matching is successful, determine that the scanned identification code is the identification code of the controller corresponding to the marker.
  • the identification code may be a 16-bit UUID (Universally Unique Identifier), and may include a code corresponding to the marker of the controller broadcasting the identification code together with vendor-specific information. For example, when the code of the marker set on the controller is "7", the identification code broadcast by the controller may be "0xF0007", where "0xF000" is the vendor-specific information corresponding to the controller. By adding vendor-specific information to the identification code, different types of controllers can easily be distinguished.
  • the terminal device matches the identification code with the marker, which may be comparing the code of the marker contained in the identification code with the code of the identified marker.
  • the terminal device determines that the scanned identification code is an identification code broadcasted by the controller corresponding to the collected marker, and can communicate with the controller through the identification code.
  • the terminal device determines that the scanned identification code is broadcast by another controller, and may discard the identification code and perform scanning again.
  • the terminal device may scan multiple identification codes at the same time, and the scanned identification codes may be matched with the identified markers one by one to determine the identification code corresponding to the controller controlled by the user.
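  • A minimal sketch of steps S520b and S520c, assuming the "0xF000" vendor prefix and single-digit marker code from the example above; a real code layout would be vendor-defined:

```python
# Compare the marker code embedded in each scanned identification code
# against the marker identified from the image (steps S520b/S520c).

VENDOR_PREFIX = "0xF000"   # assumed vendor-specific information

def match_identification_code(scanned_codes, identified_marker_code):
    """Return the first scanned code whose embedded marker code matches."""
    for code in scanned_codes:
        if not code.startswith(VENDOR_PREFIX):
            continue                               # different controller type
        embedded = code[len(VENDOR_PREFIX):]       # marker code after the prefix
        if embedded == identified_marker_code:
            return code                            # matching succeeded
    return None                                    # discard all and scan again

print(match_identification_code(["0xF0008", "0xF0007"], "7"))  # -> 0xF0007
```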
  • the identification code may further include a scene identifier of the scene in which the controller is currently located, and different scenes may correspond to different scene identifiers. For example, the scene identifier corresponding to a game scene may be 001, the scene identifier corresponding to an education scene may be 005, and so on.
  • the scene identifier may be part of the identification code, and the terminal device parses the scanned identification code to obtain the scene identifier.
  • the terminal device can match the scene identifier contained in the identification code against the scene identifier of the current scene; when the two are consistent, the identification code scanned by the terminal device was broadcast by a controller in the current scene rather than by a controller in another scene. The identification code is then matched with the identified marker, which avoids a misconnection between the terminal device and a controller that is being paired in another scene.
  • the identification code may be matched with the identified marker first, and the scene identification matching may be performed after the matching is successful.
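  • Extending the sketch above with the scene-identifier check; the field layout (vendor prefix, then a three-digit scene identifier, then the marker code) is an assumption for illustration, since the application does not fix one:

```python
# Two-stage acceptance test: the embedded scene identifier must match the
# scene the terminal device is currently in, and the embedded marker code
# must match the identified marker.

VENDOR_PREFIX = "0xF000"

def accept_code(code, marker_code, current_scene_id):
    if not code.startswith(VENDOR_PREFIX):
        return False
    payload = code[len(VENDOR_PREFIX):]
    scene_id, embedded_marker = payload[:3], payload[3:]  # assumed layout
    return embedded_marker == marker_code and scene_id == current_scene_id

# A controller with marker code "7" broadcasting in game scene "001":
print(accept_code("0xF0000017", "7", "001"))  # True
print(accept_code("0xF0000057", "7", "001"))  # False: broadcast from scene "005"
```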
  • each scene area may be provided with a scene marker (for example, at the entrance of an exhibition hall or the doorway of a room). When the terminal device enters a scene, the camera collects an image of the scene marker so that the terminal device can identify it and obtain the scene identifier of the scene it is in.
  • when the terminal device recognizes that the captured image contains a scene marker, a virtual scene corresponding to the scene marker is created and displayed by the display module, and the user can observe the virtual scene superimposed on the real scene.
  • a router may be provided in each scene, and after the terminal device enters the scene, it may connect to the router corresponding to the scene in which it is currently located, so that the virtual content data corresponding to that scene may be downloaded from the server to construct and display the virtual content.
  • the terminal device recognizes the scene marker, and can obtain the network connection password corresponding to the current scene according to the scene marker, and perform network connection with the router corresponding to the current scene through the network connection password.
  • the network connection password of each scene's router may correspond to the scene marker.
  • the network connection password may be the identity information of the scene marker of the corresponding scene, that is, the scene identifier, or a string corresponding to it.
  • the terminal device can obtain a scene identifier according to the identified scene marker and obtain the network connection password corresponding to the scene identifier, so as to connect with the router of the current scene.
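  • A minimal sketch of this lookup, assuming illustrative marker-to-scene and scene-to-password tables (the "010"/"01" values follow the ocean-scene example later in the text) and a hypothetical connect callable standing in for the platform Wi-Fi API:

```python
SCENE_ID_BY_MARKER = {"010": "01", "020": "02"}    # marker identity -> scene id
PASSWORD_BY_SCENE = {"01": "ocean-wifi-key",       # scene id -> router password
                     "02": "fashion-wifi-key"}

def join_scene_network(marker_identity, connect):
    scene_id = SCENE_ID_BY_MARKER[marker_identity]
    password = PASSWORD_BY_SCENE[scene_id]
    return connect(ssid=f"scene-{scene_id}", password=password)

print(join_scene_network("010", lambda ssid, password: f"joined {ssid}"))
```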
  • the wireless router in the scenario may form a wireless mesh network with multiple controllers.
  • the terminal device can access the wireless mesh network corresponding to the current scene through the wireless router to obtain the status of each node (controller) in the current scene.
  • the terminal device can first determine whether another controller in the wireless mesh network is pairing. When another controller is pairing, the terminal device can display a waiting prompt message, prompting the user to wait for that pairing to complete.
  • when no other controller is pairing, the terminal device can enable the scanning function, and the controller that enters the connectable state can broadcast an identification code to pair with the terminal device.
  • at any time in the same scene, only one terminal device has scanning enabled and only one controller broadcasts its identification code, which ensures that no misconnection occurs between terminal devices and controllers in the same scene.
  • while the controller broadcasts the identification code, it can also broadcast its entry into the pairing state to the entire wireless mesh network, so that other devices in the wireless mesh network know that the controller has entered the pairing state.
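  • A sketch of this pairing gate; mesh_node_states is a hypothetical query, via the router, returning one state per controller in the scene:

```python
def try_start_pairing(mesh_node_states, start_scan, show_wait_prompt):
    states = mesh_node_states()                 # e.g. {"ctrl-1": "idle", ...}
    if any(s == "pairing" for s in states.values()):
        show_wait_prompt()                      # another controller is mid-pairing
        return False
    start_scan()                                # safe: no concurrent pairing
    return True

ok = try_start_pairing(
    mesh_node_states=lambda: {"ctrl-1": "idle", "ctrl-2": "pairing"},
    start_scan=lambda: print("scanning for identification codes"),
    show_wait_prompt=lambda: print("please wait: another pairing in progress"),
)
print(ok)  # False
```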
  • FIG. 6 is a schematic diagram of a wireless mesh network according to an embodiment.
  • a plurality of controllers 50 can establish a communication connection with each other.
  • a wireless router 60 can form a wireless mesh network with a plurality of controllers 50, and the wireless router 60 feeds back the status of each controller 50 to the server 40 in the background; maintenance personnel can check the status of the controllers 50 in the various real-world scenes through the background console, such as whether charging, battery replacement, repair, or recovery from loss is needed, which is convenient for timely maintenance of the devices.
  • the wireless router 60 can have both Wi-Fi and Bluetooth communication functions: the wireless router 60 and the terminal device 20 can be connected wirelessly via Wi-Fi; the wireless router 60 and the controllers 50, and the controllers 50 with one another, can establish wireless communication connections through Bluetooth Mesh and form a wireless mesh network; and the terminal device 20 and the controller 50 can establish a wireless communication connection via Bluetooth Low Energy (BLE).
  • the communication connection may also be established in other ways, which is not specifically limited herein.
  • Step S530 Establish a communication connection with the controller based on the identification code.
  • the terminal device may detect the position of the controller, and when it detects that the controller is located at a preset position or that the controller follows a preset motion trajectory, it determines that the controller is one requiring a communication connection.
  • the preset position may be a spatial position or spatial area in which the controller is allowed to enter a connectable state; the marker on the controller is then matched with the scanned identification code to establish communication with the controller.
  • the terminal device collects an image containing a controller marker through a camera, and the controller marker in the image can be identified to obtain the relative position and attitude information between the controller marker and the terminal device.
  • the terminal device automatically starts a scanning function and establishes a communication connection with the controller.
  • a connection result prompt message may be displayed, indicating whether the current terminal device connected to the controller successfully or the connection failed.
  • each paired controller in the wireless mesh network broadcasts the pairing end information to the wireless mesh network at the end of pairing, and the terminal device can obtain the pairing result of the controller through the connected wireless router.
  • other terminal devices may also obtain the pairing end information and display a connection result prompt message, which is used to prompt other users that similar devices have been paired.
  • when the terminal device detects that the controller has been placed back in its initial position, for example its placement position on the booth, the controller can be considered finished with, and the communication connection with the controller is disconnected.
  • when it is detected that the attitude information collected by the controller through its IMU (Inertial Measurement Unit) has not changed for a period of time, the controller may be considered to be in an unused state, and the communication connection with the controller is then disconnected.
  • the communication connection with the original controller may be disconnected and a communication connection re-established with a new controller located at the preset position, completing the replacement of one controller with another.
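  • A sketch of the two disconnect triggers just described; the idle window, position tolerance, and pose-comparison rule are assumptions for illustration:

```python
import time

IDLE_SECONDS = 60.0        # assumed idle window before disconnecting
POSITION_TOLERANCE = 0.05  # metres; assumed "back on the booth" tolerance

class ControllerLink:
    def __init__(self, initial_position):
        self.initial_position = initial_position
        self.last_motion_time = time.monotonic()

    def on_imu_sample(self, attitude_changed):
        """Record the last time the controller's IMU attitude changed."""
        if attitude_changed:
            self.last_motion_time = time.monotonic()

    def should_disconnect(self, current_position):
        returned = all(abs(c - i) < POSITION_TOLERANCE
                       for c, i in zip(current_position, self.initial_position))
        idle = time.monotonic() - self.last_motion_time > IDLE_SECONDS
        return returned or idle                # either trigger disconnects

link = ControllerLink(initial_position=(0.0, 0.0, 0.0))
print(link.should_disconnect((0.01, 0.0, 0.02)))  # True: back at the booth
```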
  • the communication connection method provided in the foregoing embodiment avoids a situation of incorrect connection and improves the accuracy of matching between the terminal device and the controller.
  • a wireless communication system 100 includes at least one marker, at least one controller 50, a terminal device 20, a wireless router 60, and a server 40 distributed in a real environment.
  • the wireless router 60 may establish a communication connection with the terminal device 20 and the controller 50, and also communicate with the server 40 of the background maintenance data center.
  • the marker may include a scene marker 31, a content display marker 32, a controller marker 33, and the like, and different types of markers are used to implement different functions.
  • the scene marker 31 may be set at the entrance of each scene for the terminal device 20 to recognize it and display a virtual scene corresponding thereto.
  • For example, in a multi-theme AR/VR pavilion there are multiple exhibition themes such as ocean, grassland, and starry sky; different themes correspond to different areas in the pavilion, and a scene marker corresponding to the area's theme is set at the entrance of each area.
  • after the terminal device recognizes the scene marker at the entrance of the ocean-themed area, it can construct an ocean-related virtual scene based on the scene marker and display it to the user through the display module;
  • when the user moves from the ocean-themed area to the starry-sky-themed area, after the terminal device recognizes the scene marker at that area's entrance, it can build a starry-sky-related virtual scene based on the scene marker to replace the previous ocean-related virtual scene, and present the starry-sky-related virtual scene to the user through the display module.
  • after identifying the scene marker 31, the terminal device 20 can also obtain information such as the connection password of the wireless router 60 in the scene to which the scene marker 31 belongs, so as to establish a communication connection with the wireless router 60 in the current environment.
  • the content display markers 32 can be set on various booths in a real environment, and the terminal device 20 can identify the content display markers 32 and display corresponding virtual objects, for example, display virtual exhibits, exhibit introductions, and the like.
  • the controller marker 33 is provided on each controller 50.
  • the terminal device 20 can recognize the controller marker 33 to obtain information such as the position and posture of the controller 50.
  • after identifying the controller marker 33, the terminal device 20 can also display a virtual object corresponding to the controller marker 33 that interacts with other virtual content. For example, in a game scenario, the terminal device 20 displays a corresponding virtual game item according to the controller marker 33, and the user may manipulate the controller 50 to make the virtual game item interact with other virtual content.
  • the terminal device 20 can identify the marker and obtain its identity information to determine the type of the marker (scene marker 31, content display marker 32, controller marker 33, etc.). In some embodiments, the terminal device 20 may establish a communication connection with the controller 50 after identifying the controller marker 33.
  • a method for displaying virtual content according to an embodiment of the present application is applied to the foregoing terminal device, and includes steps S810 to S830.
  • Step S810 Identify scene markers to determine the current scene where the terminal device is located.
  • the scene markers are set at the entrance of the real scene area. Different scene markers can be set for different scenes, and the scene markers can correspond to the scene one by one.
  • when the terminal device recognizes a scene marker, the scene corresponding to the identified scene marker can be obtained, that is, the scene in which the terminal device is currently located.
  • the terminal device may further obtain position and posture information of the terminal device with respect to the scene marker according to an image including the scene marker to determine the position and posture of the terminal device in the entire real environment.
  • Step S820 Acquire scene data matching the current scene from the server corresponding to the current scene.
  • after the terminal device determines the current scene, it can obtain scene data matching the current scene from the server corresponding to that scene. For example, when the current scene where the terminal device is located is an ocean-themed scene, the terminal device may establish a communication connection with the server corresponding to the ocean-themed scene and download scene data related to the ocean theme from that server.
  • the scene data may include modeling data, which may be used to construct and render virtual content that matches the current scene, and may include vertex data, textures, maps, and the like that construct three-dimensional virtual content.
  • when the current scene is an ocean-themed scene, the scene data may include three-dimensional model data of a virtual underwater world, and model data of virtual marine life such as coral reefs, fish schools, and marine plants.
  • Step S830 Construct virtual content according to the scene data.
  • after the terminal device obtains the scene data corresponding to the current scene, it can load the scene data, construct virtual content corresponding to the current scene according to the scene data, and display it through the display module.
  • the virtual content may include a virtual scene and a virtual object.
  • the virtual object may be a static virtual object or a dynamic virtual object.
  • For example, when the terminal device is in an ocean-themed scene, it downloads scene data matching the ocean-themed scene from the server, constructs and displays a three-dimensional virtual ocean scene based on the scene data, and superimposes in the three-dimensional ocean scene static virtual objects, such as coral reefs and sunken ships, and dynamic virtual objects, such as fish schools and marine plants.
  • Similarly, in a fashion-themed scene, scene data matching the fashion-themed scene can be downloaded from the server, a three-dimensional virtual stage scene constructed and displayed according to the scene data, and static virtual objects such as art posters and clothing, and dynamic virtual objects such as fashion catwalks and lights, superimposed in the three-dimensional stage scene.
  • the user can also interact with the virtual content through other methods, such as gestures, operating controllers, etc., and can also synchronize data with other terminal devices through the server to achieve multi-person interaction in the same virtual scene.
  • a service desk for applying for the use of a terminal device may also be provided.
  • the user may apply to use a terminal device at the service desk, and the user or service personnel may configure the terminal device, which may include user settings, wireless configuration, controller matching, hardware installation, software setup and startup, and the like; alternatively, the terminal device can perform automatic configuration.
  • the terminal device can obtain user information to authenticate the user's identity.
  • the virtual content associated with the current scene is automatically displayed by identifying a scene marker set in a specific scene.
  • a method for displaying virtual content includes the following steps.
  • Step S910 Acquire an image including a scene marker.
  • Step S920 Obtain the identity information of the scene marker based on the image.
  • the terminal device can collect the image containing the scene marker through the camera, identify the scene marker contained in the image, and obtain the identity information corresponding to the scene marker.
  • the scene markers set in different scenes are different, and the corresponding identity information also differs.
  • Step S930 Determine the current scene where the terminal device is located according to the identity information.
  • Each scene has a different scene marker. The scene identifier corresponding to the identity information of the scene marker can be obtained, and the scene where the terminal device is currently located determined according to that scene identifier; different scenes have different scene identifiers, each corresponding to the identity information of its scene marker. For example, the scene marker set in the ocean scene is different from the scene marker set in the fashion scene: the identity information of the ocean scene's marker is "010" and the scene identifier of the ocean scene is "01", while the identity information of the fashion scene's marker is "020" and the scene identifier of the fashion scene is "02". The terminal device can obtain the corresponding scene identifier according to the identity information of the identified scene marker, so as to determine whether the current scene is the ocean scene, the fashion scene, or another themed scene.
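  • A toy sketch of step S930 using the "010"/"01" example above; the lookup tables and server addresses are illustrative placeholders:

```python
SCENE_BY_IDENTITY = {"010": "01", "020": "02"}     # marker identity -> scene id
SERVER_BY_SCENE = {"01": "ocean.example.local",    # scene id -> scene server
                   "02": "fashion.example.local"}

def resolve_scene(identity_info):
    scene_id = SCENE_BY_IDENTITY.get(identity_info)
    if scene_id is None:
        raise LookupError(f"unknown scene marker identity: {identity_info}")
    return scene_id, SERVER_BY_SCENE[scene_id]

print(resolve_scene("020"))  # ('02', 'fashion.example.local')
```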
  • Step S940 Establish a connection with the server corresponding to the current scene according to the identity information.
  • the terminal device may connect with the server corresponding to the current scene according to the obtained identity information of the scene marker.
  • each scene may be covered with a wireless network corresponding to the scene, and the terminal device accesses the wireless network corresponding to the scene through a wireless router corresponding to each scene, and establishes a communication connection with the server.
  • a server may correspond to a single scene or multiple different scenes.
  • through the wireless networks of their respective scenes, terminal devices located in different scenes may separately connect to the server and request to download the scene data corresponding to their scenes.
  • the terminal device may obtain the network connection password corresponding to the current scene according to the identity information of the scene marker, and perform network connection with the router corresponding to the current scene through the network connection password.
  • Step S950 Acquire scene data matching the current scene from the server corresponding to the current scene.
  • the scene data may include a spatial map and modeling data, where the spatial map may be a virtual map (two-dimensional or three-dimensional) constructed according to the real environment and may be used to position the terminal device in real space.
  • the position and posture information of the terminal device in the real scene can be obtained.
  • the position information includes the position coordinates of the terminal device in the real scene, where the position coordinates can be coordinates in a spatial coordinate system established with the scene marker as the origin. In addition to the position coordinates, the position information can also include the area of the scene in which the terminal device is located, which can be obtained through the spatial map.
  • the attitude information may be information such as the rotation and orientation of the terminal device.
  • the terminal device collects an image containing a scene marker through the camera, recognizes the scene marker, and obtains the relative position and attitude information between the terminal device and the scene marker; it then obtains the position of the scene marker in the real scene from the spatial map, and determines the position and posture information of the terminal device in the real scene based on the relative position and posture information together with the scene marker's position in the real scene.
  • when the terminal device moves in the scene, it can also collect images of content markers and obtain its position and posture information in the real scene according to the collected content markers; when no image of a scene marker or content marker can be collected, the terminal device can obtain its position and attitude information in the real scene in real time through VIO (Visual-Inertial Odometry).
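  • A sketch of this fallback chain; the three pose sources are hypothetical callables returning a (position, attitude) pair, or None when the corresponding marker is not in view:

```python
def localize(pose_from_scene_marker, pose_from_content_marker, pose_from_vio):
    for source in (pose_from_scene_marker, pose_from_content_marker):
        pose = source()
        if pose is not None:
            return pose           # a marker is visible: use marker-relative pose
    return pose_from_vio()        # no marker in view: fall back to VIO

pose = localize(
    pose_from_scene_marker=lambda: None,     # scene marker out of view
    pose_from_content_marker=lambda: None,   # content marker out of view
    pose_from_vio=lambda: ((1.2, 0.0, 3.4), (0.0, 90.0, 0.0)),
)
print(pose)  # VIO pose: ((1.2, 0.0, 3.4), (0.0, 90.0, 0.0))
```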
  • Step S960 Render the virtual content according to the scene data.
  • the terminal device can load scene data that matches the current scene, and construct a three-dimensional model of virtual content according to the model data contained in the scene data, including three-dimensional models of virtual scenes and virtual objects that match the real scene.
  • the three-dimensional model of the virtual content may be rendered in real time according to the position and posture information of the terminal device in the real scene, and the rendered virtual content displayed.
  • the rendering coordinates of the virtual content in the virtual space can be fixed relative to the origin of the world coordinates, with the rendering coordinates related to the real scene; the displayed virtual content can be fixed to the position of a marker set in the scene, or matched with different areas of the real scene, and so on.
  • For example, when the terminal device recognizes the scene marker at the entrance of the ocean scene, it can obtain its position and posture information relative to that scene marker, and render and display a virtual entrance guide that the user can observe; when the user stands at different positions in the ocean scene or rotates the terminal device's viewing angle, the terminal device can render and display different virtual ocean scenery and virtual marine creatures.
  • Step S970 Display the virtual content.
  • the method for displaying virtual content in the foregoing embodiment renders virtual content in real time according to the position and posture information of the terminal device.
  • as the position and posture of the terminal device change, different virtual content can be displayed, which enriches the visual effects of AR/VR and improves the sense of realism and immersion.
  • the above method for displaying virtual content may further include the following steps.
  • Step S1010 Obtain the relative spatial position relationship between the terminal device and the preset scene based on the position and posture information in the real space to determine the orientation information of the preset scene relative to the terminal device.
  • Multiple scenes can be set in the real space in advance, and the position and posture information of the terminal device in the real space is obtained according to the spatial map corresponding to the real space, so that the orientation information of each preset scene relative to the terminal device can be determined, where the orientation information includes the direction and distance of the preset scene relative to the terminal device.
  • Step S1020 The scene icon corresponding to the preset scene is superimposed and displayed on the area corresponding to the orientation information.
  • the terminal device can superimpose the corresponding scene icon on the area of the field of view corresponding to the preset scene's orientation information.
  • the field of view of the terminal device can be understood as the range the user's eyes can see through the terminal device.
  • scene icons can be used to identify different scenes; for example, a scene icon can be the name, pattern, or number of the scene.
  • the scene icon can be superimposed at the position of the preset scene within the field of view, or on the area matching the preset scene's direction; from the scene icons, the user can accurately know which scenes are in the field of view and where each scene is located.
  • the field of view 1100 includes scene 1 and scene 2, where scene 1 is an ocean scene and scene 2 is a starry sky scene; scene 1 is in the upper left position relative to the terminal device, scene 2 is in the upper right position relative to the terminal device, and scene 2 is farther away than scene 1.
  • the scene icon 1102 of the ocean scene can be superimposed at the position of scene 1, and the scene icon 1102 of the starry sky scene superimposed at the position of scene 2.
  • alternatively, the scene icon 1104 of the ocean scene can be superimposed in the area between the terminal device and the direction of scene 1 (upper left), and the scene icon 1104 of the starry sky scene superimposed in the area between the terminal device and the direction of scene 2 (upper right).
  • a scene icon corresponding to a preset scene outside the field of view may also be displayed; the scene icon may be superimposed on the edge of the field of view corresponding to the preset scene's orientation information, to help the user quickly find the position of a preset scene outside the field of view.
  • For example, the user can see scene 1 included in the field of view 1100 through the terminal device, while scene 2 is outside the field of view 1100 in the upper right position relative to the terminal device; the scene icon of the ocean scene is superimposed at the position of scene 1, and the scene icon 1102 of the starry sky scene is displayed at the right edge of the field of view 1100.
  • the expression form of the scene icon and the position of the superimposed display are not limited to the ones described above, and are not limited herein.
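  • A two-dimensional sketch of this placement rule, assuming a 90-degree horizontal field of view: if the preset scene's bearing falls inside the field of view, the icon is anchored at that bearing; otherwise it is clamped to the nearer edge:

```python
HALF_FOV_DEG = 45.0   # assumed half of the horizontal field of view

def icon_bearing(scene_bearing_deg):
    """Return (bearing to draw the icon at, whether it was clamped to an edge)."""
    if -HALF_FOV_DEG <= scene_bearing_deg <= HALF_FOV_DEG:
        return scene_bearing_deg, False          # scene inside the field of view
    clamped = max(-HALF_FOV_DEG, min(HALF_FOV_DEG, scene_bearing_deg))
    return clamped, True                         # show the icon at the FOV edge

print(icon_bearing(-30.0))  # (-30.0, False): icon over the scene itself
print(icon_bearing(70.0))   # (45.0, True):  icon on the right edge
```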
  • a scene icon of a preset scene when displaying a scene icon of a preset scene, other information such as the actual distance between the terminal device and the preset scene may also be displayed.
  • the terminal device when the terminal device is inside the scene, only the scene icon of the scene may be displayed, or the scene icon may be temporarily hidden to avoid affecting the user's viewing.
  • the posture information of the terminal device may be detected, and the orientation of the terminal device may be determined according to the posture information.
  • the orientation may be used to indicate the orientation of the human eye of the user who wears the terminal device.
  • scene description information corresponding to a preset scene with matching orientation information may be displayed, where the scene description information may include the name of the preset scene, an introduction (which may include text and video), popularity (the number of visitors within a certain period), estimated arrival time (the time it takes the user to walk to the preset scene), and estimated queue time (if there are too many visitors, queuing may be required), but is not limited to these.
  • the orientation of the terminal device may be considered to be consistent with the orientation information of the preset scene.
  • the user can see that scene 1 is included in the field of view 1100 through the terminal device; the terminal device can superimpose the scene icon 1102 of the ocean scene at the position of scene 1, and display scene description information 1106 for scene 1, which may include the current number of visitors, the estimated waiting time, and a brief introduction to the scene.
  • the terminal device can also use its audio output unit to describe the scene to the user through sound.
  • a scene icon of the scene may be displayed, with its appearance varying with distance: when the terminal device is closer to the entrance or exit of the scene, the transparency of the scene icon may be reduced or the scene icon enlarged, making the displayed icon more visible; when the terminal device is farther from the entrance or exit of the scene, the transparency of the scene icon may be increased or the scene icon reduced.
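  • A sketch of this distance-driven styling; the distance range and the opacity and scale bounds are assumptions for illustration:

```python
NEAR_M, FAR_M = 1.0, 20.0   # assumed "near entrance" and "far away" distances

def icon_style(distance_m):
    """Nearer to the scene entrance -> more opaque and larger icon."""
    t = (min(max(distance_m, NEAR_M), FAR_M) - NEAR_M) / (FAR_M - NEAR_M)
    opacity = 1.0 - 0.7 * t     # 1.0 when near, 0.3 when far
    scale = 1.5 - 1.0 * t       # 1.5x when near, 0.5x when far
    return opacity, scale

print(icon_style(1.0))   # (1.0, 1.5)
print(icon_style(20.0))  # approximately (0.3, 0.5)
```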
  • when the terminal device recognizes a new scene marker, this indicates that the scene where the terminal device is located has changed. According to the new scene marker, a connection can be established with the server corresponding to the new scene, the new scene's data downloaded from that server, and virtual content constructed for display. When the scene where the terminal device is located changes, the scene icon corresponding to the new scene may be obtained, and the displayed icon of the previous scene replaced with the icon of the new scene.
  • the terminal device may also upload the operation record of the user during use to the server in the form of a log.
  • the operation record may include the scenes accessed by the terminal device, the interactive actions performed, and the like, which is convenient for later analysis of user preferences and optimization of the virtual display experience.
  • the content display method provided in the foregoing embodiment displays a scene icon for each scene to identify and guide users to the scenes, which can enrich the visual effects of AR/VR and improve the sense of realism and immersion.
  • the present application further provides an information prompting method.
  • a terminal device collects a target image including a marker through a camera, and obtains the relative spatial position relationship between the terminal device and the marker according to the target image. When the relationship meets a preset condition, prompt information is generated, where the preset condition is a condition on at least one of the position and the posture of the marker.
  • the relative spatial position relationship includes the target distance between the terminal device and the marker.
  • the terminal device can determine whether the target distance exceeds a first distance threshold, and generate prompt information when the first distance threshold is exceeded.
  • the terminal device can analyze the contour size of the marker in the target image, and look up the distance corresponding to that contour size in the correspondence between distance and contour size, to determine the target distance between the terminal device and the marker.
  • the target distance can also be obtained in real time.
  • a depth camera can be used to generate a real-time map of the distance between the marker and the lens, so as to obtain the distance between the terminal device and the marker in real time; magnetic tracking, acoustic tracking, inertial tracking, optical tracking, or multi-sensor fusion may also be used to obtain the distance between the marker and the terminal device in real time, which is not specifically limited.
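  • A sketch of the contour-size lookup, interpolating the target distance from an invented calibration table of contour width in pixels against distance in metres, then testing the first distance threshold:

```python
# Calibration pairs: (contour width in pixels, distance in metres); invented.
CALIBRATION = [(200.0, 0.5), (100.0, 1.0), (50.0, 2.0), (25.0, 4.0)]
FIRST_DISTANCE_THRESHOLD_M = 3.0   # assumed threshold

def distance_from_contour(width_px):
    for (w_hi, d_hi), (w_lo, d_lo) in zip(CALIBRATION, CALIBRATION[1:]):
        if w_lo <= width_px <= w_hi:   # interpolate inside this bracket
            t = (w_hi - width_px) / (w_hi - w_lo)
            return d_hi + t * (d_lo - d_hi)
    raise ValueError("contour size outside calibration range")

def should_prompt(width_px):
    return distance_from_contour(width_px) > FIRST_DISTANCE_THRESHOLD_M

print(should_prompt(30.0))  # True: the marker appears small, hence far away
```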
  • the relative spatial position relationship may include the distance between the position of the marker and the boundary of the camera's visual range.
  • the terminal device can determine whether the distance between the position of the marker and the boundary of the camera's visual range is less than a second distance threshold, and generate prompt information when it is.
  • the visual range of the camera refers to the range in which the camera can capture images, and the boundary of the visual range refers to the edge of the area corresponding to the visual range.
  • L1 and L2 are the boundaries of the horizontal field of view, and L3 and L4 are the boundaries of the vertical field of view; the horizontal boundary of the target image can be taken as the horizontal field of view, and the vertical boundary as the vertical field of view. The position of the marker can be obtained by analyzing the pixel coordinates of the marker image in the target image. As one implementation, with the intersection of L1 and L4 taken as the origin of the target image, the distance between the position of the marker and the boundary of the camera's visual range can include the distance d1 between the marker and L1, the distance d2 between the marker and L4, the distance d3 between the marker and L2, and the distance d4 between the marker and L3; the smallest value among d1, d2, d3, and d4 is taken as the distance between the position of the marker and the boundary of the camera's visual range.
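  • A sketch of this computation with the intersection of L1 and L4 as the image origin; which image axis each boundary lies along is an assumption here:

```python
def boundary_distance(marker_px, image_width, image_height):
    """Smallest pixel distance from the marker to any field-of-view boundary."""
    x, y = marker_px
    d1 = y                        # distance to L1 (boundary through the origin)
    d2 = x                        # distance to L4 (boundary through the origin)
    d3 = image_height - y         # distance to L2 (opposite boundary)
    d4 = image_width - x          # distance to L3 (opposite boundary)
    return min(d1, d2, d3, d4)

SECOND_DISTANCE_THRESHOLD_PX = 40   # assumed threshold, in pixels
d = boundary_distance((610, 200), image_width=640, image_height=480)
print(d, d < SECOND_DISTANCE_THRESHOLD_PX)  # 30 True: marker near the edge
```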
  • the relative spatial position relationship includes posture information of the marker relative to the terminal device, the posture information including a rotation angle.
  • the terminal device can determine whether the rotation angle exceeds a preset rotation-angle value, and generate prompt information when it does.
  • the attitude information of the marker with respect to the terminal device includes information such as a rotation direction and a rotation angle of the marker.
  • the target feature points of the marker can be used to determine the posture information of the marker.
  • the target feature points are a specific number of feature points arbitrarily selected from all the feature points in the target image.
  • from the target feature points' pixel coordinates in the image and their real physical coordinates on the marker, information such as the position, rotation direction, and rotation angle of the marker relative to the terminal device can be obtained.
  • the posture information further includes a rotation direction.
  • the terminal device may obtain the preset rotation-angle value corresponding to the rotation direction, determine whether the rotation angle exceeds it, and generate prompt information when the rotation angle exceeds the preset rotation-angle value corresponding to the rotation direction.
  • the preset rotation-angle value is a critical angle set in advance; beyond it, the front of the marker (the side bearing the marker pattern) cannot be captured by the camera.
  • the terminal device may determine the position and posture changes of the marker from the marker's image positions in multiple frames of the target image, obtain predicted motion information of the terminal device and/or the marker from these changes, determine from the predicted motion information whether a preset condition is met, and generate prompt information when it is.
  • the predicted motion information may include predictions of motion direction, motion speed, and rotation direction.
  • the terminal device may obtain the history images containing the marker over several consecutive frames before the current target image and obtain the marker's pixel coordinates in each history image; from the marker's pixel coordinates in the consecutive frames, the marker's trajectory can be fitted.
  • the motion direction can be obtained from the change in the target distance between the terminal device and the marker, and whether to generate prompt information decided from the motion direction and the target distance together.
  • when the distance between the marker and the terminal device decreases, i.e. the marker moves toward the terminal device, no prompt information need be generated; when the distance increases, i.e. the marker moves away from the terminal device, prompt information can be generated.
  • when the distance between the position of the marker and the boundary of the camera's visual range is less than the second distance threshold, no prompt information need be generated if the marker is moving toward the center of the visual range; prompt information can be generated when the marker is moving toward the boundary of the visual range.
  • the terminal device can generate prompt information when at least one of the three relative spatial position relationships meets its preset condition; the three relationships can also be combined with one another to decide whether to generate prompt information.
  • the prompt information may include at least one of an image prompt, a voice prompt, and a vibration prompt.
  • the image prompt may be an arrow prompt, an expression prompt, or other forms of image prompts.
  • the voice prompt can be set according to the user's preference.
  • the sound can be the default sound, children's sound, star sound or the user's own sound.
  • a vibration prompt can produce its effect through a vibrator or the like.
  • the vibration can strengthen steadily the longer the prompt lasts.
  • the terminal device may use a virtual "sad" expression to warn the user that the marker is about to stop displaying normally; as shown in FIG. 13b, an "arrow" may warn the user that the marker is about to stop displaying normally.
  • when the marker is outside the camera's visual range, the degree-of-freedom information of the terminal device may be obtained in real time through a visual-inertial odometer, the position and direction of the marker relative to the terminal device determined from it, and prompt information generated from that position and direction.
  • while the marker is within the visual range, the current position of the terminal device can be used as the starting point, and the position change and posture of the terminal device relative to the starting point continuously computed through VIO.
  • when the marker leaves the visual range, the position change and posture of the terminal device relative to the starting point can be obtained and the starting point re-determined, so as to obtain the real-time position and posture of the marker.
  • the terminal device can obtain the rotation direction of the marker, obtain the preset angle value corresponding to that rotation direction, and determine whether the rotation angle exceeds the corresponding preset angle value.
  • when it is detected that the relative spatial position relationship between the terminal device and the marker satisfies a preset condition, the marker may become impossible to identify accurately, so prompt information is generated to remind the user to adjust the relative spatial position relationship between the terminal device and the marker.
  • this enables the marker to be identified accurately and improves the accuracy of the terminal device's display of virtual content.
  • a computer-readable storage medium stores program code, and the program code can be called by a processor to execute the method described in the foregoing embodiment.
  • the computer-readable storage medium may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM.
  • the computer-readable storage medium includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium has a storage space of a program code for performing any of the method steps in the above method. These program codes can be read from or written into one or more computer program products.
  • the program code may be compressed, for example, in a suitable form.

Abstract

The present application discloses a communication connection method, including: capturing an image containing a marker and identifying the marker in the image; when the marker is a controller marker, obtaining an identification code of the controller corresponding to the marker, the identification code being the code used for pairing when the controller establishes a communication connection; and establishing a communication connection with the controller based on the identification code.

Description

Communication connection method, terminal device, and wireless communication system
Technical Field
The present application relates to the field of computer technologies, and in particular to a communication connection method, a terminal device, and a wireless communication system.
Background
With the development of virtual reality (VR) and augmented reality (AR) technologies, VR/AR terminal devices have gradually entered people's lives and work. By wearing a VR/AR device, a user can observe a wide variety of three-dimensional virtual content and can interact with the displayed content through a controller or the like. Before interacting with a controller, however, the communication connection between the VR/AR device and the controller usually has to be established manually, which is a cumbersome operation.
Summary
In an embodiment of the present application, a communication connection method is provided, including: capturing an image containing a marker and identifying the marker in the image; when the marker is a controller marker, obtaining an identification code of the controller corresponding to the marker, the identification code being the code used for pairing when the controller establishes a communication connection; and establishing a communication connection with the controller based on the identification code.
In an embodiment of the present application, a wireless communication system is further provided, including: at least one marker; at least one controller on which the marker is arranged; and at least one terminal device configured to identify the marker arranged on the at least one controller, obtain an identification code of the at least one controller, and establish a communication connection with the at least one controller based on the identification code.
In an embodiment of the present application, a method for displaying virtual content is further provided, including: identifying a scene marker to determine the current scene in which a terminal device is located; obtaining, from a server corresponding to the current scene, scene data matching the current scene; and displaying virtual content according to the scene data.
In an embodiment of the present application, a system for displaying virtual content is further provided, including: at least one scene marker arranged in at least one scene; at least one server configured to store scene data of the at least one scene; and at least one terminal device configured to establish a communication connection with the at least one server, identify the scene marker, determine the current scene from the scene marker, obtain scene data matching the current scene from the connected server, and display virtual content according to the scene data.
In an embodiment of the present application, an information prompting method is further provided, including: obtaining a target image captured by a camera, the target image containing a marker; obtaining, from the target image, the relative spatial position relationship between the terminal device and the marker; and generating prompt information when the relative spatial position relationship meets a preset condition, the preset condition concerning at least one of the position and the posture of the marker.
In an embodiment, a terminal device is provided, including a memory and a processor coupled to the memory; the memory stores a computer program which, when executed by the processor, causes the processor to perform the method described above.
In an embodiment, a computer-readable medium is provided, in which program code is stored; the program code can be invoked by a processor to perform the method described above.
The details of one or more embodiments of the present application are set forth in the drawings and description below. Other features, objects, and advantages of the present application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings below show only some embodiments of the present application, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application scenario of a communication connection method in an embodiment;
FIG. 2 is a structural block diagram of a terminal device in an embodiment;
FIG. 3 is a schematic diagram of the communication connection between a terminal device and a server in an embodiment;
FIG. 4 is a flowchart of a communication connection method in an embodiment;
FIG. 5 is a flowchart of a communication connection method in another embodiment;
FIG. 6 is a schematic diagram of a wireless mesh network in an embodiment;
FIG. 7 is a schematic diagram of a wireless communication system in an embodiment;
FIG. 8 is a flowchart of a method for displaying virtual content in an embodiment;
FIG. 9 is a flowchart of a method for displaying virtual content in another embodiment;
FIG. 10 is a flowchart of displaying scene icons in an embodiment;
FIG. 11a is a schematic view of a screen displaying scene icons in an embodiment;
FIG. 11b is a schematic view of a screen displaying scene icons in another embodiment;
FIG. 11c is a schematic view of a screen displaying scene icons in yet another embodiment;
FIG. 11d is a schematic view of a screen displaying scene description information in an embodiment;
FIG. 12a is a schematic diagram of the distance between a marker and a terminal device in an embodiment;
FIG. 12b is a schematic diagram of the positional relationship between a marker and the boundary of a camera's visual range in an embodiment;
FIG. 12c is a schematic diagram of the distance between a marker and the boundary of a camera's field of view in an embodiment;
FIG. 12d is a schematic diagram of the posture information of a marker relative to a terminal device in an embodiment;
FIG. 13a is an interface diagram of a prompt in an embodiment;
FIG. 13b is an interface diagram of a prompt in another embodiment.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Referring to FIG. 1, an interactive system 10 provided in an embodiment of the present application includes a terminal device 20 and a controller 50. A marker 30 is arranged on the controller 50, and a camera is arranged on the terminal device 20. When the marker 30 is within the camera's visual range, the camera can capture an image containing the marker 30 and identify the marker 30 in the image. The terminal device 20 can determine that the marker 30 is the marker on the controller 50 and obtain the identification code of the controller 50 corresponding to the marker 30; the identification code can be the code used for pairing when the terminal device 20 establishes a communication connection with the controller 50, and the terminal device 20 can establish a communication connection with the controller 50 based on the identification code.
In some implementations, the marker 30 has a pattern with a topological structure, the topology referring to the connectivity between the sub-markers, feature points, and the like within the marker 30; the topology represents the identity information of the marker 30. The marker 30 may also be another pattern, which is not limited here, as long as it can be identified and tracked by the terminal device 20.
In some implementations, the terminal device 20 may be a head-mounted display, or a mobile device such as a mobile phone or a tablet. When the terminal device 20 is a head-mounted display, it may be an integrated head-mounted display or a head-mounted display connected to an external electronic device. The terminal device 20 may also be a smart terminal, such as a mobile phone, connected to an external or plug-in head-mounted display, i.e. the terminal device 20 serves as the processing and storage device of the head-mounted display and is plugged into or connected to the external head-mounted display so as to display virtual objects in the head-mounted display.
Referring to FIG. 2, in some embodiments, the terminal device 20 may include a processor 210 and a memory 220. The memory 220 stores one or more computer programs, which can be configured to be executed by the processor 210 to implement the methods described in the embodiments of the present application.
The processor 210 includes one or more processing cores. The processor 210 connects the various parts of the terminal device 20 via various interfaces and lines, and performs the functions of the terminal device 20 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 220 and invoking data stored in the memory 220. The processor 210 may be implemented in at least one of the hardware forms of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 210 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications, and so on; the GPU is responsible for rendering and drawing the display content; and the modem handles wireless communication. The modem may also not be integrated into the processor 210 and may instead be implemented by a separate communication chip.
The memory 220 includes random access memory (RAM) and read-only memory (ROM). The memory 220 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 220 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as touch, audio playback, and image playback), instructions for implementing the method embodiments below, and so on. The data storage area may also store data created by the terminal device 20 during use.
In an embodiment, the terminal device 20 is a head-mounted display, which further includes one or more of the following components: a display module, an optical module, a communication module, and a power supply. The display module may include a display control unit, which receives the image of the virtual content rendered by the processor and displays and projects it onto the optical module so that the user can view the virtual content through the optical module. The display module may be a display screen, a projection device, or the like for displaying images. The optical module may adopt an off-axis optical system or a waveguide optical system; after passing through the optical module, the image from the display module can be projected to the user's eyes, and the user sees the image projected by the display module through the optical module. In some implementations, the user can also observe the real environment through the optical module and experience the visual effect of the virtual content superimposed on the real environment. The terminal device communicates with an interactive device through the communication module to exchange information and instructions. The power supply powers the whole terminal device to ensure the normal operation of all its components.
In an embodiment, the camera 230 on the terminal device 20 is an infrared camera, and the marker 30 is covered with an infrared filter so that the marker pattern is invisible to the user. The emitted infrared light illuminates the marker 30 so that the camera 230 captures an image of the marker 30, which reduces the influence of visible light in the environment on the marker image and improves the accuracy of positioning and tracking.
Referring to FIG. 3, in an embodiment, the terminal device 20 can also communicate with a server 40 over a network. The terminal device 20 runs the client of an AR/VR application, and the server 40 runs the server side of the AR/VR application corresponding to the client. The server 40 stores the identity information of each marker, the virtual image data bound to the marker corresponding to that identity information, and so on, and the terminal device 20 can exchange data with the server 40.
Referring to FIG. 4, an embodiment of the present application further provides a communication connection method applied to the above terminal device 20; the method may include steps S410 to S430.
Step S410: capture an image containing a marker and identify the marker in the image.
The camera of the terminal device can capture images of markers within its visual range. As an example, a marker may include at least one sub-marker, a sub-marker being a pattern with a certain shape. The distribution rules of the sub-markers differ between markers, so each marker has distinct identity information. The terminal device obtains the identity information corresponding to a marker by identifying the sub-markers it contains; the identity information may be a code or other information that uniquely identifies the marker.
In some embodiments, the markers contained in a real scene may include, but are not limited to, scene markers, content display markers, and controller markers, where a scene marker can be identified by the terminal device to display the corresponding virtual scene, a content display marker can be identified by the terminal device to display the corresponding virtual content image, and a controller marker can be identified by the terminal device to obtain information such as the position and posture of the controller. Different kinds of markers correspond to different identity information.
Step S420: when the marker is a controller marker, obtain the identification code of the controller corresponding to the marker, the identification code being used for pairing when the controller establishes a communication connection.
The terminal device obtains the identity information of the marker. When it determines from this identity information that the marker is one arranged on a controller, the terminal device can obtain, from the image containing the marker, the identity of the controller on which the marker is arranged, as well as information such as the position and posture of the controller relative to the terminal device.
In an embodiment, the terminal device can obtain the identification code of the controller on which the marker is arranged, where the identification code may be the code used for pairing when the controller establishes a communication connection. The communication connection may be a wireless connection such as Bluetooth, Wi-Fi, infrared, or radio frequency, or another wireless or wired connection, which is not limited here. As one approach, when the controller connects to the terminal device via Bluetooth or the like, its identification code may be radio-frequency identity information (RFID); the controller can pair with the terminal device by broadcasting this RFID, and once pairing succeeds, a communication connection with the terminal device can be established.
In some implementations, the identification code may be obtained by the terminal device scanning the broadcast content of controllers in the environment; in other implementations, the identification code may also be obtained by the terminal device looking it up, according to the controller marker, in a back-end database via the wireless router corresponding to the current venue.
Step S430: establish a communication connection with the controller based on the identification code.
The identification code is the credential for identity authentication between the terminal device and the controller. In some implementations, the identification code may directly serve as the coded information for establishing the communication connection between the terminal device and the controller; in other implementations, the identification code may be used only for pairing, and after the terminal device has confirmed from the code which object (controller) it needs to connect to, the connection may be established with that object by other means.
As a possible application scenario, in a VR/AR museum, for example, an exhibition hall is usually furnished with multiple booths on which controllers are placed. When a user wearing a terminal device stands in front of a booth, the terminal device captures an image of the marker on the booth's controller with its camera to obtain the marker's identity information. When the marker is determined from its identity information to be a controller marker, the terminal device obtains the identification code of the corresponding controller and establishes a communication connection with it. After the connection is established, the terminal device can exchange data with the controller, and the user can operate the controller to interact with the virtual content shown in the terminal device's display module. Further, if multiple terminal devices in the venue have simultaneously connected to different controllers at other booths, data can be shared between the terminal devices and content synchronized in real time through the venue's router, enabling multi-user interaction within the same virtual scene.
The above examples are only some practical applications of the communication connection method provided in this embodiment; as VR/AR technology develops and spreads further, the method can play a role in many more practical scenarios.
With the communication connection method of the above embodiment, the terminal device can connect to the controller automatically by scanning the marker on the controller so as to interact with the virtual content. The operation is simple and improves the user's interactivity with the virtual content.
Referring to FIG. 5, another communication connection method provided in an embodiment of the present application includes steps S510 to S530.
Step S510: capture an image containing a marker and identify the marker in the image.
The terminal device obtains the identity information of the marker in the image and can look it up in a database to determine the category to which the marker belongs.
Step S520: when the marker is a controller marker, obtain the identification code of the controller corresponding to the marker.
In an embodiment, step S520 may further include steps S520a, S520b, and S520c.
Step S520a: scan for the identification code broadcast by the controller.
The terminal device can scan for and obtain the identification code (which may be an RFID) broadcast by the controller, for example via Bluetooth broadcast. In some implementations, the user can press a communication button on the controller to put it into a connectable state and make it broadcast its identification code. The user may also not need to operate the controller: the controller can broadcast its identification code in real time, and once the terminal device has enabled its scanning function it can scan the code broadcast by the controller in real time. In other embodiments, the terminal device may also keep its scanning function enabled at all times to scan for identification codes.
In some implementations, after the terminal device has identified a controller marker, it can display connection prompt information, or play it by voice, prompting the user to operate the controller so that it enters the connectable state and broadcasts its identification code.
Step S520b: match the scanned identification code against the marker.
Step S520c: when the match succeeds, determine that the scanned identification code is the identification code of the controller corresponding to the marker.
In some implementations, the identification code may be a 16-bit UUID (Universally Unique Identifier) containing the code of the marker of the controller broadcasting it together with vendor-specific information. For example, when the code of the marker arranged on a controller is "7", the identification code broadcast by that controller may be "0xF0007", where "0xF000" is the vendor-specific information of the controller. Including vendor-specific information in the identification code makes it easy to distinguish different types of controllers.
The terminal device matching the identification code against the marker may mean comparing the marker code contained in the identification code with the code of the identified marker. When the two are consistent, the terminal device determines that the scanned identification code is the one broadcast by the controller corresponding to the captured marker and can establish a communication connection with that controller using the code. When the two are inconsistent, the terminal device determines that the scanned identification code was broadcast by another controller, discards it, and scans again. As one approach, the terminal device may scan multiple identification codes at once and can match the scanned codes against the identified marker one by one to determine the identification code of the controller operated by the user.
In an embodiment, the identification code may also contain the scene identifier of the scene in which the controller is currently located; different scenes can correspond to different scene identifiers, e.g. a game scene corresponds to scene identifier 001 and an education scene to 005. The scene identifier can be part of the identification code, and the terminal device parses the scanned identification code to obtain it. The terminal device can match the scene identifier contained in the identification code against the scene identifier of its current scene; when the two are consistent, the scanned identification code was broadcast by a controller in the current scene rather than by a controller in another scene. After the scene identifiers match successfully, the identification code is matched against the identified marker, which prevents an erroneous connection between the terminal device and a controller that is pairing in another scene. In other embodiments, the identification code may first be matched against the identified marker and the scene identifiers matched afterwards.
In some embodiments, each scene area can be provided with a scene marker (e.g. at the entrance of an exhibition hall or the doorway of a room). When the terminal device enters a scene, it captures an image of the scene marker with its camera and identifies it to obtain the scene identifier of the current scene. Optionally, when the terminal device identifies that a captured image contains a scene marker, it builds the virtual scene corresponding to that scene marker and displays it through the display module, and the user can observe the virtual scene superimposed on the real scene.
In an embodiment, each scene can be provided with a router. After entering a scene, the terminal device can connect to the router corresponding to the current scene and thereby download the virtual content data corresponding to the current scene from the server, and build and display the virtual content. Having identified the scene marker, the terminal device can obtain from it the network connection password for the current scene and connect to the current scene's router using that password. As an implementation, the network connection password of each scene's router can correspond to the scene marker; for example, the password can be the identity information of the scene marker of the corresponding scene, i.e. the scene identifier, or a string mapped from the scene identifier. The terminal device can obtain the scene identifier from the identified scene marker and obtain the network connection password corresponding to the scene identifier so as to connect to the current scene's router.
In some embodiments, the wireless router in a scene can form a wireless mesh network with multiple controllers. After connecting to the wireless router, the terminal device can join the wireless mesh network of the current scene through the router to obtain the status of each node (controller) in the scene. When a controller enters the connectable state, the terminal device can first determine whether another controller in the wireless mesh network is already pairing; if so, it can display waiting prompt information asking the user to wait until the other controller has finished pairing. When no other controller is pairing, or the other controller has finished, the terminal device can enable its scanning function, and the controller that entered the connectable state can broadcast its identification code and pair with the terminal device. At any time, only one terminal device in a scene's wireless mesh network has scanning enabled and only one controller is broadcasting its identification code, which guarantees that no erroneous connection occurs between terminal devices and controllers within the same scene.
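The single-pairing rule above is essentially a gate on the scan. A minimal sketch, assuming hypothetical mesh-status and terminal APIs (none of these calls come from the source):

    import time

    def start_pairing_when_idle(mesh, terminal):
        """Wait until no other controller in the scene's mesh is pairing,
        then enable scanning; at most one pairing runs per mesh at a time."""
        while mesh.any_controller_pairing():   # hypothetical mesh-state query
            terminal.show_prompt("Please wait: another controller is pairing")
            time.sleep(0.5)                    # poll until the mesh is idle
        terminal.start_scan()                  # safe: no concurrent pairing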
As one approach, while broadcasting its identification code, the controller can broadcast to the whole wireless mesh network that it has entered the pairing state, so that the other devices in the mesh know a controller is already pairing.
FIG. 6 is a schematic diagram of a wireless mesh network in an embodiment. Multiple controllers 50 can establish communication connections with each other, and a wireless router 60 can form a wireless mesh network with the controllers 50. The status of each controller 50 is fed back through the wireless router 60 to the back-end server 40, so maintenance staff can check, via a back-end console, the status of the controllers 50 in each real-world scene, e.g. whether one needs charging or a battery replacement, has failed, or is lost, facilitating timely maintenance of the devices.
As one approach, the wireless router 60 can have both Wi-Fi and Bluetooth communication capability. The wireless router 60 and the terminal device 20 can connect wirelessly over Wi-Fi; the wireless router 60 and the controllers 50, as well as the controllers 50 among themselves, can connect wirelessly over Bluetooth Mesh and form the wireless mesh network; and the terminal device 20 and a controller 50 can connect wirelessly over Bluetooth BLE (Bluetooth Low Energy). Of course, the network connections between the devices may also be made in other ways, without specific limitation.
Step S530: establish a communication connection with the controller based on the identification code.
In an embodiment, the terminal device can detect the position of a controller. When it detects that the controller is at a preset position, or that the controller follows a preset motion trajectory, it determines that this controller is the one that needs a communication connection, where the preset position can be the spatial position or region in which a controller is allowed to enter the connectable state; the marker on this controller is then matched against the scanned identification code so as to establish a communication connection with the controller.
In some embodiments, the terminal device captures an image containing the controller marker with its camera and can identify the controller marker in the image to obtain the relative position and posture information between the controller marker and the terminal device. When it detects from this relative position and posture information that the controller is at the preset position, the terminal device automatically enables its scanning function and establishes a communication connection with the controller. By simply picking up the controller or performing a similar action, the user achieves the communication connection between the terminal device and the controller, which improves connection efficiency, makes the interaction flow more smoothly, and avoids erroneous connections between the controller and the terminal device.
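As an illustration of the preset-position test, a minimal sketch assuming the preset position is an axis-aligned box in the camera coordinate frame and that marker tracking yields the controller marker's translation vector (both are assumptions, not specified in the source):

    import numpy as np

    PRESET_MIN = np.array([-0.10, -0.10, 0.20])  # metres, hypothetical region
    PRESET_MAX = np.array([ 0.10,  0.10, 0.60])

    def controller_at_preset_position(marker_translation):
        """True when the tracked controller marker lies inside the preset region,
        i.e. the terminal device may enable scanning and start pairing."""
        p = np.asarray(marker_translation, dtype=float)
        return bool(np.all(p >= PRESET_MIN) and np.all(p <= PRESET_MAX))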
In some embodiments, when the terminal device obtains the pairing-finished information broadcast by the controller, it can display connection result prompt information indicating that the connection between the current terminal device and the controller succeeded or failed. Optionally, every controller in the wireless mesh network that pairs broadcasts pairing-finished information to the mesh when pairing ends, and the terminal device learns the controller's pairing result through the connected wireless router. As one approach, other terminal devices can also obtain this pairing-finished information and display connection result prompt information, informing other users that a device of the same kind has completed pairing.
As one approach, when the terminal device detects that the controller has been put back in its initial position, e.g. its placement position on the booth, the controller can be considered finished with, and the communication connection with it is disconnected.
In some implementations, when it is detected that the posture information collected by the controller's IMU (inertial measurement unit) has not changed for a period of time, the controller can be considered unused, and the communication connection with it is disconnected.
In other implementations, when a new controller is detected at the preset position, the communication connection with the original controller can be disconnected and a communication connection re-established with the new controller at the preset position, completing the replacement of one controller by another.
The communication connection method provided in the above embodiment avoids erroneous connections and improves the accuracy of matching between terminal device and controller.
Referring to FIG. 7, a wireless communication system 100 of an embodiment of the present application includes at least one marker distributed in the real environment, at least one controller 50, a terminal device 20, a wireless router 60, and a server 40. The wireless router 60 can establish communication connections with the terminal device 20 and the controller 50 respectively, and also with the server 40 of the back-end maintenance data center. In some embodiments, the markers may include scene markers 31, content display markers 32, controller markers 33, and the like; different categories of markers serve different functions.
As one approach, a scene marker 31 can be arranged at the entrance of each scene for the terminal device 20 to identify and to display the corresponding virtual scene. For example, a multi-theme AR/VR exhibition hall has several exhibition themes such as ocean, grassland, and starry sky; different themes correspond to different areas of the museum, and a scene marker 31 corresponding to an area's theme is arranged at its entrance. For example, after identifying the scene marker at the entrance of the ocean-themed area, the terminal device can build an ocean-related virtual scene based on that marker and present it to the user through the display module; when the user moves from the ocean-themed area to the starry-sky-themed area and the terminal device identifies the scene marker at the entrance of the starry-sky area, it can build a starry-sky virtual scene based on that marker, replace the previous ocean scene with it, and present the starry-sky scene to the user through the display module. In some implementations, after identifying the scene marker 31, the terminal device 20 can also obtain information such as the connection password of the wireless router 60 in the scene to which the marker 31 belongs, so as to establish a communication connection with the wireless router 60 of the current environment.
Content display markers 32 can be arranged on the booths in the real environment; the terminal device 20 can identify a content display marker 32 and display the corresponding virtual object, e.g. a virtual exhibit or an exhibit introduction.
A controller marker 33 is arranged on each controller 50; by identifying the controller marker 33, the terminal device 20 can obtain information such as the position and posture of the controller 50. In some implementations, after identifying the controller marker 33, the terminal device 20 also displays the virtual object corresponding to that marker and lets it interact with other virtual content. For example, in a game scene, the terminal device 20 displays the corresponding virtual game prop according to the controller marker 33, and the user can make the virtual game prop interact with other virtual content by operating the controller 50.
As one approach, after capturing a marker, the terminal device 20 can identify it and obtain its identity information to determine its category (scene marker 31, content display marker 32, controller marker 33, etc.). In some implementations, after identifying a controller marker 33, the terminal device 20 can establish a communication connection with the controller 50.
Referring to FIG. 8, a method for displaying virtual content of an embodiment of the present application, applied to the above terminal device, includes steps S810 to S830.
Step S810: identify a scene marker to determine the current scene in which the terminal device is located.
Scene markers are arranged at the entrances of real scene areas; different scenes can be given different scene markers, and scene markers can correspond to scenes one to one. When the terminal device identifies a scene marker, it can obtain the scene corresponding to the identified marker, i.e. the scene the terminal device is currently in. In some implementations, the terminal device can also obtain its position and posture information relative to the scene marker from the image containing the marker, so as to determine its position and posture in the whole real environment.
Step S820: obtain, from the server corresponding to the current scene, scene data matching the current scene.
Having determined its current scene, the terminal device can obtain scene data matching the current scene from the server corresponding to it. For example, when the current scene is an ocean-themed scene, the terminal device can establish a communication connection with the server corresponding to the ocean-themed scene and download ocean-related scene data from that server.
In some embodiments, the scene data may include modeling data, which can be used to build and render virtual content matching the current scene and may include vertex data, textures, maps, and the like for building three-dimensional virtual content. For example, for an ocean-themed scene, the scene data may include the three-dimensional model data of a virtual undersea world as well as model data of virtual marine life such as coral reefs, schools of fish, and marine plants.
Step S830: build virtual content according to the scene data.
Having obtained the scene data corresponding to the current scene, the terminal device can load it, build the virtual content corresponding to the current scene from it, and display it through the display module. The virtual content may include virtual scenes and virtual objects, and a virtual object may be static or dynamic.
For example, a terminal device in an ocean-themed scene downloads scene data matching the scene from the server; from this data it can build and display a three-dimensional virtual ocean scene while also superimposing onto it static virtual objects such as coral reefs and shipwrecks, and dynamic virtual objects such as schools of fish and marine plants.
For another example, when the terminal device moves from the ocean-themed scene to a fashion-themed scene, it can download scene data matching the fashion-themed scene from the server; from this data it can build and display a three-dimensional virtual stage scene and superimpose onto it static virtual objects such as art posters and garments, and dynamic virtual objects such as a fashion show and lighting.
In some implementations, the user can also interact with the virtual content in other ways, e.g. with gestures or by operating a controller, and data can be synchronized with other terminal devices through the server to enable multi-user interaction in the same virtual scene.
In some embodiments, a service desk for requesting terminal devices can also be provided; a user can request a terminal device at the desk, and the user or a staff member configures the device, which may include user setup, wireless configuration, controller matching, hardware installation, software setup and launch, and so on; the terminal device may also configure itself automatically. Once configuration is complete, the terminal device can obtain the user's information to authenticate the user's identity.
With the method of the above embodiment, virtual content associated with the current scene is displayed automatically by identifying the scene marker arranged in a specific scene.
Referring to FIG. 9, a method for displaying virtual content of another embodiment of the present application includes the following steps.
Step S910: obtain an image containing a scene marker.
Step S920: obtain the identity information of the scene marker based on the image.
The terminal device can capture an image containing the scene marker with its camera and identify the scene marker in the image to obtain its corresponding identity information; the scene markers arranged in different scenes differ, and so does the corresponding identity information.
Step S930: determine the current scene of the terminal device according to the identity information.
Each scene is given a different scene marker. The scene identifier corresponding to the scene marker's identity information can be obtained, and the current scene of the terminal device determined from the scene identifier; different scenes have different scene identifiers, and the correspondence between scene identifiers and scene marker identity information can be stored in advance. For example, the scene marker in the ocean scene differs from the one in the fashion scene: the ocean scene marker's identity information is "010" and the ocean scene identifier is "01", while the fashion scene marker's identity information is "020" and the fashion scene identifier is "02". The terminal device can obtain the corresponding scene identifier from the identity information of the identified scene marker to determine whether it is currently in the ocean scene, the fashion scene, or some other themed scene.
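A minimal sketch of this correspondence, using the example values from the text ("010" maps to ocean scene "01", "020" to fashion scene "02"); storing the mapping as a lookup table is an implementation assumption:

    MARKER_IDENTITY_TO_SCENE_ID = {
        "010": "01",  # ocean scene marker → ocean scene identifier
        "020": "02",  # fashion scene marker → fashion scene identifier
    }

    def scene_id_for(marker_identity):
        """Look up the scene identifier bound to a scene marker's identity info."""
        return MARKER_IDENTITY_TO_SCENE_ID.get(marker_identity)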
Step S940: establish a connection with the server corresponding to the current scene according to the identity information.
The terminal device can connect to the server corresponding to the current scene according to the obtained identity information of the scene marker. In some embodiments, each scene can be covered by a wireless network corresponding to it; the terminal device joins the scene's wireless network through the scene's wireless router and establishes a communication connection with the server. As one approach, a server can correspond to a single scene or to several different scenes, and terminal devices located in different scenes can request, over the wireless networks they are respectively connected to, to download from the server the scene data corresponding to their scene.
As one approach, the terminal device can obtain from the scene marker's identity information the network connection password corresponding to the current scene and connect to the current scene's router using that password.
Step S950: obtain, from the server corresponding to the current scene, scene data matching the current scene.
In some embodiments, the scene data may include a spatial map and modeling data, where the spatial map can be a virtual map (two- or three-dimensional) built from the real environment and can be used to locate the terminal device's position in real space.
The position and posture information of the terminal device in the real scene can be obtained. The position information includes the terminal device's position coordinates in the real scene, which can be coordinates in a spatial coordinate system whose origin is the scene marker. Besides the position coordinates, it can also include the area location of the scene the terminal device is in, obtainable from the spatial map; e.g. when the terminal device is currently in an education scene, the spatial map shows the education scene to be in the middle area of the building's second floor. The posture information can be information such as the rotation and orientation of the terminal device.
As an implementation, the terminal device captures an image containing the scene marker with its camera, identifies the scene marker, obtains the relative position and posture information between the terminal device and the scene marker, obtains the placement position of the scene marker in the real scene from the spatial map, and determines the terminal device's position and posture in the real scene from the relative position and posture information together with the marker's placement position. While moving through the scene, the terminal device can also capture images of content markers and obtain its position and posture in the real scene from them; when no image of a scene marker or content marker can be captured, the terminal device can also obtain its position and posture in the real scene in real time through VIO (visual-inertial odometry).
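The pose determination above amounts to a transform composition. A minimal sketch with 4x4 homogeneous matrices (numpy only): given the scene marker's placement pose in the spatial map, T_world_marker, and the marker's pose relative to the device camera recovered from the image, T_device_marker, the device pose in the real scene follows directly:

    import numpy as np

    def device_pose_in_world(T_world_marker, T_device_marker):
        """T_device_marker maps marker coordinates to device coordinates, so
        T_world_device = T_world_marker @ inv(T_device_marker)."""
        return T_world_marker @ np.linalg.inv(T_device_marker)

When neither a scene marker nor a content marker is visible, the same world pose would instead be propagated with the VIO-reported pose deltas.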
Step S960: render the virtual content according to the scene data.
The terminal device can load the scene data matching the current scene and build three-dimensional models of the virtual content from the model data contained in it, including three-dimensional models of virtual scenes, virtual objects, and the like that match the real scene.
In some embodiments, the three-dimensional virtual content models can be rendered in real time according to the terminal device's position and posture information in the real scene, and the rendered virtual content displayed.
The rendering coordinates of the virtual content in the virtual space can be fixed relative to the world coordinate origin, and the rendering coordinates are associated with the real scene; e.g. the displayed virtual content can be fixed relative to the position of a marker arranged in the scene, or matched to different areas of the real scene. When the user is at different positions in the real scene, i.e. when the terminal device's position and posture information in the real scene differs, different virtual content can be observed. For example, when the user is at the entrance of the ocean scene and the terminal device identifies the scene marker, it can obtain its position and posture relative to the marker, render a virtual entrance guide accordingly, and display it; the user can observe this virtual entrance guide, and as the user stands at different positions in the ocean scene or turns the terminal device's viewing angle, the terminal device can render and display different virtual ocean scenes and virtual marine life.
Step S970: display the virtual content.
The method for displaying virtual content of the above embodiment renders virtual content in real time according to the terminal device's position and posture information; when the user is at different positions or turns to different viewing angles, different virtual content can be displayed, enriching the visual effects of AR/VR and improving the sense of realism and immersion.
Referring to FIG. 10, in an example, the above method for displaying virtual content may further include the following steps.
Step S1010: obtain the relative spatial position relationship between the terminal device and a preset scene based on the position and posture information in real space, so as to determine the orientation information of the preset scene relative to the terminal device.
Multiple scenes can be preset in real space. The terminal device's position and posture in real space are obtained from the spatial map corresponding to the real space, from which the relative orientation information between each preset scene in real space and the terminal device can be determined, where the relative orientation information includes the direction and distance of the preset scene relative to the terminal device.
Step S1020: superimpose and display the scene icon corresponding to the preset scene on the area corresponding to the orientation information.
The terminal device can superimpose and display the corresponding scene icon on the area of its field of view corresponding to the preset scene's orientation information, where the terminal device's field of view can be understood as the range the user's eyes can see through the terminal device, and scene icons can be used to identify different scenes, e.g. a scene's name, pattern, or number. When a preset scene is determined to be within the terminal device's field of view from the scene's direction information and the terminal device's position and posture in real space, a scene icon can be superimposed to guide the user to the preset scene. The superimposed position of the scene icon in real space can match the direction information of the preset scene; e.g. the scene icon can be superimposed at the position of the preset scene within the field of view, or on the area matching the direction in which the preset scene lies. From the scene icons the user can accurately learn which scenes are within the field of view and where each scene lies.
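As an illustrative sketch of step S1020, assuming a 2D ground-plane layout, a counter-clockwise-positive yaw convention, and a 90-degree horizontal field of view (all assumptions, not fixed by the source): compute the scene's bearing relative to the device's facing direction and decide whether the icon sits at the scene's direction or is clamped to a view edge:

    import numpy as np

    H_FOV_DEG = 90.0  # hypothetical horizontal field of view of the display

    def icon_anchor(device_xy, device_yaw_deg, scene_xy):
        """Return (placement, bearing_deg, distance) for a preset scene's icon."""
        d = np.asarray(scene_xy, float) - np.asarray(device_xy, float)
        distance = float(np.hypot(d[0], d[1]))
        bearing = np.degrees(np.arctan2(d[1], d[0])) - device_yaw_deg
        bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if abs(bearing) <= H_FOV_DEG / 2:
            return "in_view", bearing, distance      # overlay at the scene itself
        edge = "left_edge" if bearing > 0 else "right_edge"
        return edge, bearing, distance               # clamp icon to the view edge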
Referring to FIGS. 11a and 11b, through the terminal device the user can see that the field of view 1100 contains scene 1 and scene 2, where scene 1 is an ocean scene and scene 2 a starry-sky scene; scene 1 is to the upper left of the terminal device, scene 2 to the upper right, and scene 2 is farther away than scene 1. The ocean scene's icon 1102 is superimposed at scene 1's position and the starry-sky scene's icon 1102 at scene 2's position; the ocean scene's icon 1104 is superimposed on the area between the terminal device and scene 1's direction (upper left), and the starry-sky scene's icon 1104 on the area between the terminal device and scene 2's direction (upper right).
In some embodiments, when a preset scene lies outside the terminal device's field of view, the scene icon corresponding to that scene can also be displayed; this icon can be superimposed at the edge of the field of view corresponding to the scene's orientation information, helping the user quickly locate preset scenes outside the field of view. As shown in FIG. 11c, through the terminal device the user can see scene 1 within the field of view 1100, while scene 2 lies outside it, to the upper right of the terminal device; the terminal device can superimpose the ocean scene's icon 1102 at scene 1's position and display the starry-sky scene's icon 1102 at the right edge of the field of view 1100. The form of the scene icons and the positions at which they are superimposed are not limited to those described above.
In some embodiments, when displaying a preset scene's icon, other information such as the actual distance between the terminal device and the preset scene can also be displayed. As one approach, when the terminal device is inside a scene, only that scene's icon may be displayed, or the icon may be temporarily hidden to avoid interfering with the user's viewing.
In some embodiments, the terminal device's posture information can be detected and its orientation determined from it; this orientation can represent the direction in which the eyes of the user wearing the terminal device are facing. When the terminal device's orientation is consistent with a preset scene's orientation information, the scene description information corresponding to that preset scene can be displayed, where the scene description information may include, without limitation, the preset scene's name, introduction (which may include text and video), popularity (the number of visitors within a time period), estimated time of arrival (the time the user is expected to take to walk to the preset scene), estimated queueing time (if there are too many viewers, queueing may be needed), and other information. As one approach, when the preset scene is in the middle area of the field of view, i.e. the user's line of sight points straight at the preset scene, the terminal device's orientation can be considered consistent with the preset scene's orientation information.
As shown in FIG. 11d, through the terminal device the user can see scene 1 within the field of view 1100. When the terminal device's orientation is consistent with scene 1's orientation information, the terminal device can superimpose the ocean scene's icon 1102 at scene 1's position and display scene 1's scene description information 1106, which may include the current number of visitors, the estimated waiting time, a scene introduction, and so on. As one approach, while displaying the scene description information, the scene can also be described to the user audibly through the terminal device's audio output unit.
In some implementations, when the terminal device is inside a scene, that scene's icon can be displayed. As one approach, the closer the terminal device is to the scene's entrance or exit, the lower the icon's transparency can be made or the larger the icon, making the displayed icon more prominent; the farther the terminal device is from the entrance or exit, the higher the transparency or the smaller the icon.
When the terminal device identifies a new scene marker, the scene it is in has changed; from the new scene marker it can connect to the server corresponding to the new scene, download the new scene's data from that server, and build and display the virtual content. When the terminal device's scene changes, the scene icon for the new scene can be obtained and the displayed icon of the previous scene replaced with it.
As one approach, after the user finishes, the terminal device can also upload the user's operation records during the session to the server in the form of a log; e.g. the operation records may include the scenes the terminal device visited, the interactions performed, and so on, facilitating subsequent statistics of user preferences and optimization of the virtual display experience.
The content display method provided in the above embodiment displays the scene icons of the scenes, identifying and guiding the scenes, which can enrich the visual effects of AR/VR and improve the sense of realism and immersion.
In an embodiment, the present application further provides an information prompting method: a terminal device captures a target image containing a marker with its camera and obtains from the target image the relative spatial position relationship between the terminal device and the marker; when that relationship meets a preset condition, prompt information is generated, where the preset condition concerns at least one of the position and the posture of the marker.
Referring to FIG. 12a, in some embodiments the relative spatial position relationship includes the target distance between the terminal device and the marker; the terminal device can determine whether the target distance exceeds a first distance threshold, and when it does, generate prompt information. As one approach, after capturing the target image, the terminal device can analyze the contour size of the marker within it and look up, in a correspondence between distances and contour sizes, the distance corresponding to the marker's contour size, so as to determine the target distance between the terminal device and the marker. The target distance can also be obtained in real time: for example, a depth-map lens can be used to generate a real-time map of the distances from the marker to the lens so as to obtain the distance between the terminal device and the marker in real time; alternatively, magnetic tracking, acoustic tracking, inertial tracking, optical tracking, or multi-sensor fusion can be used to obtain the distance between the marker and the terminal device in real time, without specific limitation.
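A minimal sketch of the contour-size-to-distance lookup described above; the calibration pairs are illustrative placeholders measured at known distances, and np.interp interpolates between them (a larger contour area meaning a closer marker):

    import numpy as np

    CAL_AREAS = np.array([400.0, 1600.0, 6400.0, 25600.0])  # pixels², increasing
    CAL_DISTS = np.array([2.0, 1.0, 0.5, 0.25])             # metres at those areas

    FIRST_DISTANCE_THRESHOLD = 1.5  # metres, hypothetical threshold value

    def target_distance_from_contour(area_px):
        """Estimate the target distance from the marker's contour area."""
        return float(np.interp(area_px, CAL_AREAS, CAL_DISTS))

    # prompt information is generated when the threshold is exceeded, e.g.:
    # if target_distance_from_contour(area) > FIRST_DISTANCE_THRESHOLD: prompt()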
In some embodiments, the relative spatial position relationship includes the distance between the marker's position and the boundary of the camera's visual range; the terminal device can determine whether that distance is below a second distance threshold, and when it is, generate prompt information. As shown in FIG. 12b, the camera's visual range is the range within which the camera can capture images, and the boundary of the visual range is the edge of the area corresponding to it. As one approach, as in FIG. 12c, L1 and L2 are the horizontal boundaries of the horizontal field of view and L3 and L4 the vertical boundaries of the vertical field of view; the target image's horizontal boundaries can be taken as the horizontal field of view and its vertical boundaries as the vertical field of view, and the marker's position can be obtained by analyzing the pixel coordinates of the marker image within the target image. As an implementation, with the intersection of L1 and L4 as the target image's origin, the distance between the marker's position and the boundary of the camera's visual range may include the distance d1 between the marker and L1, the distance d2 between the marker and L4, the distance d3 between the marker and L2, or the distance d4 between the marker and L3; the smallest of d1, d2, d3, and d4 can be taken as the distance between the marker's position and the boundary of the camera's visual range.
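A minimal sketch of the d1 to d4 computation, taking the intersection of L1 and L4 as the image origin as in the text (treating L1 as the top edge and L4 as the left edge is an assumption about the figure):

    def boundary_distance(marker_x, marker_y, image_width, image_height):
        """Smallest distance from the marker's pixel position to the four
        boundaries of the camera's visual range (the target image edges)."""
        d1 = marker_y                 # distance to L1, horizontal boundary at origin
        d2 = marker_x                 # distance to L4, vertical boundary at origin
        d3 = image_height - marker_y  # distance to L2, opposite horizontal boundary
        d4 = image_width - marker_x   # distance to L3, opposite vertical boundary
        return min(d1, d2, d3, d4)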
In some embodiments, the relative spatial position relationship includes the posture information of the marker relative to the terminal device, the posture information including a rotation angle; the terminal device can determine whether the rotation angle exceeds a preset rotation-angle value, and when it does, generate prompt information. As shown in FIG. 12d, the posture information of the marker relative to the terminal device includes information such as the marker's rotation direction and rotation angle. As one approach, target feature points of the marker can be used to determine the marker's posture information; the target feature points are a specific number of feature points arbitrarily selected from all the feature points in the target image, and from the target feature points' pixel coordinates in the target image and their real physical coordinates on the marker, information such as the marker's position, rotation direction, and rotation angle relative to the terminal device can be obtained.
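This pixel-to-physical correspondence is the classic perspective-n-point (PnP) setup. A minimal sketch with OpenCV's solvePnP, where the marker corner coordinates, matched pixel points, and camera intrinsics are illustrative placeholders:

    import cv2
    import numpy as np

    object_pts = np.array([[0.0, 0.0, 0.0], [0.08, 0.0, 0.0],
                           [0.08, 0.08, 0.0], [0.0, 0.08, 0.0]])  # metres on marker
    image_pts = np.array([[320.0, 240.0], [400.0, 238.0],
                          [402.0, 320.0], [322.0, 322.0]])        # matched pixels
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])                               # camera intrinsics

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)       # rotation matrix of marker w.r.t. camera
        rotation_angle = float(np.degrees(np.linalg.norm(rvec)))  # axis-angle size
        # tvec gives the marker's position; rotation_angle feeds the threshold check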
In some embodiments, the posture information further includes a rotation direction; the terminal device can obtain the preset rotation-angle value corresponding to the rotation direction and determine whether the rotation angle exceeds it, generating prompt information when the rotation angle exceeds the preset rotation-angle value corresponding to the rotation direction. The preset rotation-angle value is a critical angle set in advance; beyond it, the front of the marker (the side bearing the marker pattern) cannot be captured by the camera.
In some embodiments, the terminal device can determine the marker's position and posture changes from the marker's image positions in multiple frames of the target image, obtain predicted motion information of the terminal device and/or the marker from these changes, determine from the predicted motion information whether the preset condition is met, and generate prompt information when it is. The predicted motion information may include predictions of motion direction, motion speed, and rotation direction. As a specific approach, the terminal device can obtain the history images containing the marker over several consecutive frames before the current target image and obtain the marker's pixel coordinates in each history image; from the marker's pixel coordinates in the consecutive frames, the marker's trajectory can be fitted.
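A minimal sketch of the trajectory fit: a linear least-squares fit over the marker's pixel track in the last few frames, extrapolated one frame ahead (a degree-1 fit is an assumption; the source does not fix the model):

    import numpy as np

    def predict_next_pixel(history_xy):
        """history_xy: (N, 2) array of the marker's pixel coordinates over N
        consecutive frames; returns the predicted position at frame N."""
        history_xy = np.asarray(history_xy, dtype=float)
        n = len(history_xy)
        t = np.arange(n)
        fx = np.polyfit(t, history_xy[:, 0], 1)  # x(t) ≈ a·t + b
        fy = np.polyfit(t, history_xy[:, 1], 1)  # y(t) ≈ c·t + d
        return np.array([np.polyval(fx, n), np.polyval(fy, n)])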
As one approach, the motion direction can be obtained from the change in the target distance between the terminal device and the marker, and whether to generate prompt information decided from the motion direction and the target distance together. When the distance between the marker and the terminal device is decreasing, i.e. the marker is moving toward the terminal device, prompt information need not be generated; when the distance is increasing, i.e. the marker is moving away from the terminal device, prompt information can be generated.
As one approach, the marker's motion direction can also be combined with the change in the distance between the marker's position and the boundary of the camera's field of view to decide together whether to generate prompt information. When the distance between the marker's position and the boundary of the camera's visual range is below the second distance threshold, prompt information need not be generated if the marker is moving toward the center of the visual range, and can be generated when the marker is moving toward the boundary line of the visual range.
It should be noted that the terminal device can generate prompt information as soon as at least one of the three relative spatial position relationships meets its preset condition: the target distance between the terminal device and the marker, the distance between the marker's position and the boundary of the camera's field of view, and the marker's posture information relative to the terminal device. The three relationships can also be combined with one another to decide whether to generate prompt information.
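A minimal sketch of that "any one of the three" rule, with illustrative threshold values (the source does not fix them):

    FIRST_DISTANCE_THRESHOLD = 1.5    # metres: marker too far from the device
    SECOND_DISTANCE_THRESHOLD = 40.0  # pixels: marker too close to the view boundary
    ROTATION_ANGLE_PRESET = 60.0      # degrees: marker face about to be lost

    def should_prompt(target_distance, boundary_distance, rotation_angle):
        """Generate prompt information when any relationship meets its condition."""
        return (target_distance > FIRST_DISTANCE_THRESHOLD
                or boundary_distance < SECOND_DISTANCE_THRESHOLD
                or rotation_angle > ROTATION_ANGLE_PRESET)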
In an embodiment, the prompt information may include at least one of an image prompt, a voice prompt, and a vibration prompt. The image prompt may be an arrow prompt, an expression prompt, or another form of image prompt, and while an image prompt is shown, the positional relationship between the terminal device and the marker can be displayed on the terminal device in real time so that the user can adjust accordingly. The voice prompt can be set according to the user's preference; the sound may be a default voice, a child's voice, a celebrity's voice, the user's own voice, or the like. The vibration prompt can produce its effect through a vibrator or the like, and the vibration can strengthen steadily the longer the prompt lasts.
For example, as shown in FIG. 13a, when the relative spatial position relationship meets the preset condition, the terminal device can use a virtual "sad" expression to warn the user that the marker is about to stop displaying normally; as shown in FIG. 13b, an "arrow" can be used to warn the user that the marker is about to stop displaying normally.
In some embodiments, when the marker is outside the visual range of the terminal device's camera, the terminal device's degree-of-freedom information can be obtained in real time through visual-inertial odometry, the position and direction of the marker relative to the terminal device determined from this information, and prompt information generated from that position and direction. As one approach, while the marker is within the camera's visual range, the terminal device's current position can be taken as the starting point and the terminal device's position change and posture relative to the starting point computed continuously via VIO; when the marker is outside the visual range, the terminal device's position change and posture relative to the starting point can be obtained and the starting point's position re-determined, so as to obtain the marker's real-time position and posture.
As one approach, it can also be determined whether the marker's rotation angle exceeds a preset angle value, and prompt information generated when it does. Different rotation directions of the marker can correspond to different preset angle values; the terminal device can obtain the marker's rotation direction, obtain the preset angle value corresponding to that direction, and determine whether the rotation angle exceeds it.
With the information prompting method of the above embodiments, when the relative spatial position relationship between the terminal device and the marker is detected to meet a preset condition, the marker may become impossible to identify accurately; prompt information is therefore generated to remind the user to adjust the relative spatial position relationship between the terminal device and the marker so that the marker can be identified accurately, improving the accuracy of the terminal device's display of virtual content.
In an embodiment, a computer-readable storage medium is further provided, in which program code is stored; the program code can be invoked by a processor to perform the method described in the above embodiments.
The computer-readable storage medium may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium includes a non-transitory computer-readable storage medium. The computer-readable storage medium has storage space for program code that performs any of the method steps of the above methods. The program code can be read from, or written into, one or more computer program products, and may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A communication connection method, characterized in that the method comprises:
    capturing an image containing a marker, and identifying the marker in the image;
    when the marker is a marker of a controller, obtaining an identification code of the controller corresponding to the marker, the identification code being the identification code used for pairing when the controller establishes a communication connection; and
    establishing a communication connection with the controller based on the identification code.
  2. The method according to claim 1, characterized in that obtaining the identification code of the controller corresponding to the marker comprises:
    scanning for an identification code broadcast by a controller;
    matching the scanned identification code against the marker; and
    when the match succeeds, determining that the scanned identification code is the identification code of the controller corresponding to the marker.
  3. The method according to claim 2, characterized in that the identification code comprises a scene identifier of the controller;
    after the scanning for the identification code broadcast by the controller, the method further comprises:
    matching the scene identifier contained in the scanned identification code against the scene identifier of the current scene.
  4. The method according to claim 1, characterized in that establishing the communication connection with the controller based on the identification code comprises:
    detecting the position of the controller, and when the controller is at a preset position, establishing the communication connection with the controller based on the identification code.
  5. The method according to claim 1, characterized in that, after the identifying of the marker in the image, the method further comprises:
    when the marker is a controller marker, generating connection prompt information, the connection prompt information being used to prompt establishing a communication connection with the controller.
  6. The method according to claim 1, characterized in that, after the identifying of the marker in the image, the method further comprises:
    when the marker is a scene marker, obtaining, according to the scene marker, a wireless network connection password corresponding to the current scene; and
    connecting, by means of the wireless network connection password, to the wireless router of the current scene, the wireless router of a scene forming a wireless mesh network with a plurality of controllers.
  7. The method according to claim 6, characterized in that, before the obtaining of the identification code of the controller corresponding to the marker, the method further comprises:
    when another controller in the wireless mesh network is pairing, generating waiting prompt information, the waiting prompt information being used to prompt waiting for the other controller to finish pairing.
  8. The method according to claim 1, characterized in that the method further comprises:
    obtaining pairing-finished information broadcast by the controller; and
    generating, based on the pairing-finished information, connection result prompt information, the connection result prompt information being used to indicate that the connection with the controller succeeded or failed.
  9. A terminal device, characterized by comprising a memory and a processor, the memory being coupled to the processor; the memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps:
    capturing an image containing a marker, and identifying the marker in the image;
    when the marker is a marker of a controller, obtaining an identification code of the controller corresponding to the marker, the identification code being the identification code used for pairing when the controller establishes a communication connection; and
    establishing a communication connection with the controller based on the identification code.
  10. The terminal device according to claim 9, characterized in that obtaining the identification code of the controller corresponding to the marker comprises:
    scanning for an identification code broadcast by a controller;
    matching the scanned identification code against the marker; and
    when the match succeeds, determining that the scanned identification code is the identification code of the controller corresponding to the marker.
  11. The terminal device according to claim 10, characterized in that the identification code comprises a scene identifier of the controller; after performing the step of scanning for the identification code broadcast by the controller, the processor further performs the following step:
    matching the scene identifier contained in the scanned identification code against the scene identifier of the current scene.
  12. The terminal device according to claim 9, characterized in that establishing the communication connection with the controller based on the identification code comprises:
    detecting the position of the controller, and when the controller is at a preset position, establishing the communication connection with the controller based on the identification code.
  13. The terminal device according to claim 9, characterized in that, after performing the step of identifying the marker in the image, the processor further performs the following step:
    when the marker is a controller marker, generating connection prompt information, the connection prompt information being used to prompt establishing a communication connection with the controller.
  14. The terminal device according to claim 9, characterized in that, after performing the step of identifying the marker in the image, the processor further performs the following steps:
    when the marker is a scene marker, obtaining, according to the scene marker, a wireless network connection password corresponding to the current scene; and
    connecting, by means of the wireless network connection password, to the wireless router of the current scene, the wireless router of a scene forming a wireless mesh network with a plurality of controllers.
  15. The terminal device according to claim 14, characterized in that, before performing the step of obtaining the identification code of the controller corresponding to the marker, the processor further performs the following step:
    when another controller in the wireless mesh network is pairing, generating waiting prompt information, the waiting prompt information being used to prompt waiting for the other controller to finish pairing.
  16. The terminal device according to claim 14, characterized in that the processor further performs the following steps:
    obtaining pairing-finished information broadcast by the controller; and
    generating, based on the pairing-finished information, connection result prompt information, the connection result prompt information being used to indicate that the connection with the controller succeeded or failed.
  17. A wireless communication system, characterized by comprising:
    at least one marker;
    at least one controller, the marker being arranged on the at least one controller; and
    at least one terminal device configured to identify the marker arranged on the at least one controller, obtain an identification code of the at least one controller, and establish a communication connection with the at least one controller based on the identification code.
  18. The system according to claim 17, characterized in that the system further comprises:
    at least one wireless router configured to establish communication connections with the at least one terminal device and/or the at least one controller;
    when the at least one wireless router establishes communication connections with a plurality of the controllers, it forms a wireless mesh network with the plurality of the controllers.
  19. The system according to claim 18, characterized in that the at least one terminal device is further configured to generate waiting prompt information when another controller in the wireless mesh network is pairing, the waiting prompt information being used to prompt waiting for the other controller to finish pairing.
  20. The system according to claim 17, characterized in that the at least one controller is further configured to broadcast an identification code;
PCT/CN2019/104161 2018-09-03 2019-09-03 通信连接方法、终端设备及无线通信系统 WO2020048441A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/727,976 US11375559B2 (en) 2018-09-03 2019-12-27 Communication connection method, terminal device and wireless communication system

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201811023511.X 2018-09-03
CN201811021765.8A CN110875944B (zh) 2018-09-03 2018-09-03 通信连接方法、装置、终端设备及无线通信系统
CN201811021765.8 2018-09-03
CN201811023511.XA CN110873963B (zh) 2018-09-03 2018-09-03 内容显示方法、装置、终端设备及内容显示系统
CN201811368617.3 2018-11-16
CN201811368617.3A CN111198608B (zh) 2018-11-16 2018-11-16 信息提示方法、装置、终端设备及计算机可读取存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/727,976 Continuation US11375559B2 (en) 2018-09-03 2019-12-27 Communication connection method, terminal device and wireless communication system

Publications (1)

Publication Number Publication Date
WO2020048441A1 true WO2020048441A1 (zh) 2020-03-12

Family

ID=69722329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104161 WO2020048441A1 (zh) 2018-09-03 2019-09-03 通信连接方法、终端设备及无线通信系统

Country Status (2)

Country Link
US (1) US11375559B2 (zh)
WO (1) WO2020048441A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10832417B1 (en) * 2019-06-04 2020-11-10 International Business Machines Corporation Fusion of visual-inertial-odometry and object tracker for physically anchored augmented reality
CN110908504B (zh) * 2019-10-10 2021-03-23 浙江大学 An augmented reality museum collaborative interaction method and system
KR20210106651A (ko) * 2020-02-21 2021-08-31 삼성전자주식회사 Electronic device for sharing at least one object and control method thereof
CN111640235A (zh) * 2020-06-08 2020-09-08 浙江商汤科技开发有限公司 A queueing information display method and apparatus
CN112716760A (zh) * 2020-12-22 2021-04-30 未来穿戴技术有限公司 Massager connection method, massager, and terminal device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5638789B2 (ja) * 2009-11-12 2014-12-10 株式会社野村総合研究所 Marker usage method
CN106468993A (zh) * 2016-08-29 2017-03-01 乐视控股(北京)有限公司 Control method and apparatus for a virtual reality terminal device
CN107578487A (zh) * 2017-09-19 2018-01-12 北京枭龙科技有限公司 An inspection system based on augmented reality smart devices
CN107610238A (zh) * 2017-09-12 2018-01-19 国网上海市电力公司 A power equipment AR dynamic model system and its working method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2517682C3 (de) 1975-04-22 1980-09-11 Daimler-Benz Ag, 7000 Stuttgart Fuel injection valve for internal combustion engines
US7426537B2 (en) * 2002-05-31 2008-09-16 Microsoft Corporation Systems and methods for sharing dynamic content among a plurality of online co-users
US8187100B1 (en) * 2007-03-02 2012-05-29 Dp Technologies, Inc. Shared execution of hybrid states
TW200844857A (en) * 2007-05-07 2008-11-16 Vivotek Inc A method and architecture for linking wireless network devices
US8731169B2 (en) * 2012-03-26 2014-05-20 International Business Machines Corporation Continual indicator of presence of a call participant
US9041622B2 (en) * 2012-06-12 2015-05-26 Microsoft Technology Licensing, Llc Controlling a virtual object with a real controller device
US10362158B2 (en) * 2013-01-15 2019-07-23 Habit Analytics PT, LDA Appliance control system and method
US8943569B1 (en) * 2013-10-01 2015-01-27 Myth Innovations, Inc. Wireless server access control system and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194470A (zh) * 2021-04-28 2021-07-30 Oppo广东移动通信有限公司 Method and apparatus for establishing a wireless connection, and mobile terminal
CN113194470B (zh) * 2021-04-28 2023-03-31 Oppo广东移动通信有限公司 Method and apparatus for establishing a wireless connection, and mobile terminal

Also Published As

Publication number Publication date
US11375559B2 (en) 2022-06-28
US20200137815A1 (en) 2020-04-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19856731; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/07/2021))
122 Ep: pct application non-entry in european phase (Ref document number: 19856731; Country of ref document: EP; Kind code of ref document: A1)