WO2021057887A1 - Method and system for setting a presentable virtual object for a target - Google Patents

Method and system for setting a presentable virtual object for a target

Info

Publication number
WO2021057887A1
WO2021057887A1 PCT/CN2020/117640 CN2020117640W WO2021057887A1 WO 2021057887 A1 WO2021057887 A1 WO 2021057887A1 CN 2020117640 W CN2020117640 W CN 2020117640W WO 2021057887 A1 WO2021057887 A1 WO 2021057887A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target
camera
virtual object
position information
Prior art date
Application number
PCT/CN2020/117640
Other languages
English (en)
Chinese (zh)
Inventor
李江亮
牛旭恒
周硙
方俊
Original Assignee
北京外号信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京外号信息技术有限公司
Publication of WO2021057887A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/292 Multi-camera tracking
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Definitions

  • the present invention belongs to the field of augmented reality technology, and in particular relates to a method and system for setting a presentable virtual object for a target in a real scene.
  • In recent years, Augmented Reality (AR) technology has made great progress. Augmented reality technology, also known as mixed reality technology, superimposes virtual objects into the real scene through computer technology, so that the real scene and virtual objects can be presented in the same picture or space in real time, thereby enhancing the user's perception of the real world. Because augmented reality technology can enhance the display output of the real environment, it has been widely used in technical fields such as medical research and anatomy training, precision instrument manufacturing and maintenance, military aircraft navigation, engineering design, and remote robot control.
  • In some augmented reality applications, data can be superimposed at a fixed position in the field of view. For example, when a pilot is flying an airplane, he can view flight data superimposed on the real scene by wearing a display helmet. However, such data is displayed at a fixed position in the field of view (for example, always in the upper right corner) and therefore lacks sufficient flexibility; in particular, the superimposed virtual object cannot move with the movement of a real target in the real scene. In another augmented reality application, some virtual objects can be superimposed near the people photographed by the user's mobile phone, but this method is not accurate enough and is not suitable for AR interaction among multiple people.
  • the solution of the present invention provides a method and system for setting a presentable virtual object for a target in a real scene.
  • By using an optical communication device and a camera, a virtual object can be set for the target based on the position information of the target in the real scene; the associated virtual object can be accurately presented on the display medium of a device, and the presented virtual object can follow the target in the real scene.
  • One aspect of the present invention relates to a method for setting a presentable virtual object for a target in a real scene, wherein a camera and an optical communication device are installed in the real scene and have a relative pose. The method includes: using the camera to track a target in the real scene; obtaining position information of the target according to the tracking result of the camera; setting a virtual object with spatial position information associated with the target, wherein the spatial position information of the virtual object is determined based on the position information of the target; and sending the related information of the virtual object to a first device, the information including the spatial position information of the virtual object, wherein the related information of the virtual object can be used by the first device to present the virtual object on its display medium based on its position information and posture information determined by the optical communication device.
  • The obtaining of the position information of the target according to the tracking result of the camera includes: obtaining the position information of the target relative to the camera according to the tracking result of the camera; and determining the position information of the target relative to the optical communication device based on the position information of the target relative to the camera and the relative pose between the camera and the optical communication device.
  • the spatial location information of the virtual object is relative to the spatial location information of the optical communication device.
  • The obtaining of the position information of the target according to the tracking result of the camera includes: obtaining the position information of the target relative to the camera according to the tracking result of the camera; and determining the position information of the target in the real scene based on the position information of the target relative to the camera and the pose information of the camera in the real scene.
  • the related information of the virtual object is configured according to the information related to the target.
  • the method further includes: obtaining information related to the target according to the tracking result of the camera.
  • The configuring of the related information of the virtual object according to the information related to the target includes: receiving information related to the target from a facility; associating the information with a target tracked by the camera at a predetermined location of the facility; and configuring the related information of the virtual object of that target according to the information.
  • The configuring of the related information of the virtual object according to the information related to the target includes: receiving information from a second device, the information including the location information of the second device; comparing the location information of the second device with the location information of one or more targets determined according to the tracking result of the camera to determine the target that matches the second device; and configuring the related information of the virtual object of the matched target according to the information from the second device.
  • the second device determines its location information at least in part by collecting an image including the optical communication device and analyzing the image.
  • The method further includes: obtaining posture information of the target according to the tracking result of the camera; and the setting of a virtual object with spatial position information associated with the target includes setting the posture information of the virtual object based on the posture information of the target.
  • the posture of the virtual object can be adjusted according to a change in the position and/or posture of the first device relative to the virtual object.
  • The first device determines its position information and posture information relative to the optical communication device at least in part by collecting an image including the optical communication device and analyzing the image.
  • the related information of the virtual object to be sent to the first device is determined according to the information related to the first device.
  • Another aspect of the present invention relates to a system for setting a presentable virtual object for a target in a real scene, including: a camera installed in the real scene, which is used to track the target in the real scene; An optical communication device installed in the real scene, wherein the optical communication device and the camera have relative poses; and a server for executing the above method.
  • the system further includes a facility capable of obtaining information related to the target.
  • The system further includes the first device, which is configured to: receive the related information of the virtual object from the server; determine the position information and posture information of the first device at least in part through the optical communication device; and present the virtual object on the display medium of the first device based on the position information and posture information and the related information of the virtual object.
  • Another aspect of the present invention relates to a storage medium in which a computer program is stored, and when the computer program is executed by a processor, it can be used to implement the above method.
  • Another aspect of the present invention relates to an electronic device, including a processor and a memory, and a computer program is stored in the memory.
  • When the computer program is executed by the processor, it can be used to implement the above method.
  • Figure 1 shows an exemplary optical label
  • Fig. 2 shows a real scene including a system for setting a presentable virtual object for a target according to an embodiment
  • Fig. 3 shows a method for setting a presentable virtual object for a target according to an embodiment
  • Fig. 4 shows a schematic image observed by the first user in Fig. 2 through a mobile phone screen or AR glasses according to an embodiment.
  • Optical communication devices are also called optical tags, and these two terms can be used interchangeably in this article.
  • Optical tags can transmit information through different light-emitting methods, which have the advantages of long recognition distance and relaxed requirements for visible light conditions, and the information transmitted by optical tags can change over time, which can provide large information capacity and flexible configuration capabilities.
  • Compared with the traditional two-dimensional code, the optical label has a longer recognition distance and stronger information interaction capability, which can provide users with great convenience.
  • the optical tag usually includes a controller and at least one light source, and the controller can drive the light source through different driving modes to transmit different information to the outside.
  • Fig. 1 shows an exemplary optical label 100, which includes three light sources (respectively a first light source 101, a second light source 102, and a third light source 103).
  • the optical label 100 also includes a controller (not shown in FIG. 1), which is used to select a corresponding driving mode for each light source according to the information to be transmitted.
  • The controller can use different driving signals to control the light-emitting mode of the light source, so that when a device with an imaging function is used to photograph the optical label 100, the imaging of the light sources therein can show different appearances (for example, different colors, patterns, brightness, etc.).
  • By analyzing the imaging of the light sources in the optical label 100, the driving mode of each light source at that moment can be determined, so as to obtain the information transmitted by the optical label 100 at that moment.
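  • The disclosure does not fix a particular encoding, but a minimal illustrative sketch of such decoding is given below; it assumes, purely for illustration, that each captured frame yields an on/off state per light source and that the concatenated bits end with the tag's identification information (the bit layout and all names are hypothetical).

```python
from typing import List

def decode_optical_tag(frames: List[List[bool]], bits_per_id: int = 16) -> int:
    """frames: one entry per captured frame, each a list of per-light-source
    on/off states (True = lit). Returns the decoded ID as an integer."""
    bits = []
    for frame in frames:
        bits.extend(1 if lit else 0 for lit in frame)
    if len(bits) < bits_per_id:
        raise ValueError("not enough frames captured to decode an ID")
    value = 0
    for bit in bits[-bits_per_id:]:   # take the last bits_per_id bits as the ID
        value = (value << 1) | bit
    return value

# Example: six frames of a three-light-source tag -> 18 bits, last 16 form the ID.
observed = [[True, False, True]] * 6
print(hex(decode_optical_tag(observed)))
```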
  • Each optical tag can be assigned identification information (ID), which is used by the manufacturer, manager, or user of the optical tag to uniquely identify the optical tag.
  • The light sources can be driven by the controller in the optical tag to transmit the identification information outward, and a user can use a device to collect an image of the optical tag to obtain the identification information transmitted by the optical tag, so that a corresponding service can be accessed based on the identification information, for example, accessing a web page associated with the identification information, obtaining other information associated with the identification information (for example, the location information of the optical tag corresponding to the identification information), and so on.
  • The devices mentioned in this article can be, for example, devices carried or controlled by users (for example, mobile phones, tablets, smart glasses, AR glasses, smart helmets, smart watches, etc.), or machines that can move autonomously (for example, drones, driverless cars, robots, etc.).
  • The device can acquire an image containing the optical label through its camera, and can identify the information transmitted by the optical label by analyzing the imaging of the optical label (or of each light source in the optical label) in the image.
  • the identification information (ID) or other information of each optical label can be saved on the server, such as service information related to the optical label, description information or attributes related to the optical label, such as location information, model information, and Physical size information, physical shape information, posture or orientation information, etc.
  • the optical label may also have uniform or default physical size information and physical shape information.
  • the device can use the identified identification information of the optical tag to query the server to obtain other information related to the optical tag.
  • the location information of the optical tag may refer to the actual location of the optical tag in the physical world, which may be indicated by geographic coordinate information.
  • the server may be a software program running on a computing device, a computing device, or a cluster composed of multiple computing devices.
  • the optical tag may be offline, that is, the optical tag does not need to communicate with the server. Of course, it can be understood that online optical tags that can communicate with the server are also feasible.
  • Figure 2 shows a real scene including a system for setting presentable virtual objects for a target according to an embodiment.
  • The real scene can be, for example, a bank branch, in which there are three users, namely a first user, a second user, and a third user.
  • The first user may be a bank branch staff member, and the second user and the third user may be bank customers.
  • the first user carries a device capable of recognizing the optical tag, such as a mobile phone or AR glasses.
  • the system includes a camera, a light tag, and a server (not shown in FIG. 2), where the camera and the light tag are each installed in a real scene in a specific position and posture (hereinafter collectively referred to as "pose").
  • the server may obtain the respective pose information of the camera and the optical tag, and may obtain the relative pose information between the camera and the optical tag based on the respective pose information of the camera and the optical tag.
  • the server can also directly obtain the relative pose information between the camera and the optical tag.
  • the server can obtain a transformation matrix between the camera coordinate system and the optical tag coordinate system, and the transformation matrix may include, for example, a rotation matrix R and a displacement vector t between the two coordinate systems.
  • Based on the transformation matrix, coordinates in one coordinate system can be converted to coordinates in the other coordinate system.
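  • A minimal sketch of such a conversion is shown below (placeholder calibration values; R and t are assumed to be the rotation matrix and displacement vector between the camera coordinate system and the optical tag coordinate system described above).

```python
import numpy as np

# Placeholder calibration: rotation matrix and displacement vector from the
# camera coordinate system to the optical tag coordinate system.
R_cam_to_tag = np.eye(3)
t_cam_to_tag = np.array([2.0, 0.0, -1.5])   # metres, placeholder

def camera_to_tag(p_cam: np.ndarray) -> np.ndarray:
    """Convert a 3D point from camera coordinates to optical tag coordinates."""
    return R_cam_to_tag @ p_cam + t_cam_to_tag

def tag_to_camera(p_tag: np.ndarray) -> np.ndarray:
    """Inverse transform: optical tag coordinates back to camera coordinates."""
    return R_cam_to_tag.T @ (p_tag - t_cam_to_tag)

p_target_cam = np.array([0.4, -0.2, 3.0])   # a tracked target in the camera frame
print(camera_to_tag(p_target_cam))
```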
  • the pose information of the two may be manually calibrated, and the pose information may be stored in the server.
  • The camera may be installed in a fixed position with a fixed orientation, but it is understood that the camera may also be movable (for example, its position or direction can be changed), as long as its current pose information can be determined.
  • the server can set the current pose information of the camera and control the movement of the camera based on the pose information, or the camera itself or other devices can control the movement of the camera and send the current pose information of the camera to the server.
  • the system may include more than one camera, or more than one optical tag.
  • A scene coordinate system (which can also be referred to as a real-world coordinate system) can be established for the real scene. The transformation matrix between the camera coordinate system and the scene coordinate system can be determined based on the pose information of the camera in the real scene, and the transformation matrix between the optical tag coordinate system and the scene coordinate system can be determined based on the pose information of the optical tag in the real scene. In this way, coordinates in the camera coordinate system or the optical tag coordinate system can be converted to coordinates in the scene coordinate system, instead of transforming directly between the camera coordinate system and the optical tag coordinate system; it is understandable, however, that the relative pose information or transformation matrix between the camera and the optical tag can still be known by the server.
  • Having a relative pose between the camera and the optical tag means that there is objectively a relative pose between the two; it does not require the system to pre-store or use the relative pose information between them. For example, in one embodiment, only the pose information of the camera and the optical tag in the scene coordinate system may be stored in the system, and the relative pose of the two may not be calculated or used.
  • the camera is used to track a target in a real scene.
  • the target may be stationary or moving.
  • it may be a person, a stationary object, a movable object, etc. in the scene.
  • the system can track the positions of the first user, the second user, and the third user through a camera.
  • the camera may be, for example, a monocular camera, a binocular camera, or other forms of cameras.
  • the camera can be used to track the position of a person or an object in a real scene through various methods in the prior art. For example, in the case of using a single monocular camera, scene information (for example, information about the plane where people or objects in the scene are located) can be combined to determine the position information of the target in the scene.
  • In the case of a camera capable of providing depth information, the position information of the target can be determined according to the position of the target in the camera's field of view and the depth information of the target.
  • In the case of using a binocular camera or multiple cameras, the position information of the target can be determined according to the position of the target in the field of view of each camera.
  • the process of determining the location information of the target in the scene may be performed by a camera, and the corresponding result may be sent to the server.
  • the server may determine the location information of the target in the scene according to the image taken by the camera. The server may convert the determined position information of the target into position information in the light tag coordinate system or the scene coordinate system. After obtaining the location information of the target, the server may set a virtual object having spatial location information associated with the target, and the spatial location information of the virtual object may be determined based on the location information of the target. For example, the spatial position of the virtual object can be set to be directly above the corresponding target, or to be located in another location near the target.
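  • The following is a minimal sketch of this step under stated assumptions (a pinhole camera with known intrinsics and a depth value for the tracked target, e.g. from a depth or binocular camera; the intrinsics, the "up" axis, and the offset are placeholders): the target's pixel position and depth are back-projected into a 3D point in the camera frame, and the virtual object is placed a predetermined distance above it.

```python
import numpy as np

fx, fy, cx, cy = 1000.0, 1000.0, 960.0, 540.0   # placeholder camera intrinsics

def target_position_in_camera(u: float, v: float, depth: float) -> np.ndarray:
    """Back-project a pixel (u, v) with known depth (metres) into a 3D point
    expressed in the camera coordinate system (pinhole model)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def virtual_object_position(p_target: np.ndarray, height: float = 0.3) -> np.ndarray:
    """Place the virtual object a predetermined distance above the target.
    The camera's -y axis is assumed here to point 'up' in the scene."""
    return p_target + np.array([0.0, -height, 0.0])

p_cam = target_position_in_camera(u=1100.0, v=620.0, depth=4.2)
print(virtual_object_position(p_cam))
```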
  • the server can send related information about the virtual object to a device that can recognize the optical tag.
  • The server may send the related information of the virtual object associated with the second user and of the virtual object associated with the third user to the optical tag identification device carried by the first user.
  • the optical label recognition device has a camera and a display medium.
  • the recognition device can determine its position and posture information relative to the optical label by scanning the optical label.
  • The position and posture information can also be further converted to position and posture information in the scene coordinate system.
  • the recognition device can use the related information of the virtual object to present the corresponding virtual object at a suitable position on its display medium based on its position information and posture information.
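  • A minimal sketch of this rendering step is given below, assuming the virtual object's position is expressed in the optical tag coordinate system, the device's pose (R_dev, t_dev) in that same coordinate system has been determined as described above, and a pinhole camera model with placeholder intrinsics.

```python
import numpy as np

K = np.array([[1200.0, 0.0, 540.0],      # placeholder device camera intrinsics
              [0.0, 1200.0, 960.0],
              [0.0, 0.0, 1.0]])

def project_to_screen(p_obj_tag, R_dev, t_dev):
    """Return the pixel at which the virtual object should be drawn on the
    device's display, or None if the object lies behind the device camera.
    (R_dev, t_dev) is the pose of the device camera in the tag coordinate system."""
    p_cam = R_dev.T @ (np.asarray(p_obj_tag) - np.asarray(t_dev))  # tag -> camera frame
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Device 2 m in front of the tag, looking straight at it (placeholder pose).
R_dev = np.eye(3)
t_dev = np.array([0.0, 0.0, -2.0])
print(project_to_screen([0.1, -0.4, 0.0], R_dev, t_dev))
```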
  • the second user or the third user may also carry the optical tag identification device, and can observe virtual objects associated with other users through the display medium of the optical tag identification device in a similar manner to the first user.
  • the solution of the present invention is particularly suitable for multi-person AR interaction in the scene.
  • Fig. 3 shows a method for setting a presentable virtual object for a target according to an embodiment, which can be implemented using the above-mentioned system, and can include the following steps:
  • Step 301 Use the camera to track the target in the real scene.
  • Camera-based visual tracking technology involves detecting, extracting, identifying, or tracking targets in image sequences to obtain the target's position, posture, velocity, acceleration, or motion trajectory.
  • the visual tracking technology belongs to the existing technology in the field, and will not be repeated here.
  • In one embodiment, when the camera is tracking a target, it may only perform continuous image collection and provide the collected images as the tracking result to the server; the server may then analyze these images and determine the location information of each target. In another embodiment, the camera can also perform further processing on the collected images, such as image processing, target detection, target extraction, target recognition, and determination of the target position or posture, and the corresponding processing results can be provided as the tracking result to the server.
  • Step 302 Obtain the position information of the target according to the tracking result of the camera.
  • the server may receive the tracking result from the camera, and obtain the location information of the target in the real scene according to the tracking result.
  • the target position information finally obtained by the server may be the position information of the target in the camera coordinate system, the position information of the target in the optical tag coordinate system, or the position information of the target in the scene coordinate system.
  • The server can convert the target position between different coordinate systems according to the transformation matrices between them. For example, the server can first obtain the position information of the target relative to the camera according to the tracking result of the camera (that is, the position information in the camera coordinate system), and then determine the position information of the target relative to the optical tag (that is, the position information in the optical tag coordinate system) according to the position information of the target relative to the camera and the relative pose information between the camera and the optical tag, or determine the position information of the target in the real scene (that is, the position information in the scene coordinate system) according to the position information of the target relative to the camera and the pose information of the camera in the real scene.
  • the server may also obtain the posture information of the target according to the tracking result of the camera, for example, the orientation of a person or an object.
  • Step 303 Set a virtual object with spatial location information associated with the target.
  • the server may set an associated virtual object for the target.
  • the virtual object has spatial location information, which can be determined based on the location information of the target.
  • the spatial position of the virtual object may be configured to be located at a predetermined distance above the target position.
  • the spatial position information of the virtual object may be, for example, its spatial position information relative to the light tag, or its position information in the scene coordinate system.
  • the server may also configure any other information related to the virtual object.
  • The information related to the virtual object is the information used to describe the virtual object. For example, it can include pictures, text, numbers, or icons contained in the virtual object, and it can also include shape information, color information, size information, posture information, etc. of the virtual object.
  • Based on this information, the device can present the corresponding virtual object.
  • the server can configure the related information of the virtual object corresponding to the target according to the information related to the target. In this way, the corresponding virtual object can be customized for each target.
  • the server may obtain information related to the target according to the tracking result of the camera. For example, the server can determine whether the target is a person or an object, whether the target is a man or a woman, what the target is, whether the target is moving or stationary, the moving speed of the target, the moving direction of the target, etc., according to the tracking result of the camera.
  • the target-related information can be used to configure the relevant information of the virtual object corresponding to the target.
  • other facilities may also be used to obtain target-related information.
  • For example, the identity information of a person can be obtained through facilities such as fingerprint or vein collection equipment, face recognition equipment, and ID card reading equipment, and the gender information, occupation information, identity information, membership card information, etc. of a person can be obtained through facilities such as card readers.
  • The server can receive the information from these facilities and associate it with a person currently tracked by the camera at a predetermined location relative to the facility (for example, in front of, behind, to the left or right of, above, below, or near the facility), so that the person's information (for example, gender information, occupation information, identity information, membership card information, etc.) can be used to configure the related information of the virtual object corresponding to the person.
  • the related information of the object can also be obtained through the facilities in the scene, and the related information of the virtual object corresponding to the object can be configured with this information.
  • the server may also set the posture information of the virtual object.
  • the posture information of the virtual object can be set based on the posture information of the target, but it can also be set in other ways.
  • Step 304 Send the related information of the virtual object to a device capable of recognizing the optical tag, and the information includes the spatial position information of the virtual object.
  • the server may send the related information of the virtual object to a device capable of recognizing the optical tag.
  • the device may be a mobile phone, AR glasses, etc., which has a camera and a display medium.
  • the related information of the virtual object includes the spatial position information of the virtual object, and in one embodiment, it may also include the posture information of the virtual object.
  • the related information of the virtual object to be sent to the device may be determined according to the information related to the device.
  • the information related to the device may include related information of the device itself, related information of the user of the device, and so on. For example, a part or all of the related information of the virtual object can be selected and sent to the device according to different permissions or different levels of the device user.
  • The optical label identification device can identify the information (such as identification information) conveyed by an optical label by scanning the optical labels arranged in the scene, and can use the information to access the server (for example, through wireless signals) to obtain the related information of the virtual object from the server.
  • the server may use the optical tag to transmit the related information of the virtual object to the optical tag identification device in an optical communication manner.
  • the related information of the virtual object can be used by the optical tag recognition device to present the virtual object on its display medium based on its position information and posture information determined by the optical tag.
  • the position and posture information of the device can be determined by the optical tag.
  • The position and posture information can be the position and posture information of the device relative to the optical tag (that is, the position and posture information in the optical tag coordinate system), but it can also be position and posture information converted to another coordinate system (such as the scene coordinate system).
  • The device can determine its position information relative to the optical label by acquiring an image including the optical label and analyzing the image. For example, the device can determine the relative distance between the optical tag and the device through the imaging size of the optical tag in the image and optional other information (for example, the actual physical size of the optical tag, the focal length of the device's camera): the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance.
  • the device may use the identification information of the optical tag to obtain the actual physical size information of the optical tag from the server, or the optical tag may have a uniform physical size and store the physical size on the device.
  • The device may determine its orientation information relative to the optical label according to the perspective deformation of the imaging of the optical label in the image and optional other information (for example, the imaging position of the optical label).
  • the device may use the identification information of the optical tag to obtain the physical shape information of the optical tag from the server, or the optical tag may have a unified physical shape and store the physical shape on the device.
  • the device can also directly obtain the relative distance between the optical label and the device through a depth camera or binocular camera installed on it.
  • the device can also use any other existing positioning method to determine its position information relative to the optical tag.
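  • The imaging-size approach described above reduces, under a pinhole model, to distance ≈ focal length (in pixels) × actual size ÷ imaging size (in pixels); a minimal sketch follows (placeholder values, with the physical size assumed to come from the server or a uniform default stored on the device).

```python
def estimate_distance(tag_physical_height_m: float,
                      tag_image_height_px: float,
                      focal_length_px: float) -> float:
    """Rough distance from the device to the optical tag, in metres,
    using the pinhole relation: distance = f * real_size / image_size."""
    return focal_length_px * tag_physical_height_m / tag_image_height_px

# A 0.30 m tall tag imaged 90 px tall with a 1200 px focal length -> about 4 m away.
print(estimate_distance(0.30, 90.0, 1200.0))
```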
  • the device can scan the optical tag and determine its posture information relative to the optical tag based on the imaging of the optical tag.
  • If the imaging position or imaging area of the optical tag is at the center of the imaging field of view of the device, it can be assumed that the device is currently facing the optical tag.
  • the imaging direction of the optical tag can be further considered when determining the posture of the device. As the posture of the device changes, the imaging position and/or imaging direction of the optical tag on the device will change accordingly. Therefore, the posture information of the device relative to the optical tag can be obtained according to the imaging of the optical tag on the device.
  • the position and posture information of the device relative to the optical tag can also be determined in the following manner.
  • a coordinate system can be established based on the optical label, and the coordinate system can be referred to as the optical label coordinate system.
  • Some points on the optical label may be determined as some spatial points in the optical label coordinate system, and the coordinates of these spatial points in the optical label coordinate system may be determined according to the physical size information and/or physical shape information of the optical label.
  • Some points on the optical label may be, for example, the corners of the housing of the optical label, the end of the light source in the optical label, some identification points in the optical label, and so on.
  • the image points corresponding to these spatial points can be found in the image taken by the device camera, and the position of each image point in the image can be determined.
  • According to these spatial points and the corresponding image points, the pose information (R, t) of the device camera in the optical label coordinate system when the image was taken can be calculated, where R is the rotation matrix, which can be used to represent the posture information of the device camera in the optical label coordinate system, and t is the displacement vector, which can be used to represent the position information of the device camera in the optical label coordinate system.
  • the method of calculating R and t is known in the prior art.
  • For example, the 3D-2D PnP (Perspective-n-Point) method can be used to calculate R and t; in order not to obscure the present invention, a detailed introduction is omitted here.
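  • A minimal sketch of this 3D-2D PnP step using OpenCV's solvePnP is shown below; the tag corner coordinates, the detected image points, and the camera intrinsics are placeholder assumptions.

```python
import numpy as np
import cv2

# Assumed corner coordinates of the optical tag in the tag coordinate system
# (derived from its known physical size), and their detected image positions.
object_points = np.array([[-0.15, -0.05, 0.0],
                          [ 0.15, -0.05, 0.0],
                          [ 0.15,  0.05, 0.0],
                          [-0.15,  0.05, 0.0]], dtype=np.float64)
image_points = np.array([[412.0, 300.0],
                         [602.0, 305.0],
                         [598.0, 372.0],
                         [415.0, 368.0]], dtype=np.float64)
K = np.array([[1200.0, 0.0, 540.0],
              [0.0, 1200.0, 960.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                 # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)                # rotation vector -> rotation matrix
# (R, tvec) map tag coordinates into device camera coordinates; the device
# camera's pose in the tag coordinate system is therefore (R.T, -R.T @ tvec).
camera_position_in_tag = (-R.T @ tvec).ravel()
print(ok, camera_position_in_tag)
```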
  • After the device obtains its position and posture information through the optical tag, it can, for example, use its built-in acceleration sensor, gyroscope, camera, etc., together with methods known in the art (for example, inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.), to measure or track changes in its position and posture, so as to obtain new position and posture information of the device.
  • After obtaining the spatial position information of the virtual object and the position and posture information of the device, the virtual object can be superimposed at a suitable position in the real scene presented by the display medium of the device. In the case where the virtual object has posture information, the posture of the superimposed virtual object can be further determined.
  • the posture of the virtual object can be adjusted with the position and/or posture of the device relative to the virtual object, for example, a certain direction of the virtual object (for example, the front direction of the virtual object) always faces the device.
  • a direction from the virtual object to the device can be determined in space based on the positions of the device and the virtual object, and the posture of the virtual object can be determined based on the direction.
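  • A minimal sketch of this orientation adjustment is given below; it assumes both positions are expressed in the same coordinate system and that only the rotation about the vertical axis (taken here as y) is adjusted so that the object's front (taken here as +z) faces the device.

```python
import numpy as np

def facing_yaw(object_pos: np.ndarray, device_pos: np.ndarray) -> float:
    """Yaw angle (radians) about the vertical y axis that turns the virtual
    object's front (+z by assumption) toward the device."""
    d = device_pos - object_pos
    return float(np.arctan2(d[0], d[2]))

yaw = facing_yaw(np.array([0.0, 1.8, 0.0]),   # virtual object above a target
                 np.array([2.0, 1.6, 3.0]))   # device position
print(np.degrees(yaw))
```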
  • the device user can perform various interactive operations on the virtual object. For example, the device user can click on the virtual object to view its details, change the posture of the virtual object, change the size or color of the virtual object, add annotations on the virtual object, and so on.
  • the modified attribute information of the virtual object can be uploaded to the server. The server can update the related information of the stored virtual object based on the modified attribute information.
  • the method shown in FIG. 3 can be executed continuously or repeatedly executed periodically, so that the virtual object can always track the target in the scene.
  • the tracking target located in the real scene may be a device or a person holding the device, and information related to the corresponding virtual object may be set according to the related information of the device.
  • the device may, for example, use the above-mentioned method to determine its location information through the optical tag, and send the location information to the server.
  • the location information may be the location information of the device relative to the optical tag.
  • The location information sent by the device can be the location information obtained when the device scans the optical tag, or new location information measured or tracked after scanning the optical tag using the device's built-in acceleration sensor, gyroscope, camera, etc., through methods known in the art (for example, inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.).
  • The device can also send some other information to the server, such as the device identification number, the device user's name, the device owner's professional information, identity information, gender information, age information, account information of an application on the device, information related to an operation performed by the device, etc.
  • the server may compare the location information of the device with the location information of one or more targets determined according to the tracking result of the camera to determine a target that matches the device.
  • the server may select the one that is closest to the location information of the device from the location information of one or more targets, and consider that the corresponding target matches the device. After determining the target that matches the device, the server can configure the related information of the virtual object corresponding to the target according to the information from the device.
  • In this way, equipment in the scene (for example, robots, unmanned vehicles, etc.) or persons carrying a device (for example, a mobile phone, AR glasses, etc.) can upload relevant information to the server autonomously, eliminating the need to use special facilities (for example, card swiping devices, fingerprint collection devices, etc.) to obtain information related to the equipment or personnel.
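  • A minimal sketch of the matching step described above is given below (a simple nearest-neighbour rule; the distance threshold is an added assumption used to reject spurious matches).

```python
import numpy as np

def match_device_to_target(device_pos, target_positions, max_dist: float = 1.0):
    """target_positions: dict mapping target id -> 3D position expressed in the
    same coordinate system as device_pos. Returns the id of the nearest target
    within max_dist metres, or None if no tracked target is close enough."""
    device_pos = np.asarray(device_pos, dtype=float)
    best_id, best_dist = None, max_dist
    for target_id, pos in target_positions.items():
        dist = float(np.linalg.norm(np.asarray(pos, dtype=float) - device_pos))
        if dist < best_dist:
            best_id, best_dist = target_id, dist
    return best_id

targets = {"t1": [1.0, 0.0, 4.0], "t2": [3.5, 0.0, 2.0]}
print(match_device_to_target([3.3, 0.1, 2.2], targets))   # -> "t2"
```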
  • Fig. 4 shows a schematic image viewed by the first user in Fig. 2 through a mobile phone screen or AR glasses according to an embodiment, which includes a second user and a third user.
  • the first user may be a bank branch staff, and the second and third users may be bank customers.
  • The first user can observe, through the mobile phone screen or AR glasses, not only the second user and the third user, but also the virtual objects superimposed above each user's head (for example, "VIP", "normal", etc.); these virtual objects can be used to indicate the user's customer level, for example, whether the user is a VIP customer or an ordinary customer.
  • the customer level of the user can be obtained, for example, by reading the user's identity information, bank card information, etc., or by receiving the user's phone number information, bank account information, etc. from the user's mobile phone.
  • the present invention can be implemented in the form of a computer program.
  • the computer program can be stored in various storage media (for example, a hard disk, an optical disk, a flash memory, etc.), and when the computer program is executed by a processor, it can be used to implement the method of the present invention.
  • the present invention may be implemented in the form of an electronic device.
  • the electronic device includes a processor and a memory, and a computer program is stored in the memory.
  • When the computer program is executed by the processor, it can be used to implement the method of the present invention.
  • References herein to "each embodiment", "some embodiments", "one embodiment", or "an embodiment", etc. mean that the specific features, structures, or properties described in connection with the embodiment are included in at least one embodiment. Therefore, the appearances of the phrases "in various embodiments", "in some embodiments", "in one embodiment", or "in an embodiment" in various places throughout this document do not necessarily refer to the same embodiment.
  • In addition, specific features, structures, or properties can be combined in any suitable manner in one or more embodiments. Therefore, a specific feature, structure, or property shown or described in connection with one embodiment can be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or non-functional.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a method and system for setting a virtual object that can be presented for a target. A camera and an optical communication device are installed in a real scene and have a relative pose with respect to each other. The method includes: using the camera to track a target in the real scene; obtaining position information of the target according to the tracking result of the camera; setting a virtual object that has spatial position information and is associated with the target, the spatial position information of the virtual object being determined on the basis of the position information of the target; and sending related information of the virtual object to a first device, the information including the spatial position information of the virtual object, wherein the related information of the virtual object can be used by the first device to present the virtual object on a display medium of the first device on the basis of position information and pose information determined by means of the optical communication device.
PCT/CN2020/117640 2019-09-26 2020-09-25 Method and system for setting a presentable virtual object for a target WO2021057887A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910917441.0A CN112561952A (zh) 2019-09-26 2019-09-26 Method and system for setting a presentable virtual object for a target
CN201910917441.0 2019-09-26

Publications (1)

Publication Number Publication Date
WO2021057887A1 (fr) 2021-04-01

Family

ID=75029790

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117640 WO2021057887A1 (fr) 2019-09-26 2020-09-25 Method and system for setting a presentable virtual object for a target

Country Status (3)

Country Link
CN (1) CN112561952A (fr)
TW (1) TWI750822B (fr)
WO (1) WO2021057887A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415839A (zh) * 2022-01-27 2022-04-29 歌尔科技有限公司 一种信息显示方法、装置、设备及存储介质
CN116205952A (zh) * 2023-04-19 2023-06-02 齐鲁空天信息研究院 人脸识别与跟踪的方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339488A (zh) * 2016-08-30 2017-01-18 西安小光子网络科技有限公司 一种基于光标签的虚拟设施插入定制实现方法
CN106408667A (zh) * 2016-08-30 2017-02-15 西安小光子网络科技有限公司 基于光标签的定制现实方法
CN106446883A (zh) * 2016-08-30 2017-02-22 西安小光子网络科技有限公司 基于光标签的场景重构方法
US20180011167A1 (en) * 2015-11-18 2018-01-11 Abl Ip Holding Llc Method and system for dynamic reassignment of an identification code in a light-based positioning system
CN109936712A (zh) * 2017-12-19 2019-06-25 陕西外号信息技术有限公司 基于光标签的定位方法及系统

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI279142B (en) * 2005-04-20 2007-04-11 Univ Nat Chiao Tung Picture capturing and tracking method of dual cameras
DE102009049073A1 (de) * 2009-10-12 2011-04-21 Metaio Gmbh Verfahren zur Darstellung von virtueller Information in einer Ansicht einer realen Umgebung
US20140240354A1 (en) * 2013-02-28 2014-08-28 Samsung Electronics Co., Ltd. Augmented reality apparatus and method
CA2919392C (fr) * 2016-02-01 2022-05-31 Jean-Paul Boillot Appareil de telemetrie servant a surveiller la position d'un outil de traitement robotique
CA3050177A1 (fr) * 2017-03-10 2018-09-13 Brainlab Ag Navigation a realite augmentee medicale
CN110870300A (zh) * 2017-06-30 2020-03-06 Oppo广东移动通信有限公司 定位方法、装置、存储介质及服务器

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180011167A1 (en) * 2015-11-18 2018-01-11 Abl Ip Holding Llc Method and system for dynamic reassignment of an identification code in a light-based positioning system
CN106339488A (zh) * 2016-08-30 2017-01-18 西安小光子网络科技有限公司 一种基于光标签的虚拟设施插入定制实现方法
CN106408667A (zh) * 2016-08-30 2017-02-15 西安小光子网络科技有限公司 基于光标签的定制现实方法
CN106446883A (zh) * 2016-08-30 2017-02-22 西安小光子网络科技有限公司 基于光标签的场景重构方法
CN109936712A (zh) * 2017-12-19 2019-06-25 陕西外号信息技术有限公司 基于光标签的定位方法及系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415839A (zh) * 2022-01-27 2022-04-29 歌尔科技有限公司 一种信息显示方法、装置、设备及存储介质
WO2023142265A1 (fr) * 2022-01-27 2023-08-03 歌尔股份有限公司 Procédé et appareil d'affichage d'informations, dispositif, et support de stockage
CN116205952A (zh) * 2023-04-19 2023-06-02 齐鲁空天信息研究院 人脸识别与跟踪的方法、装置、电子设备及存储介质
CN116205952B (zh) * 2023-04-19 2023-08-04 齐鲁空天信息研究院 人脸识别与跟踪的方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN112561952A (zh) 2021-03-26
TW202114409A (zh) 2021-04-01
TWI750822B (zh) 2021-12-21

Similar Documents

Publication Publication Date Title
CN111989537B (zh) 用于在无约束环境中检测人类视线和手势的系统和方法
US11315526B2 (en) Transportation hub information system
US20230316682A1 (en) Beacons for localization and content delivery to wearable devices
US11614803B2 (en) Individually interactive multi-view display system for non-stationary viewing locations and methods therefor
US8860760B2 (en) Augmented reality (AR) system and method for tracking parts and visually cueing a user to identify and locate parts in a scene
US20150379770A1 (en) Digital action in response to object interaction
US11954268B2 (en) Augmented reality eyewear 3D painting
CN103365411A (zh) 信息输入设备、信息输入方法和计算机程序
US11869156B2 (en) Augmented reality eyewear with speech bubbles and translation
WO2021057887A1 (fr) Procédé et système permettant de définir un objet virtuel susceptible d'être présenté à une cible
US11263818B2 (en) Augmented reality system using visual object recognition and stored geometry to create and render virtual objects
US11195341B1 (en) Augmented reality eyewear with 3D costumes
Schütt et al. Semantic interaction in augmented reality environments for microsoft hololens
US20220157032A1 (en) Multi-modality localization of users
US20210406542A1 (en) Augmented reality eyewear with mood sharing
WO2021093703A1 (fr) Procédé et système d'interaction basés sur un appareil de communication optique
WO2021057886A1 (fr) Procédé et système de navigation basés sur un appareil de communication optique, et dispositif et support associés
US9041646B2 (en) Information processing system, information processing system control method, information processing apparatus, and storage medium
CN112581630B (zh) 一种用户交互方法和系统
CN112561953A (zh) 用于现实场景中的目标识别与跟踪的方法和系统
KR102245760B1 (ko) 테이블 탑 디바이스 및 이를 포함하는 테이블 탑 시스템
WO2020244576A1 (fr) Procédé de superposition d'objet virtuel sur la base d'un appareil de communication optique, et dispositif électronique correspondant
TWI759764B (zh) 基於光通信裝置疊加虛擬物件的方法、電子設備以及電腦可讀取記錄媒體
WO2022121606A1 (fr) Procédé et système d'obtention d'informations d'identification de dispositif ou d'utilisateur de celui-ci dans un scénario
WO2020244578A1 (fr) Procédé d'interaction faisant appel à un appareil de communication optique, et dispositif électronique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20867287

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20867287

Country of ref document: EP

Kind code of ref document: A1