CN112565165B - Interaction method and system based on optical communication device

Info

Publication number
CN112565165B
CN112565165B (application CN201910918154.1A)
Authority
CN
China
Prior art keywords
information
virtual object
position information
optical communication
server
Prior art date
Legal status
Active
Application number
CN201910918154.1A
Other languages
Chinese (zh)
Other versions
CN112565165A (en)
Inventor
方俊
牛旭恒
李江亮
Current Assignee
Beijing Whyhow Information Technology Co Ltd
Original Assignee
Beijing Whyhow Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Whyhow Information Technology Co Ltd
Priority to CN201910918154.1A (CN112565165B)
Priority to EP20818510.8A (EP3962118A4)
Priority to JP2021571443A (JP2022535793A)
Priority to PCT/CN2020/094383 (WO2020244578A1)
Publication of CN112565165A
Priority to US17/536,703 (US20220084258A1)
Application granted
Publication of CN112565165B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/131: Protocols for games, networked simulations or virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 10/00: Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B 10/11: Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B 10/114: Indoor or close-range type systems
    • H04B 10/116: Visible light communication

Abstract

An interaction method and system based on an optical communication apparatus. In the method, a server obtains information from a first device and determines attribute information and position information of the first device from that information. The server sets a virtual object associated with the first device based on the attribute information of the first device, the spatial position information of the virtual object being determined according to the position information of the first device. Based on a predetermined matching rule, the server sends information related to the virtual object to a second device, where that information is usable by the second device to render the virtual object on its display medium based on the second device's position information and pose information determined by means of the optical communication apparatus.

Description

Interaction method and system based on optical communication device
Technical Field
The invention belongs to the field of augmented reality and information interaction, and particularly relates to an interaction method and system based on an optical communication device.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
In recent years, augmented reality (AR) technology has advanced considerably. Augmented reality technology, also known as mixed reality technology, superimposes virtual objects onto a real scene through computer technology so that the real scene and the virtual objects can be rendered in real time in the same picture or space, thereby enhancing the user's perception of the real world. Because augmented reality technology can augment the display of a real environment, it is increasingly applied in technical fields such as medical research and anatomical training, precision instrument manufacturing and maintenance, military aircraft navigation, engineering design, and remote robot control.
However, existing augmented reality technologies cannot provide a good way for different devices, or the users of different devices, to interact with one another. In addition, existing augmented reality technologies do not screen or restrict the devices with which a virtual object is associated.
To address at least one of the above problems, the present application proposes an interaction method and system based on an optical communication apparatus.
Disclosure of Invention
The solution of the invention provides a method and system for realizing interaction among users based on an optical communication apparatus. Information is obtained from a user device when that device scans the optical communication apparatus, and a presentable virtual object is set for the device based on that information. A user can then use his or her device to receive and view the virtual objects of nearby users, and users can interact and communicate by editing one another's virtual objects.
One aspect of the present invention relates to an interaction method based on an optical communication apparatus, including: a server obtains information from a first device, and determines attribute information of the first device and position information of the first device according to the information from the first device; the server sets a virtual object associated with the first device based on the attribute information of the first device, the spatial position information of the virtual object being determined according to the position information of the first device; and, based on a predetermined matching rule, the server sends information related to the virtual object to a second device, wherein the information related to the virtual object is usable by the second device to render the virtual object on its display medium based on the second device's position information and pose information determined by means of the optical communication apparatus.
Optionally, the position information of the first device is position information relative to the optical communication apparatus, position information in a venue coordinate system, or position information in a world coordinate system; and/or the position information and pose information of the second device are position information and pose information relative to the optical communication apparatus, position information and pose information in a venue coordinate system, or position information and pose information in a world coordinate system.
Optionally, the predetermined matching rule includes one or more of the following: a matching rule customized by the user; a matching rule set by the server; a default matching rule; and a random matching rule.
Optionally, the predetermined matching rule specifies: the devices to which information related to the virtual object associated with the first device is to be sent; and/or the devices whose associated virtual objects the second device is to receive information about.
Optionally, the server obtains information from the second device and determines attribute information of the second device and position information of the second device according to the information from the second device; the server sets another virtual object associated with the second device based on the attribute information of the second device, the spatial position information of the other virtual object being determined according to the position information of the second device; and the server sends information related to the other virtual object to the first device, wherein the information related to the other virtual object is usable by the first device to render the other virtual object on its display medium based on the first device's position information and pose information determined by means of the optical communication apparatus.
Optionally, the matching rule includes a setting of the content of the virtual object or of the other virtual object that is displayed on the corresponding second device or first device, or a setting of whether all or only part of the information related to the virtual object or the other virtual object is sent to the corresponding second device or first device.
Optionally, the attribute information of the first device includes information related to a user of the first device and/or information customized by the user of the first device.
Optionally, the optical communication apparatus associated with the location information of the first device and the optical communication apparatus associated with the location information of the second device are the same optical communication apparatus or different optical communication apparatuses, wherein the different optical communication apparatuses have a determined relative location relationship.
Optionally, determining the location information of the first device according to the information from the first device includes at least one of: extracting location information of the first device from information from the first device; obtaining location information of the first device by analyzing information from the first device; obtaining location information of the first device through a query using information from the first device.
Optionally, the information from the first device includes position information of the first device relative to the optical communication apparatus, wherein the first device determines its position information relative to the optical communication apparatus by capturing an image including the optical communication apparatus using an image capturing device and analyzing the image.
Optionally, the second device determines its position information and/or pose information by capturing an image including the optical communication apparatus using an image capturing device and analyzing the image.
Optionally, the related information of the virtual object further includes pose information of the virtual object.
Optionally, the pose information of the virtual object can be adjusted according to a change in position and/or pose of the second device relative to the virtual object.
Optionally, a certain orientation of the virtual object always faces the second device.
Optionally, the virtual object can be edited, and the method further comprises: the server sends, to the first device, the result of a user's editing, performed through the second device, of the virtual object associated with the first device.
Optionally, the method further includes: the server receives new information from the first device; updates the related information of the virtual object associated with the first device according to the new information; and sends the updated related information of the virtual object to the second device, so that the second device can render or update the virtual object on its display medium based on its position information and pose information relative to the optical communication apparatus and the updated related information of the virtual object.
Another aspect of the present invention relates to an interactive system based on an optical communication apparatus, including: one or more optical communication devices; and a server configured to implement the above method.
Another aspect of the present invention relates to an interaction method based on an optical communication apparatus, including: based on a predetermined matching rule, a device receives information related to a virtual object from a server, the information including spatial position information of the virtual object; the device determines its position information and pose information by means of the optical communication apparatus; and the device renders the virtual object on its display medium based on its position information and pose information and the information related to the virtual object.
Optionally, the spatial position information of the virtual object is spatial position information relative to the optical communication apparatus, spatial position information in a venue coordinate system, or spatial position information in a world coordinate system; and/or the position information and pose information of the device are position information and pose information relative to the optical communication apparatus, position information and pose information in a venue coordinate system, or position information and pose information in a world coordinate system.
Optionally, the spatial position information of the virtual object is determined based on position information of other devices.
Optionally, the device determines its position information and/or pose information by capturing an image including the optical communication apparatus using an image capturing device and analyzing the image.
Optionally, wherein the pose of the virtual object is adjustable according to a change in position and/or pose of the device relative to the virtual object.
A further aspect of the invention relates to a storage medium in which a computer program is stored which, when executed by a processor, can be used to carry out the above-mentioned method.
Yet another aspect of the invention relates to an electronic device comprising a processor and a memory, in which a computer program is stored which, when executed by the processor, can be used to carry out the above-mentioned method.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 illustrates an exemplary optical label;
FIG. 2 illustrates an exemplary optical label network;
FIG. 3 shows a schematic diagram of an optical label-based interaction method according to one embodiment;
FIG. 4 illustrates an optical label-based interaction method according to one embodiment;
FIG. 5 shows a schematic diagram of an optical label-based interaction method according to one embodiment;
FIG. 6 illustrates an optical label-based interaction method according to one embodiment; and
FIG. 7 illustrates an optical label-based interaction method according to one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Optical communication apparatuses are also referred to as optical labels, and the two terms are used interchangeably herein. An optical label can transmit information through different light-emitting modes and offers advantages such as a long recognition distance and relaxed requirements on visible-light conditions; moreover, the information transmitted by an optical label can change over time, providing large information capacity and flexible configuration. Compared with a traditional two-dimensional code, an optical label has a longer recognition distance and a stronger information interaction capability, thereby providing great convenience to users.
An optical label may typically include a controller and at least one light source, and the controller may drive the light source in different driving modes to communicate different information to the outside. Fig. 1 shows an exemplary optical label 100 comprising three light sources (a first light source 101, a second light source 102, and a third light source 103). Optical label 100 further comprises a controller (not shown in fig. 1) for selecting a respective driving mode for each light source in dependence on the information to be communicated. For example, in different driving modes, the controller may control the manner in which a light source emits light using different driving signals, such that when the optical label 100 is photographed using a device with imaging capability, the image of the light source may take on different appearances (e.g., different colors, patterns, or brightness). By analyzing the imaging of the light sources in the optical label 100, the driving mode of each light source at that moment, and thus the information transmitted by the optical label 100 at that moment, can be determined.
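As an illustration of the driving-mode idea described above, the following is a minimal Python sketch; the one-bit-per-frame scheme, the 16-bit ID width, and the function names are assumptions made for this example, not the patent's actual encoding.

```python
from typing import List

def drive_sequence(label_id: int, num_bits: int = 16) -> List[int]:
    """Controller side: map an optical label ID to per-frame driving modes
    (here 1 = one appearance, 0 = another, e.g. two brightness levels)."""
    return [(label_id >> i) & 1 for i in range(num_bits)]

def decode_sequence(observed: List[int]) -> int:
    """Device side: recover the ID from the appearances observed across frames."""
    label_id = 0
    for i, bit in enumerate(observed):
        label_id |= (bit & 1) << i
    return label_id

assert decode_sequence(drive_sequence(0x2A5F)) == 0x2A5F
```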
In order to provide corresponding services to users based on optical labels, each optical label may be assigned identification information (ID) by the manufacturer, manager, or user of the optical label, for uniquely identifying that optical label. In general, the controller in the optical label may drive the light source to transmit this identification information outwards, and a user may use a device to capture images of the optical label to obtain the identification information it transmits, so that a corresponding service can be accessed based on that identification information, for example, accessing a web page associated with the identification information of the optical label, acquiring other information associated with the identification information (e.g., position information of the optical label corresponding to the identification information), and so on. The devices referred to herein may be, for example, devices that a user carries or controls (e.g., a mobile phone with a camera, a tablet, smart glasses, AR glasses, a smart helmet, a smart watch, etc.), or machines capable of autonomous movement (e.g., a drone, a driverless car, a robot, etc.). A device can capture images of an optical label through its camera to obtain an image containing the optical label, and can identify the information transmitted by the optical label by analyzing the imaging of the optical label (or of each light source in the optical label) in that image.
An optical label may be installed at a fixed location, and its identification information (ID) and any other information (e.g., position information) may be stored in a server. In practice, a large number of optical labels may be organized into an optical label network. FIG. 2 illustrates an exemplary optical label network that includes a plurality of optical labels and at least one server, where information associated with each optical label may be stored on the server. For example, the server may maintain the identification information (ID) of each optical label together with any other information, such as service information related to the optical label and description information or attributes related to the optical label, such as its position information, physical size information, physical shape information, and pose or orientation information. A device may use the identification information of an identified optical label to query the server for further information related to that optical label. The position information of an optical label may refer to the actual position of the optical label in the physical world, which may be indicated by geographical coordinate information. The server may be a software program running on a computing device or on a cluster of computing devices. An optical label may be offline, i.e., it does not need to communicate with the server; of course, an online optical label capable of communicating with the server is also possible. Optical labels may be deployed at various locations as desired, for example at a reception desk, beside a chair, or on a desktop. When a user scans an optical label with a device, the identification information transmitted by the optical label can be recognized and used to access a corresponding service, for example, accessing a web page of a meeting associated with the identification information of the optical label, registering, or logging in and checking in.
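A minimal sketch of the server-side optical label registry just described; the record fields and function names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class OpticalLabelRecord:
    label_id: str
    position: tuple                 # geographic coordinates of the label
    physical_size_m: tuple          # (width, height), used for distance estimation
    pose: tuple = (0.0, 0.0, 0.0)   # orientation of the label
    service_info: dict = field(default_factory=dict)

registry: Dict[str, OpticalLabelRecord] = {}

def register_label(record: OpticalLabelRecord) -> None:
    registry[record.label_id] = record

def query_label(label_id: str) -> Optional[OpticalLabelRecord]:
    # A device that has decoded label_id from the imaging can query this record.
    return registry.get(label_id)
```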
In this document, when referring to a "user", it generally refers to a user carrying a corresponding device, and when referring to a "device", it generally refers to a device of a user. Thus, unless explicitly distinguished in the specification, the terms "user," "device," or "user equipment" in this application may be used interchangeably.
FIG. 3 shows a schematic diagram of an optical label-based interaction method according to one embodiment. As shown in fig. 3, a user carrying a device identifies the information of an optical label by scanning it, and obtains virtual objects associated with other devices or other people. A virtual object may include attribute information of another person or another device and may be superimposed on the real scene with the optical label as an anchor point; for example, the virtual object may be used to accurately mark the location of another user or another device in the real scene. The range within which virtual objects are received or sent may also be limited by setting a matching rule in advance.
Taking a large conference as an example, when a participant carrying a device (e.g., a mobile phone, smart glasses, etc.) enters the venue, the participant can scan an optical label at the entrance to check in. After receiving information from the participants' devices, the server can set virtual objects for the participants according to the different information from the different devices. For example, after an ordinary participant scans the optical label with a mobile phone, the server sets that participant's virtual object based on information such as the phone owner's name, occupation, identification number, and mobile phone number; for a speaker, the server may add fields such as organization and lecture topic to the virtual object; for a member of the organizing staff, the virtual object may include not only name and employee number but also the area and matters that staff member is responsible for. The server may determine the spatial position information of the virtual object associated with each participant's device according to the position of that device relative to the optical label; for example, the position of the virtual object may be set to the position of the participant's device, or to 1 meter above it. After a participant scans the optical label, the server determines, according to a predetermined matching rule, whether to send the virtual objects of other participants to that participant's device. For example, in view of personal privacy, the predetermined matching rule may specify that the virtual objects of ordinary participants are sent only to the organizing staff. In that case, only a staff member's device can receive and present an ordinary participant's virtual object, while an ordinary participant cannot receive the virtual objects of other participants after scanning the optical label.
In one embodiment, the predetermined matching rule may, for example, consider one or more of the following: attribute information about the device, location information of the device, time information when the device scans the optical label, and the like. For example, the matching rules may specify that only information about virtual objects within a certain distance range around the device is to be sent to the device.
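A minimal sketch of such a distance-based matching rule, assuming positions are already expressed in one common coordinate system; all names and values are illustrative.

```python
import math

def within_range(device_pos, object_pos, max_range_m: float) -> bool:
    """True if the virtual object lies within max_range_m of the device."""
    return math.dist(device_pos, object_pos) <= max_range_m

virtual_objects = [
    {"owner": "device-A", "pos": (1.0, 2.0, 0.0)},
    {"owner": "device-B", "pos": (40.0, 5.0, 0.0)},
]
device_pos = (0.0, 0.0, 0.0)
visible = [o for o in virtual_objects if within_range(device_pos, o["pos"], 10.0)]
# only device-A's virtual object would be sent to this device
```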
Railway passenger service provides another example. When a passenger carrying a device enters the station, the device can be used to scan and identify an optical label installed in the station, and the identified optical label information can be used to purchase tickets or to check tickets and enter. After the server receives the information from the passenger's device, it can set a virtual object for the passenger according to the attribute information of the passenger or the device; for example, after the passenger scans an optical label with a mobile phone, the server adds information such as the phone owner's name, identification number, occupation, departure station, destination station, train number, carriage, and seat number to the passenger's virtual object. Likewise, a crew member may scan the optical label with his or her device. Upon receiving the information from the crew member's device, the server may set a virtual object associated with the crew member based on the attribute information of that device, such as name, title, employee number, and the carriage the crew member is responsible for. The server may determine the spatial position information of the virtual object associated with the passenger or crew member based on the position of that person's device relative to the optical label; for example, the position of the virtual object may be set to the position of the device, or to 1 meter above it. When other people scan the optical label, the server determines, according to a predetermined rule, whether to send the virtual object of the passenger or crew member to their devices. For example, the predetermined matching rule may specify that an ordinary passenger's virtual object is sent only to the crew. In that case, only a crew member can view the passenger's virtual object after scanning the optical label and verify the passenger's identity and ticket eligibility. In one embodiment, it may also be specified that passengers can receive and view the virtual objects of crew members, so as to seek assistance when needed.
FIG. 4 illustrates an optical label-based interaction method according to one embodiment, the method comprising the steps of:
step 410: the server obtains information from the first device, determines attribute information of the first device and position information of the first device relative to the optical label based on the information.
The information from the first device may include attribute information of the first device, location information of the first device, and any other information.
In one embodiment, the attribute information of the first device may include information related to the device, such as a device name, an identification number, a network status, and the like. In one embodiment, the attribute information of the first device may also include information about the user associated with the device, such as the name, occupation, identity, gender, age of the owner of the device, account information for an application on the device, or information about an operation performed by the user using the device, such as a login page, a registered account, purchase information, and so forth. In one embodiment, the attribute information of the first device further includes user-customized information, such as a nickname, avatar, signature, and other personalized settings of the user.
The server may determine the position information of the first device relative to the optical label in various ways. In one embodiment, the server extracts the position information of the first device relative to the optical label from the information from the first device; here the first device may determine its position information relative to the optical label by capturing an image comprising the optical label and analyzing the image, and then send that position information to the server. The device may determine its position information relative to the optical label, which may include distance information and direction information, in various ways. In one embodiment, the device may determine its position information relative to the optical label by capturing an image that includes the optical label and analyzing the image. For example, the device may determine the relative distance between the optical label and the device from the imaging size of the optical label in the image and, optionally, other information (e.g., the actual physical size of the optical label, the focal length of the device's camera): the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance. The device may obtain the actual physical size of the optical label from the server using the identification information of the optical label, or the optical labels may have a uniform physical size that is stored on the device. The device may determine direction information of the device relative to the optical label from the perspective distortion of the optical label's imaging in the image and, optionally, other information (e.g., the imaging position of the optical label). The device may obtain the physical shape information of the optical label from the server using the identification information of the optical label, or the optical labels may have a uniform physical shape that is stored on the device. In one embodiment, the device may also directly obtain the relative distance between the optical label and the device through a depth camera, a binocular camera, or the like mounted on the device. The device may also use any other positioning method known in the art to determine its position information relative to the optical label. In one embodiment, the device may scan the optical label and determine its pose information relative to the optical label based on the imaging of the optical label; for example, the device may be considered to be facing the optical label when the imaging position or imaging area of the optical label is centered in the device's imaging field of view. The imaging direction of the optical label may further be taken into account when determining the pose of the device. As the pose of the device changes, the imaging position and/or imaging direction of the optical label on the device change accordingly, so pose information of the device relative to the optical label can be obtained from the imaging of the optical label on the device. In one embodiment, the position and pose information of the device relative to the optical label may also be determined as follows. In particular, a coordinate system may be established on the optical label, which may be referred to as the optical label coordinate system.
Some points on the optical label may be determined as spatial points in the optical label coordinate system, and the coordinates of these spatial points in the optical label coordinate system may be determined according to the physical size information and/or the physical shape information of the optical label. Some of the points on the optical label may be, for example, corners of the housing of the optical label, ends of a light source in the optical label, some identification points in the optical label, and so on. According to the object structure features or geometric features of the optical label, image points corresponding to these spatial points can be found in the image captured by the device camera, and the positions of these image points in the image can be determined. According to the coordinates of each spatial point in the optical label coordinate system and the positions of the corresponding image points in the image, and in combination with the intrinsic parameters of the device camera, the pose information (R, t) of the device camera in the optical label coordinate system at the time the image was captured can be calculated, where R is a rotation matrix that can be used to represent the orientation of the device camera in the optical label coordinate system, and t is a displacement vector that can be used to represent the position of the device camera in the optical label coordinate system. Methods of calculating R and t are known in the art; for example, R and t may be calculated using the 3D-2D PnP (Perspective-n-Point) method, which is not described in detail herein in order not to obscure the invention.
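As an illustration of the (R, t) computation described above, the following sketch uses OpenCV's solvePnP, a Perspective-n-Point solver; the label corner coordinates, detected image points, and camera intrinsics are placeholder values, not data from the patent.

```python
import numpy as np
import cv2  # pip install opencv-python

# Corners of the optical label housing, expressed in the optical label
# coordinate system (meters); a 0.30 m x 0.10 m label centered at the origin.
object_points = np.array([
    [-0.15, -0.05, 0.0],
    [ 0.15, -0.05, 0.0],
    [ 0.15,  0.05, 0.0],
    [-0.15,  0.05, 0.0],
], dtype=np.float64)

# The corresponding image points found in the captured photo (pixels).
image_points = np.array([
    [610.0, 420.0], [780.0, 425.0], [778.0, 480.0], [612.0, 476.0],
], dtype=np.float64)

# Camera intrinsics: focal lengths and principal point, in pixels.
camera_matrix = np.array([[1000.0,    0.0, 640.0],
                          [   0.0, 1000.0, 360.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume the image is already undistorted

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R_label_to_cam, _ = cv2.Rodrigues(rvec)

# solvePnP maps label coordinates into camera coordinates; inverting the
# rigid transform gives the camera's pose (R, t) in the optical label
# coordinate system, as in the text above.
R = R_label_to_cam.T
t = -R_label_to_cam.T @ tvec
```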
The position information of the device relative to the optical label may be the relative position information obtained when the device scans the optical label, or it may be new relative position information obtained after scanning, by measuring or tracking with built-in sensors (e.g., an acceleration sensor, a gyroscope, a camera) using methods known in the art (e.g., inertial navigation, visual odometry, SLAM, VSLAM, SFM). In one embodiment, the server may obtain the position information of the first device relative to the optical label by analyzing information from the first device. For example, the information from the first device may include an image taken by the first device containing the optical label, and the server may obtain the position information of the first device relative to the optical label by analyzing that image. In one embodiment, the server may use information from the first device to obtain the position information of the first device relative to the optical label through a query. For example, the information from the first device may be two-dimensional code identification information or identification information such as a seat number, based on which the server can look up the position information of the first device relative to the optical label. In one embodiment, the server may obtain location information (e.g., absolute location information) of the first device from the information from the first device, and derive the position of the first device relative to the optical label from that location information and the location information of the optical label. The location information of the first device may be, for example, its GPS location; although the accuracy of current GPS positioning is not very high, it may still be applicable in application scenarios where the accuracy requirement for the virtual object overlay is not high. Any information that can be used to obtain the location of the device (e.g., an image taken by the device containing an optical label, two-dimensional code identification information scanned by the device, a table number transmitted by the device, etc.) may be referred to as information relating to the location of the device.
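For the imaging-size approach mentioned earlier, the distance estimate reduces, under a pinhole camera model, to a single ratio; the numbers below are illustrative placeholders.

```python
def estimate_distance_m(focal_length_px: float,
                        label_height_m: float,
                        imaged_height_px: float) -> float:
    """Pinhole model: distance = focal length * real height / imaged height."""
    return focal_length_px * label_height_m / imaged_height_px

# A 0.10 m tall label imaged 50 px tall by a camera with a 1000 px focal
# length sits roughly 2 meters away:
print(estimate_distance_m(1000.0, 0.10, 50.0))  # ~2.0 meters
```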
Step 420: the server sets a virtual object associated with the first device based on the attribute information of the first device, the spatial position information of the virtual object being determined according to the position information of the first device relative to the optical label.
Upon receiving the information from the first device, the server may set a virtual object associated with the first device based on the attribute information of the first device, for example by setting the related information of the virtual object. The virtual object may be, for example, an icon, a picture, text, a number, an emoticon, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, and so forth. The related information of the virtual object includes the content of the virtual object, such as a picture, text, a number, or an icon, and may also include other information describing the virtual object, such as its shape, color, and size. In one embodiment, the related information of the virtual object may also include user information associated with the first device, such as the user's avatar, profession, specialty, nationality, and the like.
The related information of the virtual object may further include spatial position information of the virtual object. The server may set the spatial position information of the virtual object according to the position information of the first device relative to the optical label. In one embodiment, this spatial position information is preferably also position information relative to the optical label, e.g., the distance and direction of the virtual object relative to the optical label. The spatial position of the virtual object may simply be set to the position of the first device, or to some other position, for example a position near the first device, or a position expressed in a venue coordinate system.
In one embodiment, the server may further set the pose information of the virtual object, which may be the pose information of the virtual object with respect to the optical label, or its pose information in the real world coordinate system.
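A minimal sketch of this virtual object setup in step 420, assuming the "1 meter above the device" placement from the examples above; the record layout and names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    owner_device_id: str
    content: dict          # e.g. name, occupation, avatar ...
    position: tuple        # spatial position relative to the optical label (m)
    pose: tuple = (0.0, 0.0, 0.0)

def make_virtual_object(device_id: str, attributes: dict,
                        device_pos: tuple) -> VirtualObject:
    x, y, z = device_pos
    # Place the object 1 m above the device, per the examples above.
    return VirtualObject(device_id, attributes, (x, y, z + 1.0))

obj = make_virtual_object("device-A", {"name": "...", "occupation": "..."},
                          (2.0, 3.0, 0.0))
```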
Step 430: based on the predetermined matching rule, the server sends information about the virtual object to the second device, which can be used by the second device to render the virtual object on its display medium based on its position information and pose information relative to the light tag.
After the virtual object has been set for the first device, the server may send information related to the virtual object to other devices based on a predetermined matching rule. The predetermined matching rule may specify, for example: the devices to which information related to the virtual object associated with the first device is to be sent; and/or the devices whose associated virtual objects the second device is to receive information about.
In one embodiment, the predetermined matching rule may be customized by a user of the device. Taking a conference scene as an example, an ordinary participant can set a customized matching rule, sending the related information of his or her virtual object only to participants from a specific region or industry, and sending only part of it, or none of it, to peers or competitors. In one embodiment, the predetermined matching rule may also be set by the server. In railway passenger service, for example, the server may, with the consent of the passengers concerned, send the occupation information in the virtual objects of passengers of particular occupations (for example, doctors, nurses, or emergency personnel) to all passengers on the same train. In one embodiment, the predetermined matching rule may be a system default rule; for example, by default the server may send the virtual object to all devices that have scanned and identified the optical label. In one embodiment, the predetermined matching rule may be a matching rule randomly determined by the system; for example, the server may randomly send the virtual object to some device that has scanned and identified the optical label.
In one embodiment, when multiple predetermined matching rules exist at the same time, priorities may be set for the different rules, and the server applies them in order of priority. Take railway passenger service as an example, in which the virtual objects of different passengers are sent out in the order in which those passengers are to check tickets and board. Suppose a system default rule and a server-set rule exist at the same time: the default rule is that passengers in first-class or sleeper carriages check tickets first and passengers in second-class carriages afterwards, while the server-set rule is that military personnel and special passengers (such as the elderly, the frail, the sick, the disabled, and pregnant passengers) check tickets and board with priority. If the server-set rule is given higher priority than the system default rule, the server sends the virtual objects of military personnel and special passengers first, then those of passengers in first-class or sleeper carriages, and then those of passengers in second-class carriages. Of course, it is also possible to specify that the system default rule has the highest priority, followed by the server-set rule.
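A minimal sketch of priority-ordered rule application as described above: the server walks the rules from highest to lowest priority, and the first rule that decides a case wins. The rule contents and the role fields are illustrative assumptions taken from the conference example.

```python
def staff_priority_rule(sender: dict, receiver: dict):
    """Higher-priority rule: staff virtual objects go to everyone.
    Returns None when it does not decide the case."""
    return True if sender.get("role") == "staff" else None

def default_rule(sender: dict, receiver: dict):
    """Default: ordinary participants' objects go only to staff devices."""
    return receiver.get("role") == "staff"

rules = [staff_priority_rule, default_rule]   # ordered high -> low priority

def should_send(sender: dict, receiver: dict) -> bool:
    for rule in rules:
        decision = rule(sender, receiver)
        if decision is not None:              # first applicable rule wins
            return decision
    return False

assert should_send({"role": "staff"}, {"role": "participant"}) is True
assert should_send({"role": "participant"}, {"role": "participant"}) is False
```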
The server may send the information related to the virtual object to the second device in a variety of ways. In one embodiment, the server may send the information directly to the second device, for example over a wireless link. In one embodiment, a device may identify the information (e.g., identification information) conveyed by an optical label by scanning the optical label deployed in the scene, and use that information to access the server (e.g., over a wireless connection) to obtain the information related to the virtual object. In one embodiment, the server may send the information related to the virtual object to the device via optical communication, using the optical label itself.
The related information of the virtual object can be used by the device to render the virtual object on its display medium based on the position information and pose information the device has determined through the optical label.
In some scenarios, a virtual object associated with the second device may also be presented on the display medium of the first device. FIG. 5 shows a schematic diagram of an optical label-based interaction method according to one embodiment. As shown in fig. 5, users may receive one another's virtual objects based on optical labels. Taking a conference scene as an example, after the participants scan the optical labels with their devices, the server sets virtual objects for the different participants based on their different devices: for example, an ordinary participant's virtual object includes name, profession, company, etc.; a speaker's virtual object includes name, organization, title, lecture topic, phone number, email address, etc.; and an organizing staff member's virtual object includes name, title, employee number, the area in charge, the matters in charge, etc. Based on the set matching rules, the related information of ordinary participants' virtual objects can be sent to the devices of the organizing staff, the staff members' virtual objects can be sent to all participants, and the related information of the speaker's virtual object can be sent to ordinary participants and the organizing staff. In that case, an ordinary participant can receive the virtual objects of the organizing staff as well as the related information of the speaker's virtual object, while a staff member's device can receive the related information of the virtual objects of ordinary participants and/or the speaker.
Fig. 6 shows an optical label-based interaction method according to an embodiment, which can further present a virtual object associated with the second device on the display medium of the first device; steps 610 to 630 are similar to steps 410 to 430 of fig. 4 and are not described again here. The interaction method of fig. 6 further comprises the following steps:
step 640: the server receives information from the second device, determines attribute information of the second device and location information of the second device relative to the optical label based on the information from the second device.
Step 650: setting another virtual object associated with the second device based on the attribute information of the second device, the spatial position information of the other virtual object being determined according to the position information of the second device relative to the optical label.
Step 660: the information related to the other virtual object is sent to the first device, which can be used by the first device to render the other virtual object on its display medium based on its position information and pose information relative to the optical label.
In one embodiment, the matching rule may further include sending all or only part of the related information of the virtual object or of the other virtual object to the corresponding second device or first device. For example, in railway passenger service, the rule may send all of the information related to a passenger's virtual object to every crew member, but send only part of the information related to a crew member's virtual object, such as name, employee number, and the carriage in charge, to passengers. In one embodiment, the matching rule may further include a setting of the content of the virtual object or of the other virtual object that is displayed on the corresponding second device or first device. For example, in railway passenger service, the rule may display all attributes of a passenger's virtual object to every crew member, but display only part of the information related to a crew member's virtual object, such as name, employee number, and the carriage in charge, to passengers.
In the case of a virtual object having pose information, the pose of the virtual object may be adjusted with the position and/or pose of the device relative to the virtual object, for example, such that a certain orientation of the virtual object (e.g., the frontal direction of the virtual object) is always directed towards the device. In one embodiment, a direction from the virtual object to the device may be determined in space based on the location of the device and the virtual object, and the pose of the virtual object may be determined based on the direction. By the above method, the same virtual object can actually have respective postures for the devices at different positions.
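A minimal sketch of this facing behavior, assuming a flat-ground scene where the orientation can be reduced to a yaw angle; names and values are illustrative.

```python
import math

def facing_yaw(object_pos, device_pos) -> float:
    """Yaw (radians) that points the object's front toward the device."""
    dx = device_pos[0] - object_pos[0]
    dy = device_pos[1] - object_pos[1]
    return math.atan2(dy, dx)

# Recomputed per device and per movement, so the same virtual object can
# present a different yaw to each viewing device at the same time.
yaw = facing_yaw((0.0, 0.0, 1.0), (3.0, 4.0, 1.6))
```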
In one embodiment, after a virtual object has been superimposed, a device or its user may perform operations on the virtual object to change its properties. For example, the device or its user may move the virtual object's position, change its pose, change its size or color, add annotations to it, and so forth. In one embodiment, the server may update the information it stores about the virtual object according to the modified content and send the updated information to the relevant devices. In one embodiment, users may interact by editing the virtual objects associated with other users. For example, a user may upload information about an edited virtual object to the server, which sends it to the device associated with that virtual object; alternatively, the edit may be displayed on the user's own virtual object or on other virtual objects so as to be visible to other users. In one embodiment, a device or its user may delete a superimposed virtual object and notify the server. In one embodiment, a user may make privacy settings that limit the visible range of his or her editing operations.
Taking the above conference scenario as an example, during or after the speaker's lecture, an audience member may send a question to the speaker by clicking on the speaker's virtual object or by leaving a message on it; the question may be forwarded to the speaker by the server via email, text message, or other means, may be displayed on the virtual object associated with the audience member, or may be made known to the speaker in other ways. In addition, the participants can communicate with each other through the virtual objects. For example, a participant can edit another participant's virtual object to leave a phone number, an email address, and the like (the equivalent of exchanging business cards at a traditional business meeting); after the server receives such messages or annotations, it can update the related information of the virtual objects and send it to the corresponding participants' devices, or store it on the server or on the devices.
In the railway scenario, for example, when a crew member's device has received a passenger's virtual object, the crew member can notify the server by clicking on that virtual object. After receiving the notification, the server can remind the passenger to prepare to get off the train by text message, application or web message, or other means. When a passenger needs to pay a fare supplement during the journey, or wants to purchase food and drink, request cleaning, or obtain emergency service, the passenger can call a crew member by clicking on the crew member's virtual object. The call may be sent to the crew member's device via the server, may be displayed on the virtual object associated with the passenger, or may be made known to the crew member in any other manner. When a passenger disembarks, the on-board crew may delete the virtual object associated with that passenger.
In some cases, the information of the first device may change after the server has received information from it. For example, in railway passenger service, a passenger may leave the seat to go to the washroom or the dining car, or a passenger in a second-class or hard-seat carriage may, after boarding, upgrade to a first-class or sleeper carriage by paying a fare supplement. In such cases the information from the passenger's device changes. To enable the server to learn the latest information of the passenger or the device in time, the new information of the passenger's device may be sent to the server by scanning the optical label again or in some other way. The passenger's device may determine its latest position relative to the optical label in the various ways mentioned above (e.g., by capturing an image including the optical label and analyzing it), or it may track changes in its position through built-in sensors (e.g., an acceleration sensor, a gyroscope, a camera). The new location information of the passenger's device may be sent to the server periodically, or it may be sent whenever the difference between the device's new location and the location last sent to the server exceeds a preset threshold. The server can thus learn the new position of the passenger's device in time and update the spatial position information of the virtual object accordingly. The server then sends the new information related to the virtual object to the crew members' devices, which can use it to render or update the virtual object on their display media.
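A minimal sketch of the threshold-based reporting policy just described; the threshold value and the transport (here just a callback) are illustrative assumptions.

```python
import math

class LocationReporter:
    """Send a position update only when movement exceeds a preset threshold."""
    def __init__(self, send_fn, threshold_m: float = 2.0):
        self.send_fn = send_fn          # e.g. an HTTP POST to the server
        self.threshold_m = threshold_m
        self.last_sent = None

    def on_position(self, pos) -> None:
        if self.last_sent is None or math.dist(pos, self.last_sent) > self.threshold_m:
            self.send_fn(pos)
            self.last_sent = pos

reporter = LocationReporter(send_fn=print)
reporter.on_position((0.0, 0.0, 0.0))  # sent
reporter.on_position((0.5, 0.0, 0.0))  # suppressed: below threshold
reporter.on_position((3.0, 0.0, 0.0))  # sent
```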
Fig. 7 shows an optical label-based interaction method according to an embodiment, which can implement tracking of the first device's location; steps 710 to 730 are similar to steps 410 to 430 of fig. 4 and are not described again here. The interaction method of fig. 7 further comprises the following steps:
step 740: the server receives new information from the first device.
The new information may be any information that can be used to determine the position of the first device relative to the optical label, including displacement information of the first device obtained by tracking by a sensor built in the first device, or may be new attribute information of the first device.
Step 750: the server updates the related information of the virtual object associated with the first device according to the new information from the first device.
Step 760: the server sends the updated related information of the virtual object to the second device so that the second device can present or update the virtual object on its display medium based on its position information and posture information relative to the optical label and the updated related information of the virtual object.
In many scenarios there may be more than one optical label, but rather an optical label network as shown in fig. 2, and the server may know the location information of the individual optical labels or the relative positional relationships between them. In these scenarios, the optical labels scanned by the first device and the second device need not be the same optical label: the first device may scan a number of different optical labels at different times to provide or update its location information (when providing or updating location information, the device may also send the identification information of the associated optical label), and the second device may scan a number of different optical labels at different times to determine its location information and pose information.
In some embodiments above, the position information and the posture information relative to the optical tag (i.e. the position information and the posture information in the optical tag coordinate system) are used for description, but this is not limiting, and in some embodiments, the solution of the present invention may also use the position information and the posture information in other coordinate systems, for example, a real world coordinate system or a place coordinate system established for a place (e.g. an airport, a stadium, a hotel, etc.) may be used as long as the relative position and/or posture between the first device and the second device can be determined. In one embodiment, the device may obtain the position information and/or the pose information of the device in the world coordinate system or the venue coordinate system based on the position information and/or the pose information of the device relative to the optical tag and the position information of the optical tag itself in the world coordinate system or the venue coordinate system, wherein the position information of the optical tag itself may be obtained by the device from the server using the identification information of the optical tag. In one embodiment, the server may obtain the position information and/or the posture information of the device in the world coordinate system or the venue coordinate system based on the position information and/or the posture information of the device relative to the optical tag and the position information of the optical tag itself in the world coordinate system or the venue coordinate system, which are transmitted by the device.
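A minimal sketch of this coordinate conversion, assuming the server or device knows the label's rigid pose (rotation R and translation t) in the world or venue frame; all numbers are placeholders.

```python
import numpy as np

def label_to_world(p_label: np.ndarray,
                   R_label_in_world: np.ndarray,
                   t_label_in_world: np.ndarray) -> np.ndarray:
    """Map a point expressed in the optical label frame into the world frame."""
    return R_label_in_world @ p_label + t_label_in_world

R = np.eye(3)                        # label axes aligned with the world axes
t = np.array([120.0, 45.0, 3.0])     # where the label is mounted (world, m)
p_device_in_label = np.array([0.0, 0.0, -2.0])   # device 2 m in front of label
print(label_to_world(p_device_in_label, R, t))   # device position, world frame
```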
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., hard disk, optical disk, flash memory, etc.), which when executed by a processor, can be used to implement the methods of the present invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory in which a computer program is stored which, when being executed by the processor, can be used for carrying out the method of the invention.
References herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment," and the like, in various places throughout this document do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with features, structures, or characteristics of one or more other embodiments without limitation, as long as the combination is not logically inconsistent or unworkable. Expressions herein similar to "according to A," "based on A," "by A," or "using A" are meant to be non-exclusive, i.e., "according to A" may encompass "according to A only" as well as "according to A and B," unless it is specifically stated, or clear from context, that the meaning is "according to A only." In the present application, some illustrative operational steps are described in a certain order for clarity of explanation, but one skilled in the art will appreciate that these operational steps are not all essential, and that some of them may be omitted or replaced by others. Nor is it necessary that the operations be performed sequentially in the manner shown; rather, some of the operations may be performed in a different order, or in parallel, as desired, provided that the new arrangement is not logically or operationally infeasible.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. Although the present invention has been described by way of preferred embodiments, it is not limited to the embodiments described herein, and various changes and modifications may be made without departing from its scope.

Claims (23)

1. An interaction method based on an optical communication device comprises the following steps:
a server obtains information from a first device, and determines attribute information of the first device and position information of the first device according to the information from the first device;
the server sets a virtual object associated with the first device based on attribute information of the first device, spatial position information of the virtual object being determined according to position information of the first device;
based on a predetermined matching rule, the server sends information related to the virtual object to a second device, wherein the information related to the virtual object is usable by the second device to render the virtual object on its display medium based on its position information and pose information determined through an optical communication apparatus;
the server obtains information from a second device, and determines attribute information of the second device and position information of the second device according to the information from the second device;
the server sets another virtual object associated with the second device based on the attribute information of the second device, the spatial position information of the other virtual object being determined according to the position information of the second device;
the server sends information related to the other virtual object to the first device, wherein the information related to the other virtual object is usable by the first device to render the other virtual object on its display medium based on the position information and the pose information determined by the first device through the optical communication apparatus.
2. The method of claim 1, wherein,
the position information of the first device is position information relative to the optical communication apparatus, position information in a venue coordinate system, or position information in a world coordinate system; and/or
the position information and the pose information of the second device are position information and pose information relative to the optical communication apparatus, position information and pose information in a venue coordinate system, or position information and pose information in a world coordinate system.
3. The method of claim 1 or 2, wherein the predetermined matching rules comprise one or more of:
the matching rule set by the user is defined by the user;
a matching rule set by the server;
a default matching rule; and
a random matching rule.
4. The method according to claim 1 or 2, wherein the predetermined matching rule comprises:
a specification of which devices the information related to the virtual object associated with the first device is to be sent to; and/or
a specification of which devices' associated virtual objects the second device is to receive related information about.
5. The method according to claim 1, wherein the matching rule includes a setting of the content displayed by the virtual object or the other virtual object on the corresponding second device or first device, or a setting of whether all or only part of the related information of the virtual object or the other virtual object is sent to the corresponding second device or first device.
6. The method of claim 1 or 2, wherein the attribute information of the first device comprises information relating to a user of the first device and/or information customized by the user of the first device.
7. The method of claim 2, wherein the optical communication apparatus associated with the position information of the first device and the optical communication apparatus associated with the position information of the second device are the same optical communication apparatus or different optical communication apparatuses, wherein the different optical communication apparatuses have a determined relative positional relationship.
8. The method of claim 1 or 2, wherein determining the position information of the first device from the information from the first device comprises at least one of:
extracting the position information of the first device from the information from the first device;
obtaining the position information of the first device by analyzing the information from the first device;
obtaining the position information of the first device through a query using the information from the first device.
9. The method of claim 1 or 2, wherein the information from the first device comprises position information of the first device relative to an optical communication apparatus, wherein the first device determines its position information relative to the optical communication apparatus by capturing an image comprising the optical communication apparatus using an image capturing device and analyzing the image.
10. The method according to claim 1 or 2, wherein the second device determines its position information and/or pose information by capturing an image containing the optical communication apparatus using an image capturing device and analyzing the image.
11. The method according to claim 1 or 2, wherein the related information of the virtual object further comprises pose information of the virtual object.
12. The method of claim 1 or 2, wherein the pose of the virtual object is adjustable in accordance with changes in the position and/or pose of the second device relative to the virtual object.
13. The method of claim 12, further comprising: keeping a certain orientation of the virtual object always directed towards the second device.
14. The method of claim 1 or 2, wherein the virtual object is capable of being edited, the method further comprising:
the server sends, to the first device, the result of editing performed on the virtual object associated with the first device by a user through the second device.
15. The method of claim 1 or 2, further comprising:
the server receiving new information from the first device;
updating the related information of the virtual object associated with the first device according to the new information from the first device; and
sending the updated related information of the virtual object to the second device, so that the second device can present or update the virtual object on its display medium based on the position information and pose information it determines through the optical communication apparatus and on the updated related information of the virtual object.
16. An interactive system based on optical communication devices, comprising:
one or more optical communication devices; and
a server configured to implement the method of any one of claims 1-15.
17. An interaction method based on an optical communication device comprises the following steps:
based on a predetermined matching rule, a device receives information related to a virtual object from a server, wherein the information comprises spatial position information of the virtual object;
the device determines its position information and pose information through an optical communication apparatus;
the device renders the virtual object on its display medium based on its position information and pose information and on the information related to the virtual object; and
the device transmits information to the server, the information being used to determine attribute information of the device and position information of the device, wherein the server sets another virtual object associated with the device based on the attribute information of the device, the spatial position information of the other virtual object being determined according to the position information of the device, and the server sends information related to the other virtual object to another device, wherein the information related to the other virtual object is usable by the other device to present the other virtual object on its display medium based on its position information and pose information determined through the optical communication apparatus.
18. The method of claim 17, wherein,
the spatial position information of the virtual object is spatial position information relative to the optical communication apparatus, spatial position information in a venue coordinate system, or spatial position information in a world coordinate system; and/or
the position information and the pose information of the device are position information and pose information relative to the optical communication apparatus, position information and pose information in a venue coordinate system, or position information and pose information in a world coordinate system.
19. The method of claim 17 or 18, wherein the spatial position information of the virtual object is determined based on position information of other devices.
20. The method according to claim 17 or 18, wherein the device determines its position information and/or pose information by capturing an image containing the optical communication apparatus using an image capturing device and analyzing the image.
21. The method of claim 17 or 18, wherein the pose of the virtual object is adjustable according to a change in the position and/or pose of the device relative to the virtual object.
22. A storage medium having stored therein a computer program which, when executed by a processor, is operative to carry out the method of any one of claims 1-15 and 17-21.
23. An electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, is operable to carry out the method of any one of claims 1-15 and 17-21.
CN201910918154.1A 2019-06-05 2019-09-26 Interaction method and system based on optical communication device Active CN112565165B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201910918154.1A CN112565165B (en) 2019-09-26 2019-09-26 Interaction method and system based on optical communication device
EP20818510.8A EP3962118A4 (en) 2019-06-05 2020-06-04 Interaction method employing optical communication apparatus, and electronic device
JP2021571443A JP2022535793A (en) 2019-06-05 2020-06-04 Interaction method and electronic device based on optical communication device
PCT/CN2020/094383 WO2020244578A1 (en) 2019-06-05 2020-06-04 Interaction method employing optical communication apparatus, and electronic device
US17/536,703 US20220084258A1 (en) 2019-06-05 2021-11-29 Interaction method based on optical communication apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910918154.1A CN112565165B (en) 2019-09-26 2019-09-26 Interaction method and system based on optical communication device

Publications (2)

Publication Number Publication Date
CN112565165A CN112565165A (en) 2021-03-26
CN112565165B true CN112565165B (en) 2022-03-29

Family

ID=75029854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910918154.1A Active CN112565165B (en) 2019-06-05 2019-09-26 Interaction method and system based on optical communication device

Country Status (1)

Country Link
CN (1) CN112565165B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114124221B (en) * 2021-12-06 2023-05-26 成都航天通信设备有限责任公司 Visible light communication coding method based on camera, conference sign-in system and method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104819723A (en) * 2015-04-29 2015-08-05 京东方科技集团股份有限公司 Positioning method and positioning server
CN105579917A (en) * 2013-09-04 2016-05-11 依视路国际集团(光学总公司) Methods and systems for augmented reality
CN105973236A (en) * 2016-04-26 2016-09-28 乐视控股(北京)有限公司 Indoor positioning or navigation method and device, and map database generation method
CN106537220A (en) * 2014-03-05 2017-03-22 亚利桑那大学评议会 Wearable 3D augmented reality display with variable focus and/or object recognition
WO2017217595A1 (en) * 2016-06-14 2017-12-21 주식회사 엔토소프트 Server and system for implementing augmented reality image based on positioning information
CN107782314A (en) * 2017-10-24 2018-03-09 张志奇 A kind of augmented reality indoor positioning air navigation aid based on barcode scanning
CN108092950A (en) * 2016-11-23 2018-05-29 金德奎 A kind of location-based AR or MR social contact methods
CN108269307A (en) * 2018-01-15 2018-07-10 歌尔科技有限公司 A kind of augmented reality exchange method and equipment
CN108388637A (en) * 2018-02-26 2018-08-10 腾讯科技(深圳)有限公司 A kind of method, apparatus and relevant device for providing augmented reality service
CN108579084A (en) * 2018-04-27 2018-09-28 腾讯科技(深圳)有限公司 Method for information display, device, equipment in virtual environment and storage medium
CN112055034A (en) * 2019-06-05 2020-12-08 北京外号信息技术有限公司 Interaction method and system based on optical communication device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107979628B (en) * 2016-10-24 2020-04-21 腾讯科技(深圳)有限公司 Method, device and system for acquiring virtual article

Also Published As

Publication number Publication date
CN112565165A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
US20150012307A1 (en) Electronic reservation system and method
CN110491315A (en) A kind of science and technology museum intelligence guide system
EP2645751A1 (en) Communication system and method involving the creation of virtual spaces
CN111242704A (en) Method and electronic equipment for superposing live character images in real scene
JP2019121049A (en) Vehicle allocation device, vehicle allocation method, and program for allocating vehicle to predetermined place desired by user
WO2015082717A1 (en) Personalized guidance system
US9424361B2 (en) Information communication method and information communication apparatus
CN112565165B (en) Interaction method and system based on optical communication device
JP4464780B2 (en) Guidance information display device
US8874108B2 (en) Integrating mobile devices into a fixed communication infrastructure
TWI750822B (en) Method and system for setting presentable virtual object for target
EP3342133B1 (en) Method, devices and a system for gathering information for providing personalised augmented location information
WO2021256239A1 (en) Navigation device, navigation system, navigation method, program, and storage medium
JP2009205504A (en) Guide system, server system, guide method and program
CN112055034B (en) Interaction method and system based on optical communication device
CN112055033B (en) Interaction method and system based on optical communication device
KR101719934B1 (en) Integrating mobile devices into a fixed communication infrastructure
JP4978219B2 (en) Information transmission system and information server
CN112581630A (en) User interaction method and system
JP2019077444A (en) Flight body
WO2022121606A1 (en) Method and system for obtaining identification information of device or user thereof in scenario
US20220084258A1 (en) Interaction method based on optical communication apparatus, and electronic device
CN213069940U (en) Novel office building intelligence reception desk
JP7290031B2 (en) Route search server, route search system and route search method
JP7355214B2 (en) Server device, entrance/exit management system, entrance/exit management method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210326
Assignee: Shanghai Guangshi fusion Intelligent Technology Co.,Ltd.
Assignor: BEIJING WHYHOW INFORMATION TECHNOLOGY Co.,Ltd.
Contract record no.: X2022110000047
Denomination of invention: Interactive method and system based on optical communication device
Granted publication date: 20220329
License type: Common License
Record date: 20221012