CN112535392B - Article display system based on optical communication device, information providing method, apparatus and medium - Google Patents


Info

Publication number
CN112535392B
CN112535392B
Authority
CN
China
Prior art keywords
information
virtual object
optical communication
communication device
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910890857.8A
Other languages
Chinese (zh)
Other versions
CN112535392A (en)
Inventor
方俊
牛旭恒
李江亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Whyhow Information Technology Co Ltd
Original Assignee
Beijing Whyhow Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Whyhow Information Technology Co Ltd
Priority claimed from CN201910890857.8A
Publication of CN112535392A
Application granted
Publication of CN112535392B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47F: SPECIAL FURNITURE, FITTINGS, OR ACCESSORIES FOR SHOPS, STOREHOUSES, BARS, RESTAURANTS OR THE LIKE; PAYING COUNTERS
    • A47F10/00: Furniture or installations specially adapted to particular types of service systems, not otherwise provided for
    • A47F10/02: Furniture or installations specially adapted for self-service type systems, e.g. supermarkets
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B10/00: Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11: Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114: Indoor or close-range type systems
    • H04B10/116: Visible light communication

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide an article display system based on an optical communication apparatus, an article information providing method, an electronic device, and a storage medium. The article display system comprises an article display apparatus, an optical communication apparatus associated with it, and a server. A device identifies, from a captured image containing the optical communication apparatus, the identification information conveyed by it; uses the identification information to acquire from the server related information of one or more virtual objects associated with the optical communication apparatus, the related information including information identifying the items placed in the item display area corresponding to each virtual object and the superimposed position information of each virtual object; and, based on the position information and pose information of the device relative to the optical communication apparatus and the superimposed position information of the virtual objects, superimposes each virtual object on its corresponding area of the article display apparatus presented on the device's display medium, thereby helping a user quickly determine related information about the items placed on the article display apparatus.

Description

Article display system based on optical communication device, information providing method, apparatus and medium
Technical Field
The present invention relates to augmented reality or virtual reality technologies, and in particular, to an article display system based on an optical communication device, an article information providing method, an electronic device, and a storage medium.
Background
In stores, supermarkets, libraries, and the like, shelves are generally used to display articles. To enable a user to quickly find a required item, items are usually displayed on different shelves by category, with the categories (e.g., dried fruits, candies, beverages) marked on the shelves by physical labels. In many cases a shopping mall or supermarket also divides a single shelf into different areas and places items of different categories or subcategories in those areas. For example, a shelf holding beverages may be divided into regions for different types of beverages, such as purified water, mineral water, distilled water, soda water, fruit juice, functional beverages, beer, and wine. Currently, a user determines the categories of items placed in different areas of a shelf mainly by inspecting the item packaging, which is cumbersome and time-consuming. Moreover, for the endless stream of new articles coming onto the market, packaging alone makes it difficult for users to learn an article's characteristics, uses, advantages, and the like, and hence difficult to make a rational choice. In particular, if a user cannot understand the text on an item's packaging (a common situation for foreign visitors), it is difficult to quickly determine in which area of the shelf a desired item is located, or even to know the type or purpose of the items currently in view on the shelf.
Disclosure of Invention
The invention provides an article display system based on an optical communication apparatus, an article information providing method, an electronic device, and a storage medium, which can help a user quickly determine related information about articles placed on an article display apparatus.
This object is achieved by the following technical solutions:
according to a first aspect of embodiments of the present invention, there is provided an optical communication apparatus based article display system, comprising an article display apparatus, an optical communication apparatus associated with the article display apparatus, and a server, wherein the server is configured to: providing information relating to one or more virtual objects associated with an optical communication device to an apparatus for identifying the optical communication device; wherein each virtual object corresponds to one of the areas on the article display, the information relating to the virtual object comprising information identifying an article to be placed in the article display area corresponding to the virtual object and overlay position information for the virtual object, the overlay position information corresponding to or being associated with the position of the article display area, and wherein the information relating to the virtual object is usable by the apparatus to overlay the virtual object on the corresponding area on the article display presented by the display medium of the apparatus based on its position information and attitude information relative to the optical communication device.
In some embodiments of the invention, the superimposed position information of a virtual object may be determined from the position, relative to the optical communication apparatus, of the item display area to which the virtual object corresponds.
In some embodiments of the invention, the information relating to the virtual object may further comprise information identifying the extent of the item display region corresponding to the virtual object.
In some embodiments of the present invention, the information identifying the item may be described in a variety of languages.
In some embodiments of the invention, the server may obtain, from a device with which it interacts, a language option associated with the device and provide information identifying the item in a corresponding language based on the language option.
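As a hedged illustration of such a language-aware lookup, the sketch below stores the item-identifying information in several languages and returns the entry matching the device's language option. The record layout, language codes, and fallback behavior are assumptions for illustration, not the actual data format of the described system.

```python
# Illustrative sketch: serve item-identifying information in the language
# requested by the device. Records and language codes are hypothetical.

ITEM_INFO = {
    "item-001": {"zh": "矿泉水", "en": "Mineral water", "fr": "Eau minérale"},
}

def item_info_for_device(item_id, language_option, default="en"):
    """Return the item description in the device's language option,
    falling back to a default language when unavailable."""
    translations = ITEM_INFO[item_id]
    return translations.get(language_option, translations[default])

print(item_info_for_device("item-001", "fr"))  # Eau minérale
print(item_info_for_device("item-001", "de"))  # no German entry, falls back to English
```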
In some embodiments of the present invention, the article display system may further comprise a device for identifying the optical communication apparatus, the device being configured to: obtain the identification information conveyed by the optical communication apparatus from a captured image containing the optical communication apparatus; acquire related information of one or more virtual objects associated with the optical communication apparatus from the server using the identification information; and superimpose each virtual object on the corresponding area of the article display apparatus presented on the display medium of the device, based on the position information and pose information of the device relative to the optical communication apparatus and the superimposed position information of the one or more virtual objects.
In some embodiments of the invention, the apparatus may determine position information and pose information of the apparatus relative to the optical communication device from the captured image containing the optical communication device.
In some embodiments of the invention, said superimposing, based on the position information and the posture information of the apparatus relative to the optical communication device and the superimposed position information of the one or more virtual objects, each virtual object on each respective area on the article display presented by the display medium of the apparatus comprises: determining imaging positions and imaging sizes of the virtual objects on a display medium of the device based on the position information and the posture information of the device relative to the optical communication device and the superposed position information of the one or more virtual objects; and rendering each virtual object on a display medium of the device based on the imaging position and imaging size.
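The imaging-position and imaging-size computation described above can be sketched with a pinhole camera model. In this hypothetical sketch, R and t express the optical label coordinate system as seen from the device camera, the overlay position is given relative to the optical label, and all numeric values (intrinsics, sizes, pose) are illustrative assumptions rather than the system's actual parameters.

```python
import numpy as np

def imaging_position_and_size(overlay_pos_label, object_size, R, t, fx, fy, cx, cy):
    """Project a virtual object's overlay position (given relative to the
    optical label) into pixel coordinates on the display medium, and scale
    its on-screen size inversely with depth (pinhole model)."""
    p_cam = R @ np.asarray(overlay_pos_label, dtype=float) + t  # label -> camera frame
    depth = p_cam[2]
    u = fx * p_cam[0] / depth + cx
    v = fy * p_cam[1] / depth + cy
    pixel_size = fx * object_size / depth  # apparent size shrinks with distance
    return (u, v), pixel_size

R = np.eye(3)                      # device facing the label squarely
t = np.array([0.0, 0.0, 2.0])      # label origin 2 m in front of the camera
(u, v), size = imaging_position_and_size([0.5, 0.0, 0.0], 0.2, R, t,
                                         fx=800, fy=800, cx=640, cy=360)
print(u, v, size)  # 840.0 360.0 80.0
```

Re-running this projection whenever the device's position or pose changes is what keeps the rendered virtual object registered to its shelf area.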
In some embodiments of the invention, the device may be further configured to, after rendering the virtual object: re-determine the position information of the device relative to the optical communication apparatus; re-determine the pose information of the device; and correct the presentation of the virtual object on the display medium of the device based on the re-determined position information and pose information and the superimposed position information of the virtual object.
In some embodiments of the present invention, the related information of the virtual object may further include superimposed posture information of the virtual object, which is posture information of the virtual object with respect to the optical communication apparatus associated therewith.
In some embodiments of the invention, the information related to the virtual object may further comprise overlay pose information of the virtual object, and wherein the apparatus may be further configured to: determining imaging poses of the respective virtual objects on a display medium of the device based on position information and pose information of the device relative to the optical communication apparatus and the superimposed pose information of the one or more virtual objects; and rendering virtual objects on a display medium of the device based on the imaging position, imaging size, and imaging pose.
In some embodiments of the invention, the optical communication device may be mounted on the article display or located in a position near the article display.
According to a second aspect of the embodiments of the present invention, there is also provided an article information providing method based on an optical communication apparatus, including: identifying, from an image that is captured by a device and contains an optical communication apparatus associated with an article display apparatus, the identification information conveyed by the optical communication apparatus; acquiring related information of one or more virtual objects associated with the optical communication apparatus by using the identification information, wherein each virtual object corresponds to one area on the article display apparatus, and the related information of a virtual object includes information identifying the item placed in the item display area corresponding to the virtual object and superimposed position information of the virtual object, the superimposed position information corresponding to or being associated with the position of the item display area; and superimposing each virtual object on the corresponding area of the article display apparatus presented on the display medium of the device, based on the position information and pose information of the device relative to the optical communication apparatus and the superimposed position information of the one or more virtual objects.
In some embodiments of the invention, the superimposed position information of a virtual object may be determined from the position, relative to the optical communication apparatus, of the item display area to which the virtual object corresponds.
In some embodiments of the present invention, said superimposing, based on the position information and the posture information of the apparatus relative to the optical communication device and the superimposed position information of the one or more virtual objects, each virtual object on each corresponding area on the article display device presented by the display medium of the apparatus comprises: determining imaging positions and imaging sizes of the virtual objects on a display medium of the device based on the position information and the posture information of the device relative to the optical communication device and the superposed position information of the one or more virtual objects; and rendering each virtual object on a display medium of the device based on the imaging position and imaging size.
In some embodiments of the invention, the method may further comprise, after rendering the virtual object: re-determining the position information of the device relative to the optical communication apparatus; re-determining the pose information of the device; and correcting the presentation of the virtual object on the display medium of the device based on the re-determined position information and pose information and the superimposed position information of the virtual object.
According to a third aspect of embodiments of the present invention, there is also provided a storage medium in which a computer program is stored, which, when being executed by a processor, is operable to carry out the method according to the second aspect of embodiments of the present invention.
According to a fourth aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory, in which a computer program is stored, which, when executed by the processor, can be used to implement the method according to the second aspect of the embodiments of the present invention.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 illustrates an exemplary optical label;
FIG. 2 illustrates an exemplary optical label network;
FIG. 3 shows a flow diagram of a method for superimposing virtual objects in a real scene based on optical labels, according to an embodiment of the invention;
FIG. 4 shows a schematic of a configuration of an optical label based shelving system in accordance with one embodiment of the invention;
FIG. 5 illustrates an example scenario for providing shelf item information to a user using a shelving system in accordance with one embodiment of the invention;
FIG. 6 shows a flow diagram of a method for providing information on shelf items based on optical labels according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
For convenience of description, first, technologies related to the present invention will be briefly described to help understanding of embodiments of the present invention, but it should be noted that the technical descriptions do not necessarily constitute prior art.
Augmented Reality (AR) technology, also known as mixed reality technology, uses computer technology to apply virtual objects to a real scene so that the real scene and the virtual objects are rendered in real time into the same picture or space, thereby enhancing the user's perception of the real world. In one kind of augmented reality application, data information may be superimposed at a fixed location in the field of view; for example, a pilot learning to fly may wear a display helmet to view flight data superimposed on the real scene, with the data typically displayed at a fixed position in the field of view (e.g., always in the upper-left corner). Such augmented reality techniques lack flexibility. In another kind of augmented reality application, a real object in the real scene may first be identified, and a virtual object may then be displayed on the screen, superimposed on or near the real object. However, current augmented reality technologies have difficulty superimposing virtual objects at precise locations in a real scene, especially when a virtual object is to be superimposed at a relatively large distance from the identified real object.
Virtual Reality (VR) is a computer simulation technology for creating and experiencing a virtual world: a computer generates an interactive virtual scene, and a simulation system immerses the user in it. A virtual scene usually contains many virtual objects, whose superimposition or rendering positions change with the position or pose of the user. However, current virtual reality technologies generally track the position or pose of the user with sensors inside the device (e.g., acceleration sensors, gyroscopes), whose errors gradually accumulate over time, making it difficult to superimpose virtual objects accurately according to the user's actual position or pose.
Optical communication devices are also referred to as optical labels, and the two terms are used interchangeably herein. An optical label can transmit information through different light-emitting modes. It has the advantages of a long recognition distance and loose requirements on visible-light conditions, and the information it transmits can change over time, providing large information capacity and flexible configuration capability.
An optical label may typically include a controller and at least one light source, the controller may drive the light source through different driving modes to communicate different information to the outside. Fig. 1 shows an exemplary optical label 100 comprising three light sources (first light source 101, second light source 102, third light source 103, respectively). Optical label 100 further comprises a controller (not shown in fig. 1) for selecting a respective driving mode for each light source in dependence on the information to be communicated. For example, in different driving modes, the controller may control the manner in which the light source emits light using different driving signals, such that when the optical label 100 is photographed using the imaging-enabled device, the image of the light source therein may take on different appearances (e.g., different colors, patterns, brightness, etc.). By analyzing the imaging of the light sources in the optical label 100, the driving pattern of each light source at the moment can be analyzed, so that the information transmitted by the optical label 100 at the moment can be analyzed.
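As a loose illustration of how the imaged appearance of the light sources might be mapped back to transmitted information, the sketch below assumes a hypothetical encoding in which each of the three light sources of the Fig. 1 example contributes one bit (dark = 0, bright = 1). The actual driving modes and encoding scheme are not specified here; this is only a stand-in.

```python
# Illustrative sketch: decode a hypothetical three-light-source optical
# label frame, one bit per light source (dark = 0, bright = 1).

def decode_optical_label(brightness_levels, threshold=128):
    """Map the observed brightness of each light source to a bit and
    assemble the bits into an integer value (most significant bit first)."""
    bits = [1 if level >= threshold else 0 for level in brightness_levels]
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return bits, value

# Example: first and third light sources imaged bright, second dark.
bits, value = decode_optical_label([200, 40, 230])
print(bits, value)  # [1, 0, 1] 5
```

Since the driving modes, and hence the decoded values, can change over time, successive frames can carry successive fragments of a longer identifier.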
In order to provide a corresponding service to a user based on optical labels, each optical label may be assigned identification information (ID) by its manufacturer, manager, or user for uniquely identifying the optical label. Generally, the controller in the optical label drives the light source to transmit the identification information outwards, and a user may use a device to capture an image of the optical label and obtain the identification information it conveys, so that a corresponding service can be accessed based on that identification information, for example, visiting a web page associated with the identification information, or acquiring other information associated with it (e.g., the position information of the optical label corresponding to the identification information). The devices referred to herein may be, for example, devices carried or controlled by a user (e.g., mobile phones with cameras, tablets, smart glasses, AR glasses, smart helmets, smart watches) or machines capable of autonomous movement (e.g., drones, driverless cars, robots). The device captures an image containing the optical label through its camera and analyzes the imaging of the optical label (or of each light source in the optical label) through a built-in application to identify the information conveyed by the optical label.
The optical label may be installed in a fixed or variable location and may store identification Information (ID) of the optical label and any other information (e.g., location information) in the server. In reality, a large number of optical labels may be constructed into an optical label network. FIG. 2 illustrates an exemplary optical label network including a plurality of optical labels and at least one server, wherein information associated with each optical label may be maintained on the server. For example, identification Information (ID) or any other information of each optical label, such as service information related to the optical label, description information or attributes related to the optical label, such as position information, model information, physical size information, physical shape information, attitude or orientation information, etc. of the optical label may be saved on the server. The optical label may also have uniform or default physical size information and physical shape information, etc. The device may use the identification information of the identified optical label to obtain other information related to the optical label from a server query. The position information of the optical label may refer to an actual position of the optical label in the physical world, which may be indicated by geographical coordinate information. A server may be a software program running on a computing device, or a cluster of computing devices. The optical label may be offline, i.e., the optical label does not need to communicate with the server. Of course, it will be appreciated that an online optical tag capable of communicating with a server is also possible.
Optical labels may be used as anchor points for superimposing virtual objects into real or virtual scenes. A virtual object may be, for example, an icon, a picture, text, an emoticon, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, a clickable web link, and so on. In the following, superimposing a virtual object in a real scene is taken as an example, but this is not a limitation: the solution of the present invention is also applicable to superimposing virtual objects in a virtual scene. In one embodiment, a virtual scene may be rendered without relying on an optical label, and other virtual objects may be superimposed in this virtual scene using an optical label in the camera's field of view as an anchor point. In another embodiment, the virtual scene itself may be presented with the optical label as an anchor point, and other virtual objects may further be superimposed in that virtual scene. For example, the position and/or pose of the rendered virtual scene may be determined based on the position and/or pose of the optical label; the virtual scene itself may be regarded as one virtual object that can be superimposed or rendered in the solution of the invention.
Fig. 3 shows a method for superimposing virtual objects in a real scene based on optical labels, according to an embodiment, the method comprising the steps of:
step 301: the device obtains identification information of the optical label.
For example, the device may identify its identification information conveyed by the optical label by capturing and analyzing an image of the optical label. The identification information may be associated with one or more virtual objects.
Step 302: the device uses the identification information of the optical label to perform inquiry so as to obtain a virtual object to be superposed and superposition information of the virtual object, wherein the superposition information comprises superposition position information.
After recognizing the identification information of the optical tag, the device may use the identification information to issue a query request to the server. Information related to the optical label may be pre-stored at the server, which may include, for example, identification information of the optical label, description information of one or more virtual objects associated with the optical label (or the identification information of the optical label), overlay location information of each virtual object, and the like. The description information of the virtual object is related information for describing the virtual object, and may include, for example, a picture, a text, an icon, identification information of the virtual object, shape information, color information, size information, and the like included in the virtual object. Based on the description information, the device may present the corresponding virtual object. The superimposition position information of the virtual object may be position information with respect to the optical label (for example, distance information of the superimposition position of the virtual object with respect to the optical label and direction information with respect to the optical label), which indicates the superimposition position of the virtual object. The device can obtain the description information of the virtual object to be superimposed in the real scene currently presented by the device and the superimposition information of the virtual object by sending a query request to the server. In one embodiment, the virtual object description information stored at the server may simply be identification information of the virtual object, which the device, after obtaining the identification information, may use to obtain more detailed description information for the presentation of the virtual object, either locally at the device or from a third party. 
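The query in step 302 can be pictured as a simple keyed lookup on the server side: the device sends the recognized identification information and receives the description and overlay information of the associated virtual objects. The sketch below is an assumption-laden stand-in for that server; the identifier format, field names, and records are all illustrative.

```python
# Illustrative sketch of the server-side lookup for step 302.
# Identifier format, field names, and records are hypothetical.

OPTICAL_LABEL_DB = {
    "label-42": [
        {"description": "Mineral water", "overlay_position": (0.0, 0.5, 0.0)},
        {"description": "Fruit juice",   "overlay_position": (0.0, 1.0, 0.0)},
    ],
}

def query_virtual_objects(label_id):
    """Return the virtual objects associated with an optical label's
    identification information, or an empty list if none are stored."""
    return OPTICAL_LABEL_DB.get(label_id, [])

objects = query_virtual_objects("label-42")
print(len(objects))  # 2
```

Here each `overlay_position` is a position relative to the optical label, matching the superimposition position information described above.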
In one embodiment, the overlay information of the virtual object may further include overlay attitude information or overlay time information of the virtual object, and the overlay attitude information may be attitude information of the virtual object relative to the optical label and may also be attitude information of the virtual object in a real-world coordinate system.
The superimposition pose of a virtual object need not be specified by separate superimposition pose information; it may instead be derived from superimposition position information. For example, for a virtual object, the superimposition positions of several points on it can be specified, and the positions of these different points can be used to determine the pose of the virtual object with respect to the optical label or in the real-world coordinate system.
In one embodiment, the superimposition position information of a virtual object may be determined based on the position, relative to the optical label, of another real-world object located near the optical label. For example, the position of the virtual object relative to the optical label may be set to the position of a certain object relative to the optical label, or to a position near it. When the virtual object is then superimposed, it may cover that object in the real scene, or related virtual objects may be presented around or near the object, thereby achieving an accurate augmented reality effect.
Step 303: the device determines its position information relative to the optical label.
The device may determine its position information relative to the optical label in various ways; this position information may include distance information and direction information of the device relative to the optical label. Typically, the position information of the device relative to the optical label is actually the position information of the device's image capture component relative to the optical label. In one embodiment, the device may determine its position information relative to the optical label by capturing an image that includes the optical label and analyzing the image. For example, the device may determine the relative distance between the optical label and the device from the imaged size of the optical label in the image (the larger the image, the closer the distance; the smaller the image, the farther the distance), optionally together with other information (e.g., the actual physical size of the optical label, the focal length of the device's camera). The device may obtain the actual physical size information of the optical label from the server using the label's identification information, or the optical labels may have a uniform physical size that is stored on the device. The device may determine its direction information relative to the optical label from the perspective distortion of the optical label's image, optionally together with other information (e.g., the imaging position of the optical label). The device may obtain the physical shape information of the optical label from the server using the label's identification information, or the optical labels may have a uniform physical shape that is stored on the device. In one embodiment, the device may also directly obtain the relative distance between the optical label and the device through a depth camera or binocular camera mounted on the device.
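The distance estimate described above follows directly from the pinhole camera model: the imaged size of the optical label is inversely proportional to its distance. A minimal sketch, with illustrative focal length and sizes:

```python
# Illustrative sketch of pinhole-model distance estimation from the
# imaged size of the optical label. All numeric values are assumptions.

def distance_from_imaging_size(physical_size_m, imaged_size_px, focal_length_px):
    """Estimate the camera-to-label distance from the label's known
    physical size and its imaged size in pixels."""
    return focal_length_px * physical_size_m / imaged_size_px

# A 0.25 m wide label imaged 50 px wide with an 800 px focal length is
# 4 m away; halving the imaged size doubles the estimated distance.
print(distance_from_imaging_size(0.25, 50, 800))  # 4.0
print(distance_from_imaging_size(0.25, 25, 800))  # 8.0
```

This matches the observation above that a larger image implies a closer label and a smaller image a farther one.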
The device may also use any other positioning method known in the art to determine its position information relative to the optical label.
Step 304: the device determines its pose information.
The device may determine its pose information, which may be used to determine the extent or boundaries of the real scene captured by the device. Typically, the pose information of the device is actually pose information of an image capture device of the device. In one embodiment, the device may determine its pose information with respect to the optical label, e.g., the device may determine its pose information with respect to the optical label based on an image of the optical label, and may consider the device to be currently facing the optical label when the imaging position or imaging area of the optical label is centered in the imaging field of view of the device. The direction of imaging of the optical label may further be taken into account when determining the pose of the device. As the pose of the device changes, the imaging position and/or imaging direction of the optical label on the device changes accordingly, and therefore pose information of the device relative to the optical label can be obtained from the imaging of the optical label on the device.
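The idea that a centered imaging position means the device is facing the optical label can be made concrete with the same pinhole model: the horizontal offset of the label's image from the image center maps to a bearing angle. A hypothetical sketch (function and parameter names are illustrative, not from the original):

```python
import math

def bearing_to_label(u_px, image_width_px, focal_length_px):
    """Horizontal angle (radians) between the camera's optical axis and
    the label's imaging position; 0 means the label is centered in the
    imaging field of view, i.e., the device is facing the label."""
    cx = image_width_px / 2.0
    return math.atan2(u_px - cx, focal_length_px)

print(bearing_to_label(960.0, 1920, 1000.0))   # 0.0: label centered
print(bearing_to_label(1460.0, 1920, 1000.0))  # ~0.46 rad to the right
```

A full pose estimate would also use the vertical offset and the imaging direction of the label, as the paragraph above notes.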
In one embodiment, the position and pose information (which may be collectively referred to as pose information) of the device relative to the optical labels may also be determined as follows. In particular, a coordinate system may be established from the optical label, which may be referred to as the optical label coordinate system. Some points on the optical label may be determined as some spatial points in the optical label coordinate system, and the coordinates of these spatial points in the optical label coordinate system may be determined according to the physical size information and/or the physical shape information of the optical label. Some of the points on the optical label may be, for example, corners of a housing of the optical label, ends of a light source in the optical label, some identification points in the optical label, and so on. According to the object structure features or the geometric structure features of the optical label, image points corresponding to the space points can be found in the image shot by the equipment camera, and the positions of the image points in the image are determined. According to the coordinates of each space point in the optical label coordinate system and the positions of corresponding image points in the image, and by combining the internal reference information of the equipment camera, the pose information (R, t) of the equipment camera in the optical label coordinate system when the image is shot can be obtained through calculation, wherein R is a rotation matrix which can be used for representing the pose information of the equipment camera in the optical label coordinate system, and t is a displacement vector which can be used for representing the position information of the equipment camera in the optical label coordinate system. 
Methods of calculating R and t are known in the art; for example, R and t may be calculated using the 3D-2D PnP (Perspective-n-Point) method, which will not be described in detail herein in order not to obscure the present invention. The rotation matrix R and the displacement vector t may actually describe how the coordinates of a certain point are transformed between the optical label coordinate system and the device camera coordinate system. For example, through the rotation matrix R and the displacement vector t, the coordinates of a certain point in the optical label coordinate system can be converted into coordinates in the device camera coordinate system, which can further be converted into the position of an image point in the image. In this way, for a virtual object having a plurality of feature points (a plurality of points on the outline of the virtual object), the coordinates of the plurality of feature points in the optical label coordinate system (i.e., position information relative to the optical label) may be included in the superimposition information of the virtual object, and the coordinates of the plurality of feature points in the device camera coordinate system may be determined based on their coordinates in the optical label coordinate system, so that the respective imaging positions of the feature points on the device may be determined. Once the respective imaging positions of the plurality of feature points of the virtual object are determined, the position, size, or posture of the imaging of the entire virtual object can be determined accordingly.
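The transformation described here — label-frame coordinates to camera-frame coordinates via (R, t), then to an image point — can be sketched in a few lines. The pose and intrinsic values below are illustrative, not from the original; a real system would obtain R and t from a PnP solver as described above:

```python
def label_to_pixel(p_label, R, t, fx, fy, cx, cy):
    """Convert a feature point from the optical label coordinate system
    to the device camera coordinate system (X_cam = R * X_label + t),
    then project it to an image point with the pinhole model."""
    p_cam = [sum(R[i][j] * p_label[j] for j in range(3)) + t[i]
             for i in range(3)]
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Identity rotation, camera 2 m in front of the label origin: a feature
# point at the label origin images at the principal point.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]
print(label_to_pixel([0.0, 0.0, 0.0], R, t, 1000.0, 1000.0, 960.0, 540.0))
# (960.0, 540.0)
```

Applying this to each feature point on the outline of a virtual object yields the imaging positions from which the object's overall imaging position, size, and posture follow.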
Step 305: presenting the virtual object on a display medium of the device based on the superimposition information of the virtual object, the position information of the device relative to the optical label, and the attitude information of the device, thereby superimposing the virtual object in the real scene.
The superimposition position information of the virtual object reflects the position information of the virtual object to be superimposed relative to the optical label. After the superimposition position information of the virtual object and the position information of the device with respect to the optical label are obtained through the above steps, a three-dimensional spatial coordinate system with the optical label as an origin can be actually created, in which the device and the virtual object to be superimposed each have accurate spatial coordinates in the coordinate system. In one embodiment, the position information of the virtual object to be superimposed with respect to the device may also be determined based on the superimposition position information of the virtual object and the position information of the device with respect to the optical label. On the basis of the above, the virtual object may be superimposed in the real scene based on the pose information of the device. For example, the imaging size of the virtual object to be superimposed may be determined based on the relative distance of the device and the virtual object to be superimposed, and the imaging position of the virtual object to be superimposed on the device may be determined based on the relative direction of the device and the virtual object to be superimposed and the posture information of the device. Based on the imaging position and the imaging size, accurate superposition of the virtual object can be realized in a real scene. In one embodiment, the virtual object to be superimposed may have a default imaging size, in which case only the imaging position of the virtual object to be superimposed on the device may be determined without determining its imaging size. In a case where the superimposition information includes superimposition posture information of the virtual object, the posture of the superimposed virtual object may be further determined. 
In one embodiment, the position, size, or pose, etc., of the imaging of the virtual object to be superimposed on the device may be determined from the pose information (R, t) of the device (more precisely, the camera of the device) relative to the optical label calculated above. In one case, if it is determined that the virtual object to be superimposed is not currently in the field of view of the device (e.g., the imaging position of the virtual object is outside the display screen), the virtual object is not displayed.
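The field-of-view test mentioned here reduces to checking that the projected point lies in front of the camera and inside the screen bounds. A minimal sketch with illustrative names:

```python
def is_in_view(u, v, z_cam, screen_w, screen_h):
    """Display the virtual object only if it is in front of the camera
    (positive depth) and its imaging position is on the display screen."""
    return z_cam > 0 and 0 <= u < screen_w and 0 <= v < screen_h

print(is_in_view(500, 300, 2.0, 1920, 1080))   # True
print(is_in_view(2500, 300, 2.0, 1920, 1080))  # False: off-screen
print(is_in_view(500, 300, -1.0, 1920, 1080))  # False: behind the camera
```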
In the above embodiments, the optical labels are actually used as anchor points, based on which an accurate overlay of the virtual objects in the real scene is achieved. Moreover, even when the superimposition position of the virtual object is distant from the optical label, accurate superimposition can be achieved.
It will be appreciated by those skilled in the art that the device may also query the virtual object to be overlaid using the identification information of the optical label after determining its position information and/or pose information. In one embodiment, the device may, after determining its position information and/or pose information, perform a query using the identification information of the optical label and the position information and/or pose information of the device to determine the virtual object to be superimposed and the superimposition information of the virtual object. Therefore, the virtual objects needing to be overlaid can be screened according to the position and/or the posture of the equipment, and therefore network traffic needing to be transmitted is reduced.
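Screening virtual objects by the device's position before transmission might look like the following server-side sketch; the registry layout and field names are assumptions for illustration only:

```python
def query_virtual_objects(label_id, device_pos, registry, max_distance_m=10.0):
    """Return only the virtual objects whose superimposition positions lie
    near the querying device, reducing the network traffic transmitted."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [obj for obj in registry.get(label_id, [])
            if dist(obj["overlay_pos"], device_pos) <= max_distance_m]

registry = {"label-42": [
    {"name": "price tag",  "overlay_pos": (0.5, 0.0, 1.0)},
    {"name": "far banner", "overlay_pos": (50.0, 0.0, 1.0)},
]}
near = query_virtual_objects("label-42", (0.0, 0.0, 0.0), registry)
print([obj["name"] for obj in near])  # ['price tag']
```

The same pattern extends to pose-based screening, e.g., dropping objects that lie behind the device's current viewing direction.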
In one embodiment, the device may present the real world scene using a variety of possible ways. For example, the device may capture real world information via a camera and use the information to render the real world scene on a display screen on which an image of a virtual object may be superimposed. The device (e.g., smart glasses) may also reproduce the real scene not through the display screen, but simply through a prism, lens, mirror, transparent object (e.g., glass), etc., into which the image of the virtual object may be optically superimposed. The above-described display screen, prism, lens, mirror, transparent object, etc. may be collectively referred to as a display medium of the device on which the virtual object may be presented. For example, in one type of optical see-through augmented reality device, a user observes a real scene through a particular lens, while the lens may reflect an image of a virtual object into the user's eyes. In one embodiment, a user of the device may directly observe a real scene or part thereof, which does not need to be reproduced via any medium before being observed by the user's eyes, and virtual objects may be optically superimposed into the real scene. Thus, the real scene or a part thereof does not necessarily need to be rendered or reproduced by the device before being observed by the eyes of the user.
After superimposing the virtual object, the device may be translated and/or rotated, in which case its position and/or attitude changes may be measured or tracked by methods known in the art (e.g., inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.), e.g., using a built-in acceleration sensor, gyroscope, camera, etc., to adjust the display of the virtual object, e.g., to change its imaging position, imaging size, or viewing angle, or to move the virtual object into or out of the device's field of view. This is known in the art and will not be described in further detail. However, due to the limited accuracy of the built-in sensors of the device, the lack of texture in some scenes (e.g., a dark night with poor lighting, a white wall without texture, the blue sky, etc.), and the limitations of the algorithms themselves, the above-described methods for tracking the position and/or attitude of the device tend to cause drift of the superimposed virtual objects. For example, after a period of translation and/or rotation of the device, when the virtual object appears in the field of view again, it may be found that its current superimposition position deviates from the initial superimposition position. This deviation generally becomes more severe over time.
In one embodiment, the device may re-determine its position information relative to the optical tag and its pose information relative to the optical tag (e.g., when the optical tag re-enters the device field of view after leaving the device field of view, or at regular intervals while the optical tag remains in the device field of view) and re-determine the imaging position and/or imaging size of the virtual object based on the overlay position information of the virtual object, the position information of the device relative to the optical tag, and the pose information of the device relative to the optical tag, thereby correcting the overlay of the virtual object in the real scene. For example, if the imaging position or imaging size of the virtual object currently displayed by the device differs from the re-determined imaging position or imaging size or the difference exceeds a preset threshold, the device may superimpose the virtual object according to the re-determined imaging position and imaging size. In this way, the position of the superimposed virtual object can be prevented from drifting along with the rotation or movement of the device.
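The threshold-based correction described above can be sketched as follows; the threshold value and function names are illustrative, not from the original:

```python
def correct_overlay(current_pos, redetermined_pos, threshold_px=5.0):
    """Re-anchor the virtual object to the freshly computed imaging
    position when accumulated tracking drift exceeds a preset threshold."""
    dx = redetermined_pos[0] - current_pos[0]
    dy = redetermined_pos[1] - current_pos[1]
    if (dx * dx + dy * dy) ** 0.5 > threshold_px:
        return redetermined_pos  # drift too large: correct the overlay
    return current_pos           # difference tolerable: keep the display

print(correct_overlay((100.0, 100.0), (102.0, 100.0)))  # (100.0, 100.0)
print(correct_overlay((100.0, 100.0), (140.0, 100.0)))  # (140.0, 100.0)
```

Keeping small differences uncorrected avoids visible jitter; only deviations beyond the threshold trigger re-anchoring against the optical label.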
In some cases, there may be multiple virtual objects associated with the optical label, and overlapping, occlusion, etc. situations may occur when superimposing the virtual objects. In one embodiment, overlapping, occlusion, etc. situations between virtual objects may be considered when overlaying the plurality of virtual objects, and only un-occluded virtual objects or un-occluded portions of virtual objects are overlaid or rendered in the real scene. In another embodiment, it is also contemplated to set virtual objects or portions thereof that occlude other virtual objects to be semi-transparent and also overlay or render the occluded virtual objects or portions thereof so that the device user can view all of the virtual objects.
In one embodiment, when superimposing virtual objects, the device may, as needed, superimpose only a portion of the virtual objects within its current field of view rather than all of them. For example, for some virtual objects whose superimposition positions are very close to the device position, if the virtual object were superimposed in the real scene observed by the device, it might appear to have a very large size (objects image larger when near and smaller when far) and might occlude a large number of other objects, thereby affecting the use experience of the device user. For some virtual objects whose superimposition positions are very far from the device position, if the virtual object were superimposed in the real scene observed by the device, it might appear to be of very small size and difficult to view, so no superimposition is required. Some virtual objects located at the edge of the field of view of the device, or some virtual objects that are occluded or partially occluded by real objects or other virtual objects, may likewise not be superimposed.
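The near/far screening just described reduces to a distance band check; the limit values below are illustrative assumptions:

```python
def should_superimpose(distance_m, near_limit_m=0.5, far_limit_m=30.0):
    """Skip virtual objects so close they would dominate the view and
    occlude other objects, or so far they would be too small to view."""
    return near_limit_m <= distance_m <= far_limit_m

print(should_superimpose(0.2))   # False: too close, would fill the view
print(should_superimpose(3.0))   # True
print(should_superimpose(80.0))  # False: too far, too small to view
```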
In one embodiment, after superimposing a virtual object, the device or its user may perform an operation on the virtual object to change the properties of the virtual object. For example, the device or its user may move the position of the virtual object, change the pose of the virtual object, change the size or color of the virtual object, add annotations to the virtual object, and so forth. In one embodiment, after the device or its user changes the properties of the virtual object, the modified property information of the virtual object may be uploaded to the server. The server may modify the description information and the overlay information of the virtual object stored by the server based on the modified attribute information. In this manner, the modified virtual object may be superimposed in the real scene when other users later use their devices to scan the optical label.
In the above, the virtual object is superimposed in the real scene as an example, but it is understood that the solution of the present invention is also applicable to superimposing the virtual object in the virtual scene. Various virtual objects may be included in the virtual scene presented or displayed by the device, such as virtual three-dimensional scene models, virtual objects, virtual characters, and so forth. When various virtual objects are overlaid, the device can identify the optical label through the camera and overlay the various virtual objects by taking the optical label as an anchor point, and by the mode, the position or the posture of the user can be accurately tracked and the various virtual objects can be accurately overlaid. The light labels or icons or logos showing the location of the light labels may or may not be displayed in the virtual scene.
Referring now to fig. 4, a schematic structural diagram of an embodiment of a shelf system based on an optical communication device is shown, which is implemented using the method for superimposing a virtual object in a real scene or a virtual scene based on an optical label according to the above embodiments of the present invention. As shown in fig. 4, the shelf system includes an optical communication device 401 (also referred to as an optical label 401), a shelf 402 associated with the optical communication device 401, and a server 404. Also shown in fig. 4 is a user device 403 capable of interacting with the shelf system, where the user device 403 can identify the information conveyed by the optical label 401 in the shelf system and can also interact with the server 404 in the shelf system via a network. In one embodiment, the user device 403 may also be part of the shelf system. The optical label 401 may be mounted on the shelf 402 or at another location near the shelf 402 for scanning by an image acquisition device on a device carried by the user. The shelf 402 may be divided into a plurality of areas, such as area 1, area 2, …, area n, where different areas may hold different items. When a user wants to know and acquire information about the items placed on the shelf 402, the user can scan the optical label 401 associated with the shelf 402 through the image acquisition device on the device (e.g., user device 403) carried by the user to acquire an image containing the optical label 401; then analyze the collected image containing the optical label 401 to obtain the identification information conveyed by the optical label 401; the identification information conveyed by the optical label 401 may then be used to query the server 404 for and retrieve information related to one or more virtual objects associated with the optical label 401, where each virtual object may correspond to one of the areas on the shelf 402.
In some embodiments, the related information of a virtual object may include information for identifying the items placed in the shelf area corresponding to the virtual object, as well as the description information of the virtual object described above. The related information of an item may include a description and introduction of the item's name, usage, features, advantages, method of use, manufacturer, production date, and the like. In one embodiment, the related information may be described in multiple languages, and the related information may be presented alternately in different languages. In one embodiment, the server 404 may provide the item description information in the corresponding language according to the user language option acquired from the user device 403. The user language option obtained from the user device 403 may come from, for example, manual input or selection by the user, or from system configuration information of the user device 403 (e.g., language configuration information of the user device 403). For example, if the server 404 learns that the operating system of the user device 403 is using Chinese, a virtual object containing Chinese may be provided to the user device 403; if the server 404 learns that the operating system of the user device 403 is using Japanese, a virtual object containing Japanese may be provided to the user device 403. In this way, virtual objects that can be easily understood by different users can be presented. The description information of the virtual object may include, for example, the shape information, color information, size information, pictures, texts, icons, and the like of the virtual object mentioned above, so as to facilitate the device in presenting the corresponding virtual object.
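Serving the description in the language of the device's configuration can be sketched as a simple locale lookup with a fallback; the data layout and item texts are illustrative assumptions:

```python
def pick_description(descriptions, device_locale, default="en"):
    """Serve the item description matching the device's language
    configuration, falling back to a default language if unavailable."""
    lang = device_locale.split("-")[0].lower()
    return descriptions.get(lang, descriptions[default])

descriptions = {
    "en": "Green tea, 250 g",
    "zh": "绿茶，250克",
    "ja": "緑茶、250グラム",
}
print(pick_description(descriptions, "zh-CN"))  # 绿茶，250克
print(pick_description(descriptions, "fr-FR"))  # Green tea, 250 g (fallback)
```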
In some embodiments, a virtual object may exhibit information about items placed in the shelf area corresponding to the virtual object in a combination of one or more of the following forms: text, graphics, images, animations, audio, video, web page links, etc. For example, the virtual object may be in the form of a text label to identify the related information of the article, or in the form of an animation to identify the related information of the article. Or a combination of one or more of the above, such as in a geometric figure with an icon that when clicked plays an audio, animation or video segment. The specific form of the virtual object is not limited, and any form capable of displaying information related to the article can be adopted by the embodiment of the invention. The user device 403 may display the obtained information related to the one or more virtual objects on the display medium of the device carried by the user according to the arrangement order of the areas on the shelf 402, so as to facilitate the user to obtain the information of the corresponding items in the areas on the shelf 402.
In some preferred embodiments, the information related to the virtual object includes at least information identifying an item placed in a shelf area corresponding to the virtual object and overlay position information of the virtual object. The superimposition position information of the virtual object may be determined based on the position of the shelf area corresponding to the virtual object with respect to the optical label 401. For example, the superimposition position information of the virtual object may be set as the position information (including the distance information and the direction information with respect to the optical label 401) of a certain position point (for example, the center point) in the shelf area corresponding to the virtual object with respect to the optical label 401. In some embodiments, the relevant information of the virtual object may also include superimposed pose information of the virtual object, which may be pose information of the virtual object relative to the optical label 401 associated therewith. In some embodiments of the present invention, the information related to the virtual object may further include information for identifying a range of the shelf area corresponding to the virtual object, such as a geometric shape or a color block covering each area, so as to better distinguish each area on the shelf 402 in the rendered scene picture. The information related to the optical label 401 and all the information related to the one or more virtual objects associated with the optical label 401 may be preset on the server 404, or preset and stored in a third party, and may be obtained by the server 404 based on the identification information of the optical label 401.
As described above in conjunction with fig. 3, with the superimposition position information, superimposition attitude information, and the like of a virtual object, the virtual object can be superimposed in a corresponding area on the shelf 402 in the real scene screen including the shelf 402 presented via the display medium of the device carried by the user. More specifically, user device 403 may determine its position information and pose information relative to optical label 401 from the captured image that includes optical label 401; then, based on the position information and the posture information and the superposed position information of one or more virtual objects acquired from the server 404, the imaging position and the imaging size of each virtual object on the display medium of the device are determined; thereby rendering each virtual object on a display medium of the device based on the determined imaging position and imaging size. In some embodiments where the relevant information for the virtual object also includes the superimposed pose information for the virtual object, the user device 403 may also determine the imaging pose of each virtual object on the display medium of the device based on its position information and pose information relative to the optical label 401 and the superimposed pose information for one or more virtual objects; and rendering each virtual object on a display medium of the user device based on the determined imaging position, imaging size, and imaging pose of each virtual object.
In the above-described embodiment, the superimposition position information of the virtual object is determined based on the position of the shelf region corresponding to the virtual object with respect to the optical label 401. As shown in fig. 5, for the user, on the real scene image viewed through the display medium of the device carried by the user, corresponding virtual objects are respectively superimposed on each region of the shelf 402 to describe or show the related information of the items placed in each region. Through these virtual objects superimposed on the shelf 402, the user can not only quickly determine in which area of the shelf the desired item is located, but also know the usage, characteristics, advantages, etc. of each item on the shelf, thereby facilitating the user to quickly make a reasonable selection. This is particularly convenient for users in foreign or unfamiliar environments.
After superimposing the corresponding virtual objects for the various regions of shelf 402, the position and/or pose of the user device relative to optical label 401 may change as the user moves, in which case the real scene rendered on the device changes, and at the same time the rendered virtual objects change accordingly. In still other embodiments, in response to these changes, the user device 403 may also re-determine the position information and the posture information of the device carried by the user with respect to the optical communication apparatus, and correct the presentation of the respective virtual objects on the display medium of the device based on the re-determined position information and posture information of the device and the superimposed position information of the virtual objects, thereby bringing a better visual experience to the user.
Referring now to fig. 6, a flowchart of an optical communication device based shelf item information providing method according to an embodiment of the present invention is shown. As shown in fig. 6, in step S501, the identification information conveyed by an optical communication device associated with a shelf is recognized based on an image, collected by an apparatus, that contains the optical communication device. In step S502, the related information of one or more virtual objects associated with the optical communication device is acquired using the recognized identification information, wherein each virtual object corresponds to one of the areas on the shelf. The related information of a virtual object includes information for identifying the items placed in the shelf area corresponding to the virtual object and the superimposition position information of the virtual object, where the superimposition position information of the virtual object may be determined according to the position of the shelf area corresponding to the virtual object relative to the optical communication device. For the above description of the various information, reference may be made to the detailed description above in conjunction with fig. 4, which is not repeated herein. In step S503, the virtual objects are respectively superimposed on the corresponding areas of the shelf presented by the display medium of the apparatus, based on the position information and attitude information of the apparatus relative to the optical communication device and the superimposition position information of the one or more virtual objects. For the specific implementation of this step, reference may be made to the method for superimposing a virtual object in a real scene or a virtual scene based on an optical label introduced above with reference to fig. 3, which is not described herein again.
In still other embodiments, the method may further comprise, after rendering the respective virtual object: determining the position information and the posture information of the equipment carried by the user relative to the optical communication device again; and correcting the presentation of the respective virtual object on the display medium of the device based on the re-determined position information and attitude information and the superimposed position information of the respective virtual object.
The shelf system has been described as an example, but it is to be understood that the present invention is applicable to any article display system capable of displaying different articles in different areas, and is not limited to shelf systems. The article display system may have article displays therein, which may be, for example, one or more shelves, storage racks, containers, lockers, wardrobes, filing cabinets, drawers, etc., and even the wall or floor on which the articles are displayed may be considered an article display.
The device referred to herein may be a device carried or controlled by a user (e.g., a cell phone, a tablet, smart glasses, AR glasses, a smart helmet, a smart watch, etc.), but it is understood that the device may also be a machine capable of autonomous movement, e.g., a drone, an unmanned automobile, a robot, etc. The device may have mounted thereon an image capture device (e.g., a camera) and a display medium (e.g., a display screen).
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., hard disk, optical disk, flash memory, etc.), which when executed by a processor, can be used to implement the methods of the present invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory in which a computer program is stored which, when being executed by the processor, can be used for carrying out the method of the invention.
References herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment," or the like, in various places throughout this document are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with a feature, structure, or characteristic of one or more other embodiments without limitation, as long as the combination is not logical or operational. Expressions appearing herein similar to "according to a", "based on a", "by a" or "using a" are meant to be non-exclusive, i.e. "according to a" may encompass "according to a only", as well as "according to a and B", unless specifically stated or clearly known from the context that the meaning is "according to a only". In the present application, for clarity of explanation, some illustrative operational steps are described in a certain order, but one skilled in the art will appreciate that each of these operational steps is not essential and some of them may be omitted or replaced by others. It is also not necessary that these operations be performed sequentially in the manner shown, but rather that some of these operations be performed in a different order, or in parallel, as desired, provided that the new implementation is not logically or operationally unfeasible.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. Although the present invention has been described by way of preferred embodiments, the present invention is not limited to the embodiments described herein, and various changes and modifications may be made without departing from the scope of the present invention.

Claims (14)

1. An article display system based on an optical communication device, comprising an article display device, a unique optical communication device associated with the article display device, and a server, wherein:
the server is configured to provide information relating to one or more virtual objects associated with the optical communication device to a device that has identified the optical communication device associated with the article display device; wherein each virtual object corresponds to one of the areas on the article display device, the information relating to a virtual object includes information for identifying an article placed in the article display area corresponding to that virtual object, as well as superimposition position information and superimposition pose information of the virtual object, the superimposition position information of the virtual object being determined according to the position, relative to the optical communication device, of the article display area corresponding to the virtual object; and
wherein the information relating to the virtual object is usable by the device to determine an imaging position, an imaging size, and an imaging pose of the virtual object on a display medium of the device, based on position information and pose information of the device relative to the optical communication device associated with the article display device together with the superimposition position information and superimposition pose information of the virtual object, and to superimpose the virtual object on its corresponding area of the article display device presented on the display medium of the device, based on the determined imaging position, imaging size, and imaging pose.
2. The article display system of claim 1, wherein the information relating to the virtual object further comprises information identifying the extent of the article display area corresponding to the virtual object.
3. The article display system of claim 1, wherein the information for identifying the article is provided in multiple languages.
4. The article display system of claim 3, wherein the server obtains, from a device with which it interacts, a language option associated with that device, and, based on the language option, provides the information for identifying the article in the corresponding language.
5. The article display system of claim 1, further comprising:
a device for identifying the optical communication device, the device being configured to:
acquire identification information conveyed by the optical communication device from a captured image containing the optical communication device;
acquire, from the server using the identification information, information relating to one or more virtual objects associated with the optical communication device; and
superimpose each virtual object on its corresponding area of the article display device presented on the display medium of the device, based on position information and pose information of the device relative to the optical communication device and the superimposition position information of the one or more virtual objects.
6. The article display system of claim 5, wherein the device determines its position information and pose information relative to the optical communication device from the captured image containing the optical communication device.
7. The article display system of claim 5, wherein superimposing each virtual object on its corresponding area of the article display device presented on the display medium of the device, based on the position information and pose information of the device relative to the optical communication device and the superimposition position information of the one or more virtual objects, comprises:
determining an imaging position and an imaging size of each virtual object on the display medium of the device, based on the position information and pose information of the device relative to the optical communication device and the superimposition position information of the one or more virtual objects; and
rendering each virtual object on the display medium of the device based on its imaging position and imaging size.
8. The article display system of claim 5, wherein the device is further configured to, after rendering a virtual object:
re-determine the position information of the device relative to the optical communication device;
re-determine the pose information of the device; and
correct the presentation of the virtual object on the display medium of the device based on the re-determined position information and pose information and the superimposition position information of the virtual object.
9. The article display system of claim 1, wherein the information relating to the virtual object further comprises superimposition pose information of the virtual object, the superimposition pose information being pose information of the virtual object relative to the optical communication device associated with it.
10. The article display system of claim 1, wherein the optical communication device is mounted on, or located near, the article display device.
11. An article information providing method based on an optical communication device, comprising:
identifying, based on an image that is captured by a device and contains the unique optical communication device associated with an article display device, identification information conveyed by the optical communication device;
acquiring, using the identification information, information relating to one or more virtual objects associated with the optical communication device; wherein each virtual object corresponds to one of the areas on the article display device, the information relating to a virtual object includes information for identifying an article placed in the article display area corresponding to that virtual object, as well as superimposition position information and superimposition pose information of the virtual object, the superimposition position information of the virtual object being determined according to the position, relative to the optical communication device, of the article display area corresponding to the virtual object;
determining an imaging pose of each virtual object on a display medium of the device, based on position information and pose information of the device relative to the optical communication device and the superimposition pose information of the one or more virtual objects;
determining an imaging position and an imaging size of each virtual object on the display medium of the device, based on the position information and pose information of the device relative to the optical communication device associated with the article display device and the superimposition position information of the one or more virtual objects; and
superimposing, based on the determined imaging positions, imaging sizes, and imaging poses, each virtual object on its corresponding area of the article display device presented on the display medium of the device.
12. The method of claim 11, further comprising, after rendering a virtual object:
re-determining the position information of the device relative to the optical communication device;
re-determining the pose information of the device; and
correcting the presentation of the virtual object on the display medium of the device based on the re-determined position information and pose information and the superimposition position information of the virtual object.
13. An electronic device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the method of any one of claims 11-12.
14. A storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 11-12.
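The projection recited in claims 1, 7, and 11 — mapping a virtual object's superimposition position, given in the coordinate frame of the optical communication device, to an imaging position and imaging size on the device's display medium — can be sketched as a standard pinhole-camera projection. The sketch below is illustrative only: the function name, the intrinsic matrix, and the depth-scaled size model are assumptions for exposition and not part of the claims.

```python
import numpy as np

def project_virtual_object(K, R, t, overlay_pos, base_size):
    """Project a virtual object's superimposition position (expressed in the
    optical communication device's coordinate frame) onto the display medium.

    K           -- 3x3 camera intrinsic matrix of the device (assumed known)
    R, t        -- rotation (3x3) and translation (3,) taking tag-frame points
                   into the camera frame, i.e. the device's estimated position
                   and pose relative to the optical communication device
    overlay_pos -- (3,) superimposition position of the virtual object, meters
    base_size   -- nominal on-screen size of the object at 1 m depth, pixels

    Returns ((u, v), size) in pixels, or None if the point is not visible.
    """
    p_cam = R @ overlay_pos + t              # object position in camera frame
    if p_cam[2] <= 0:
        return None                          # behind the camera: not visible
    uvw = K @ p_cam                          # homogeneous pixel coordinates
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]  # imaging position on the display
    size = base_size / p_cam[2]              # imaging size shrinks with depth
    return (u, v), size
```

The correction step of claims 8 and 12 then amounts to re-estimating R and t from a newly captured image of the optical communication device and repeating the same projection, so that the rendered virtual object continues to track its article display area as the device moves.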
CN201910890857.8A 2019-09-20 2019-09-20 Article display system based on optical communication device, information providing method, apparatus and medium Active CN112535392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910890857.8A CN112535392B (en) 2019-09-20 2019-09-20 Article display system based on optical communication device, information providing method, apparatus and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910890857.8A CN112535392B (en) 2019-09-20 2019-09-20 Article display system based on optical communication device, information providing method, apparatus and medium

Publications (2)

Publication Number Publication Date
CN112535392A CN112535392A (en) 2021-03-23
CN112535392B true CN112535392B (en) 2023-03-31

Family

ID=75012278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910890857.8A Active CN112535392B (en) 2019-09-20 2019-09-20 Article display system based on optical communication device, information providing method, apparatus and medium

Country Status (1)

Country Link
CN (1) CN112535392B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009190881A (en) * 2008-02-18 2009-08-27 Toshiba Tec Corp Goods control system and information processor
JP4914528B1 (en) * 2010-08-31 2012-04-11 新日鉄ソリューションズ株式会社 Augmented reality providing system, information processing terminal, information processing apparatus, augmented reality providing method, information processing method, and program
WO2016060637A1 (en) * 2014-10-13 2016-04-21 Kimberly-Clark Worldwide, Inc. Systems and methods for providing a 3-d shopping experience to online shopping environments
US20170286993A1 (en) * 2016-03-31 2017-10-05 Verizon Patent And Licensing Inc. Methods and Systems for Inserting Promotional Content into an Immersive Virtual Reality World
CN206039629U (en) * 2016-05-19 2017-03-22 常州市筑友展示器材有限公司 Intelligence floor show machine
CN107784541A (en) * 2016-08-26 2018-03-09 阿里巴巴集团控股有限公司 The method and device of data object information is provided
WO2018151910A1 (en) * 2017-02-16 2018-08-23 Walmart Apollo, Llc Virtual retail showroom system
CN108805635A (en) * 2017-04-26 2018-11-13 联想新视界(北京)科技有限公司 A kind of virtual display methods and virtual unit of object
CN107818375B (en) * 2017-11-09 2021-10-01 陕西外号信息技术有限公司 Service reservation method and system with flow guide function based on optical label
JP7049809B2 (en) * 2017-11-10 2022-04-07 東芝テック株式会社 Information providing equipment and programs
CN109903129A (en) * 2019-02-18 2019-06-18 北京三快在线科技有限公司 Augmented reality display methods and device, electronic equipment, storage medium

Also Published As

Publication number Publication date
CN112535392A (en) 2021-03-23

Similar Documents

Publication Publication Date Title
US10083540B2 (en) Virtual light in augmented reality
US11257233B2 (en) Volumetric depth video recording and playback
US20200388080A1 (en) Displaying content in an augmented reality system
US20200090338A1 (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
EP3112986B1 (en) Content browsing
US11030808B2 (en) Generating time-delayed augmented reality content
CN104871214A (en) User interface for augmented reality enabled devices
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
CN102521852A (en) Showing method for target label independent of three-dimensional scene space
US20200211243A1 (en) Image bounding shape using 3d environment representation
US20200097068A1 (en) Method and apparatus for providing immersive reality content
EP3038061A1 (en) Apparatus and method to display augmented reality data
CN112535392B (en) Article display system based on optical communication device, information providing method, apparatus and medium
CN113168228A (en) Systems and/or methods for parallax correction in large area transparent touch interfaces
US20230377279A1 (en) Space and content matching for augmented and mixed reality
US11900621B2 (en) Smooth and jump-free rapid target acquisition
CN112055033B (en) Interaction method and system based on optical communication device
CN112055034B (en) Interaction method and system based on optical communication device
US11543665B2 (en) Low motion to photon latency rapid target acquisition
EP3510440B1 (en) Electronic device and operation method thereof
WO2020244576A1 (en) Method for superimposing virtual object on the basis of optical communication apparatus, and corresponding electronic device
TWI759764B (en) Superimpose virtual object method based on optical communitation device, electric apparatus, and computer readable storage medium
CN112053451A (en) Method for superimposing virtual objects based on optical communication means and corresponding electronic device
CN112053444A (en) Method for superimposing virtual objects based on optical communication means and corresponding electronic device
CN112417904B (en) Method and electronic device for presenting information related to an optical communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant