CN111162840A - Method and system for setting virtual objects around optical communication device - Google Patents

Method and system for setting virtual objects around optical communication device

Info

Publication number
CN111162840A
Authority
CN
China
Prior art keywords
information
optical communication
communication device
virtual object
scene
Prior art date
Legal status
Granted
Application number
CN202010252657.2A
Other languages
Chinese (zh)
Other versions
CN111162840B (en)
Inventor
方俊
李江亮
Current Assignee
Beijing Whyhow Information Technology Co Ltd
Original Assignee
Beijing Whyhow Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Whyhow Information Technology Co Ltd
Priority to CN202010252657.2A
Publication of CN111162840A
Application granted
Publication of CN111162840B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114Indoor or close-range type systems
    • H04B10/116Visible light communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and system for setting virtual objects around an optical communication device. The method comprises: determining a first optical communication device for which a related virtual object is to be set; obtaining scene information related to the first optical communication device; determining position information and attitude information of an apparatus relative to a second optical communication device; and setting a virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the second optical communication device and the scene information related to the first optical communication device.

Description

Method and system for setting virtual objects around optical communication device
Technical Field
The present invention relates to the field of augmented reality technologies, and in particular, to a method and system for setting virtual objects around an optical communication device.
Background
The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
Augmented Reality (AR) technology fuses virtual information with the real world: virtual objects generated by a computer are placed into the real world as observed through an electronic device (e.g., a mobile phone, tablet computer, smart glasses, AR glasses, smart helmet, or smart watch), so that the virtual objects and the real world complement each other and the real world is "augmented". To achieve a good augmented reality effect, one scheme uses an optical communication device as an anchor point for setting virtual objects located in the real scene around it. However, to accurately position a virtual object in space and blend it well with the surrounding real scene, a worker usually has to travel to the optical communication device and operate on site, which is tedious and inefficient.
Therefore, a method for conveniently and quickly setting the position of a virtual object in space is needed.
Disclosure of Invention
One aspect of the present invention relates to a method for setting a virtual object around an optical communication device, comprising: determining a first optical communication device for which a related virtual object is to be set; obtaining scene information related to the first optical communication device; determining position information and attitude information of an apparatus relative to a second optical communication device; and setting a virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the second optical communication device and the scene information related to the first optical communication device.
Optionally, the setting of a virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the second optical communication device and the scene information related to the first optical communication device comprises: using the position information and attitude information of the apparatus relative to the second optical communication device as the position information and attitude information of the apparatus relative to the first optical communication device; and setting a virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the first optical communication device and the scene information related to the first optical communication device.
Optionally, the setting of a virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the first optical communication device and the scene information related to the first optical communication device comprises: determining, based on that position and attitude information and the scene information, the scene that is observable when the apparatus is in the corresponding position and attitude relative to the first optical communication device; presenting the observable scene on a display medium of the apparatus; and setting a virtual object related to the first optical communication device based on the scene presented on the display medium of the apparatus.
Optionally, the determining of the position information and attitude information of the apparatus relative to the second optical communication device comprises: acquiring an image containing the second optical communication device by an image acquisition device of the apparatus; and determining the position information and attitude information of the apparatus relative to the second optical communication device by analyzing the image.
Optionally, the setting of the virtual object related to the first optical communication device comprises setting at least one of: position information of the virtual object in a scene related to the first optical communication device, attitude information of the virtual object in that scene, description information of the virtual object, and presentation time information of the virtual object.
Optionally, the position information and/or attitude information of the virtual object includes: position information and/or attitude information of the virtual object relative to the first optical communication device; or position information and/or attitude information of the virtual object in a spatial coordinate system.
Optionally, the method further comprises: determining new position information and attitude information of the apparatus relative to the second optical communication device.
Optionally, the new position information and attitude information of the apparatus relative to the second optical communication device are determined by: acquiring a new image containing the second optical communication device by the image acquisition device of the apparatus and analyzing that image; or deriving them from the initial position information and attitude information of the apparatus relative to the second optical communication device by tracking changes in the position and attitude of the apparatus.
Optionally, the method further comprises: obtaining information about an existing virtual object related to the first optical communication device, and the setting of the virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the second optical communication device and the scene information related to the first optical communication device comprises: setting a new virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the second optical communication device, the scene information related to the first optical communication device, and the information about the existing virtual object.
Optionally, wherein the scene information related to the first optical communication device comprises a picture or a three-dimensional model of a scene.
Optionally, wherein the scene information related to the first optical communication device comprises different scene information associated with different times.
Optionally, the setting of a virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the second optical communication device and the scene information related to the first optical communication device comprises: setting a virtual object related to the first optical communication device based on the position information and attitude information of the apparatus relative to the second optical communication device, the scene information related to the first optical communication device, and time information.
Another aspect of the invention relates to a system for setting virtual objects around an optical communication device, comprising: a first optical communication device installed at a first location and a second optical communication device installed at a second location; and an apparatus having a display medium and an image acquisition device mounted thereon, the image acquisition device being capable of capturing an image containing an optical communication device, wherein the apparatus is configured to implement the method described above.
Another aspect of the invention relates to a storage medium storing a computer program which, when executed by a processor, can be used to carry out the method described above.
Yet another aspect of the invention relates to an electronic device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, is operative to carry out the method described above.
By adopting the scheme of the invention, a user can conveniently and quickly set a virtual object in a specific scene remotely through an electronic device, without having to set the scene-related virtual object on site; in addition, the user can flexibly set the virtual object according to the varying conditions of the real scene.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1A illustrates an exemplary optical label;
FIG. 1B illustrates an exemplary optical label network;
FIG. 2 illustrates a system for setting virtual objects around an optical label according to one embodiment;
FIG. 3 illustrates a method for setting virtual objects around an optical label according to one embodiment;
FIG. 4A shows a schematic diagram of a first coordinate system and a second coordinate system according to one embodiment;
FIG. 4B illustrates a schematic diagram of taking the pose information of the device relative to the second optical label as the pose information of the device relative to the first optical label, according to one embodiment;
FIG. 4C illustrates a schematic diagram of scene information that can be observed by a device according to one embodiment;
FIG. 4D illustrates a diagram of remotely setting position information of virtual objects around an optical label via a cell phone, according to one embodiment;
FIG. 4E illustrates a diagram of setting description information of virtual objects around an optical label according to scene information, according to one embodiment;
FIG. 5 illustrates a method of setting virtual objects around an optical label according to one embodiment; and
FIG. 6 illustrates a method of setting virtual objects around an optical label, according to one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Optical communication devices are also referred to as optical labels, and the two terms are used interchangeably herein. An optical label transmits information by emitting different light; it offers a long recognition distance and loose requirements on visible-light conditions, has strong directivity, and the information it transmits can change over time, providing large information capacity and flexible configuration capability.
An optical label typically includes a controller and at least one light source; the controller can drive the light source in different driving modes to convey different information to the outside. Fig. 1A shows an exemplary optical label 100 that includes three light sources (a first light source 101, a second light source 102, and a third light source 103). Optical label 100 also includes a controller (not shown in Fig. 1A) for selecting a driving mode for each light source according to the information to be conveyed. For example, in different driving modes, the controller may control the light-emitting manner of a light source with different driving signals, so that when the optical label 100 is photographed with an apparatus having an image acquisition device, the image of that light source presents a different appearance (e.g., different color, pattern, or brightness). By analyzing the imaging of the light sources in the optical label 100, the driving mode of each light source at a given moment can be determined, and thus the information transmitted by the optical label 100 at that moment can be recovered. Fig. 1A is merely an example; an optical label may have a shape different from that shown in Fig. 1A and may have a different number and/or differently shaped light sources.
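As an editorial illustration of the paragraph above, the following minimal Python sketch shows one hypothetical way a captured frame sequence could be turned back into transmitted bits: each light source's imaged brightness in a frame is thresholded into one bit. The two-mode encoding, the threshold, and all names are assumptions for illustration, not the patent's actual driving modes.

```python
# Hypothetical decoding sketch: each light source is assumed to be driven in
# one of two modes per frame ("bright" -> bit 1, "dark" -> bit 0). The
# threshold and encoding are illustrative assumptions only.

BRIGHTNESS_THRESHOLD = 128  # assumed cutoff on an 8-bit grayscale image

def decode_frame(light_source_brightness):
    """Map each light source's mean imaged brightness in one frame to a bit."""
    return [1 if b >= BRIGHTNESS_THRESHOLD else 0 for b in light_source_brightness]

def decode_sequence(frames):
    """Concatenate per-frame bits across a capture sequence into a bit string."""
    bits = []
    for frame in frames:
        bits.extend(decode_frame(frame))
    return "".join(str(bit) for bit in bits)

# Three light sources (as in Fig. 1A) observed over two frames:
frames = [(200, 40, 220), (30, 210, 190)]
message_bits = decode_sequence(frames)  # "101011"
```

In practice the driving modes could equally encode information in color or blink frequency rather than brightness; the thresholding step would change accordingly.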
In order to provide corresponding services to users based on optical labels, each optical label may be assigned identification information (ID). In general, the controller in the optical label drives the light source to transmit the identification information outwards; an image acquisition device can capture one or more images containing the optical label, identify the identification information transmitted by the optical label by analyzing the imaging of the optical label (or of each light source in it), and then obtain other information associated with the identification information, for example the position information of the optical label corresponding to that identification information.
Information associated with each optical label may be stored in a server, and in practice a large number of optical labels may be organized into an optical label network. Fig. 1B illustrates an exemplary optical label network including a plurality of optical labels and at least one server. The server may maintain the identification information (ID) of each optical label along with other information, such as service information related to the optical label and description or attribute information such as its position, model, physical size, physical shape, and attitude or orientation. Optical labels may also have uniform or default physical size and physical shape information. Using the identification information of a recognized optical label, a device can query the server for further information related to that optical label. The position information of an optical label may refer to its actual position in the physical world, which may be indicated by geographic coordinate information. A server may be a software program running on a computing device, or a cluster of computing devices.
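The ID-to-attributes lookup described above can be sketched as follows. The record fields mirror the attributes the text lists (position, physical size, attitude, scene information), but the field names and values are assumptions for illustration only.

```python
# Illustrative server-side record store keyed by optical label ID. Field
# names and values are assumptions; the text only states that such
# attributes may be maintained on a server.

OPTICAL_LABEL_DB = {
    "label-0001": {
        "position": (39.9042, 116.4074, 12.0),  # geographic coords + height (assumed)
        "physical_size_m": (0.30, 0.10),        # width x height (assumed)
        "orientation_deg": 90.0,
        "scene_info": "exhibition-area-100",
    },
}

def query_label(label_id):
    """Return the stored record for a recognized label ID, or None if unknown."""
    return OPTICAL_LABEL_DB.get(label_id)
```

A device that has decoded the hypothetical ID "label-0001" from the label's light sources would call `query_label("label-0001")` to retrieve the label's position and scene information.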
The optical label may be used as an anchor point in real space, and virtual objects may be arranged in the scene around it on that basis; these virtual objects may have specific position and/or pose information relative to the optical label. A user may scan the optical label with a device (e.g., a cell phone) to determine the position and pose information of the device relative to the optical label, so that virtual objects located in the scene around the optical label can be displayed at appropriate positions on the display medium of the device. A virtual object may be, for example, an icon, a picture, text, an emoticon, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, or a clickable web link. In the present invention, when setting a virtual object in the real scene where a certain optical label is located, the user need not be in that real scene, but can set the virtual object remotely through another optical label outside the scene. Devices mentioned in this application include, for example, cell phones, tablets, smart glasses, AR/VR helmets, and smart watches. A device may comprise an image acquisition device (e.g., a camera), a display medium (e.g., an electronic screen), and a data processing system for storing, computing, outputting, or displaying data, for example comprising volatile or non-volatile memory and one or more processors. The device may further include a communication component for wired or wireless communication with external systems or other devices (e.g., a server) to transmit and receive data.
An embodiment of the present invention is described below with a mobile phone as an example device, an exhibition area of a museum as an example real scene in which a virtual object is to be set, and an office area as an example remote setting place, but it is understood that the solution of the present invention is equally applicable to any other device and any other scene.
Fig. 2 shows a system for setting virtual objects around an optical label according to an embodiment, comprising a first optical label 101, a second optical label 201, a server 202, and a device 203. The first optical label 101 is located in an exhibition area 100 of a museum containing objects A, B, and C; the second optical label 201 is located in an office area 200 outside the exhibition area. In the office area 200, a user 204 sets virtual objects in the real scene around the first optical label 101 (i.e., the exhibition area 100) by scanning the second optical label 201 with the device 203, and the device 203 can communicate with the server 202. In another embodiment, all or part of the functionality of the server 202 may be integrated into the device 203, in which case the server 202 may be omitted from the system.
FIG. 3 illustrates a method for setting virtual objects around an optical label according to one embodiment, the method comprising the following steps:
S310: determine a first optical label for which a related virtual object is to be set.
The first optical label for which a related virtual object is to be set may be determined in a number of ways.
In one embodiment, the device may determine the first optical label through its identification information (ID). The server may store the identification information, position information, and any other information for each optical label, and each optical label uniquely corresponds to its identification information. The device may use the identification information of the first optical label to query the server for the position information of the first optical label (e.g., its position in a scene coordinate system or in the world coordinate system), and may further determine the scene information related to the first optical label from that position information. In one embodiment, the server may also store scene information related to each optical label, and the device may directly query the server for the scene information related to the first optical label using its identification information. The scene information associated with an optical label may include, for example, one or more scene pictures (e.g., taken at different positions and/or from different perspectives), a three-dimensional scene model, or a map. Taking a museum exhibition area as an example, the scene information associated with the first optical label may include, for example, pictures of all or some of the exhibits in the area (e.g., objects A, B, and C shown in Fig. 2), a three-dimensional scene model or map of the area, and information about neighboring exhibition areas and surrounding facilities. In one embodiment, the scene information related to the first optical label may further include the relative positional relationship between specific objects in the scene and the first optical label.
In one embodiment, the device may also determine a first optical label to be provided with the virtual object related thereto by using the position information of the first optical label, where the position information of the first optical label may be specific position information of the first optical label, and for example, may be coordinate information in a specific scene coordinate system or in a world coordinate system; or it may be approximate location information, e.g. in a certain exhibition area/areas of a certain designated museum. In one embodiment, the device may use the location information of the first optical label to query and obtain the identification information of the first optical label from the server, and then determine the specific location information of the first optical label and/or the scene information related to the first optical label. In another embodiment, the device may determine scene information associated with the first optical label directly from its location information.
In one embodiment, the device may also determine the first optical label from the real scene surrounding it. The device may determine the identification information or position information of the first optical label by comparing real-scene information around it (e.g., a picture of the scene) with the scene information related to each optical label stored in the server (e.g., photographs, three-dimensional models, or maps of scenes).
In one embodiment, the first optical label to which the virtual object associated therewith is to be set may also be determined by the server.
In the museum exhibition area 100 where the first optical label is located, a three-dimensional spatial coordinate system (hereinafter the first coordinate system) may be established with the first optical label as its origin, so that the coordinate position of the first optical label is the origin O(0, 0, 0) and the relative positional relationship between an object in the surrounding scene and the first optical label can be expressed as the object's coordinates in the first coordinate system. FIG. 4A shows a schematic diagram of a first coordinate system and a second coordinate system according to one embodiment. As shown in Fig. 4A, the relative positional relationships between objects A, B, C and the first optical label 101 can be expressed as their coordinates in the first coordinate system, i.e., A(0, 10, 10), B(10, 0, 10), C(0, 0, 10).
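The first coordinate system above can be sketched directly in code, using the example coordinates from Fig. 4A; the dictionary layout is an illustrative assumption.

```python
# The first optical label is the origin of the first coordinate system, and
# scene objects are stored as coordinates relative to it (values from Fig. 4A).

FIRST_LABEL_ORIGIN = (0, 0, 0)

SCENE_OBJECTS = {
    "A": (0, 10, 10),
    "B": (10, 0, 10),
    "C": (0, 0, 10),
}

def offset_from_label(obj_name):
    """Relative position of a scene object with respect to the first optical label."""
    x, y, z = SCENE_OBJECTS[obj_name]
    ox, oy, oz = FIRST_LABEL_ORIGIN
    return (x - ox, y - oy, z - oz)
```

Because the label sits at the origin, each object's coordinates and its offset from the label coincide, which is exactly what makes the label usable as an anchor point.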
S320: obtain scene information related to the first optical label.
As described above, the device may obtain scene information related to the first optical label through identification information and/or location information of the first optical label.
S330: determine the position information and pose information of the device relative to the second optical label.
The device may determine its position information relative to an optical label in various ways; this position information may include distance information and direction information of the device relative to the optical label. Typically, the position information of the device relative to the optical label is actually the position information of the device's image acquisition device relative to the optical label. In one embodiment, the device may determine its position information relative to the optical label by capturing an image that includes the optical label and analyzing it. For example, the device may determine the relative distance between the optical label and itself from the size of the optical label's image (the larger the image, the closer the label; the smaller the image, the farther the label) and optionally from other information (e.g., the actual physical size of the optical label, the focal length of the device's camera). The device may obtain the actual physical size of the optical label from the server using the label's identification information, or the optical labels may have a uniform physical size that is stored on the device. In one embodiment, the device may also obtain the relative distance to the optical label directly through an on-board depth camera or binocular camera. The device may determine its direction information relative to the optical label from the perspective distortion of the optical label's image and optionally from other information (e.g., the imaging position of the optical label). The device may obtain the physical shape of the optical label from the server using the label's identification information, or the optical labels may have a uniform physical shape that is stored on the device.
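The size-based distance estimate described above follows the standard pinhole-camera relationship. The sketch below is a minimal version of that computation; the numeric values are illustrative, as the text gives no concrete numbers.

```python
# Pinhole-model distance estimate: the larger the label's image, the closer
# the label. Assumes the label's physical size is known (e.g. fetched from
# the server) and the focal length is expressed in pixels.

def estimate_distance(physical_size_m, image_size_px, focal_length_px):
    """distance = focal_length * physical_size / image_size (pinhole model)."""
    return focal_length_px * physical_size_m / image_size_px

# A 0.30 m wide optical label imaged 90 px wide by a camera with a
# 3000 px focal length is about 10 m away:
distance_m = estimate_distance(0.30, 90, 3000)  # 10.0
```

Doubling the imaged size halves the estimated distance, matching the "larger image, closer label" intuition in the text.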
The device may also use any other positioning method known in the art to determine its position information relative to the optical label.
A three-dimensional spatial coordinate system (hereinafter the second coordinate system) may be established with the second optical label as its origin, so that the coordinate position of the second optical label is the origin O'(0, 0, 0), and the position information of the device relative to the second optical label can be expressed as the device's coordinates in the second coordinate system. As shown in Fig. 4A, the position information of the device 203 relative to the second optical label 201 can be expressed as the device's coordinates D'(10, 10, 0) in the second coordinate system.
The device may also determine its pose information, which may be used to determine the extent or boundaries of the real scene captured by the device. Typically, the pose information of the device is actually pose information of an image capture device of the device. In one embodiment, the device may determine its pose information with respect to the optical label, e.g., the device may determine its pose information with respect to the optical label based on an image of the optical label, and may consider the device to be currently facing the optical label when the imaging position or imaging area of the optical label is centered in the imaging field of view of the device. The direction of imaging of the optical label may further be taken into account when determining the pose of the device. As the pose of the device changes, the imaging position and/or imaging direction of the optical label on the device changes accordingly, and therefore pose information of the device relative to the optical label can be obtained from the imaging of the optical label on the device.
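The idea above that a centered imaging position means the device is facing the label can be quantified with the same pinhole model: the label's pixel offset from the image center corresponds to an angular offset of the camera's optical axis. This is a hedged sketch of that relationship; the parameter names and values are assumptions.

```python
import math

def yaw_offset_deg(label_center_x_px, image_width_px, focal_length_px):
    """Horizontal angle between the camera's optical axis and the label's direction.

    0 degrees means the label images at the center of the field of view,
    i.e. the device is currently facing the label.
    """
    offset_px = label_center_x_px - image_width_px / 2
    return math.degrees(math.atan2(offset_px, focal_length_px))

# Label centered in a 4000 px wide frame -> device faces the label (0 deg);
# label 3000 px right of center with a 3000 px focal length -> 45 deg.
```

A full pose estimate would also use the label's imaging direction and perspective distortion, as the text notes; this sketch covers only the horizontal component.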
In one embodiment, the device may also send the captured image including the optical label to a server, which analyzes the image to determine position information and/or pose information of the device relative to the optical label.
S340: set a virtual object related to the first optical label based on the position and pose information of the device relative to the second optical label and the scene information related to the first optical label.
In one embodiment, the pose information of the device relative to the second optical label can be used as the pose information of the device relative to the first optical label, and the virtual object related to the first optical label can be set based on the pose information of the device relative to the first optical label and the scene information related to the first optical label, and the specific steps are as follows:
S341, taking the pose information of the device relative to the second optical label as the pose information of the device relative to the first optical label.
Taking the position information of the device relative to the second optical label as its position information relative to the first optical label amounts to taking the device's coordinate position in the second coordinate system as its coordinate position in the first coordinate system.
FIG. 4B illustrates using the pose information of a device relative to a second optical label as its pose information relative to a first optical label, according to one embodiment. As shown in fig. 4B, the second coordinate system may be translated onto the first coordinate system so that the coordinate position O' of the second optical label 201 coincides with the coordinate position O of the first optical label 101. The coordinate position D'(10, 10, 0) of the device 203 in the second coordinate system thereby becomes the coordinate position D(10, 10, 0) in the first coordinate system, which is the position information of the device 203 relative to the first optical label 101.
In one embodiment, the attitude information of the device relative to the second optical label may likewise be taken as its attitude information relative to the first optical label. As shown in fig. 4B, if the attitude of the device 203 relative to the second optical label 201 is a 45° frontal elevation, then the attitude of the device 203 relative to the first optical label 101 is also a 45° frontal elevation.
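The substitution in step S341 can be written as a coordinate mapping. Reusing the second-frame coordinates unchanged is the document's approach; the optional exact translation shown here is an added illustration assuming both labels' world positions happen to be known:

```python
def second_to_first_frame(point_in_second, first_label_world=None,
                          second_label_world=None):
    """Map a point from the second label's coordinate system into the
    first label's. Per step S341 the coordinates are simply reused
    (the two origins are made to coincide); when both labels' world
    positions are known, an exact translation could be applied instead."""
    if first_label_world is None or second_label_world is None:
        return point_in_second  # the document's approximation
    offset = tuple(s - f for s, f in zip(second_label_world, first_label_world))
    return tuple(p + o for p, o in zip(point_in_second, offset))

# D'(10, 10, 0) in the second frame is treated as D(10, 10, 0) in the first.
d_first = second_to_first_frame((10, 10, 0))
```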
S342, determining, from the pose information of the device relative to the first optical label and the scene information related to the first optical label, the scene that can be observed when the device is in that pose, and presenting this scene on a display medium of the device.
The field of view of the device's image capture component (e.g., a camera) may be determined from the pose information of the device relative to the first optical label. If the scene related to the first optical label lies within the device's field of view, the device can observe it; if it lies outside that field of view, the device cannot. The device may present the observable scene on its display medium.
FIG. 4C shows a scene that can be observed by a device according to one embodiment. As shown in fig. 4C, based on the position of the device 203 relative to the first optical label 101, i.e., the coordinate position D(10, 10, 0) in the first coordinate system, and its attitude relative to the first optical label 101, e.g., a 45° frontal elevation, it can be determined that object C in the scene is within the field of view of the device 203 while objects A and B are outside it, so the device 203 renders only object C on its display medium.
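The visibility test of step S342 can be approximated with a simple cone-shaped field of view; a real camera frustum is rectangular, so this is only a sketch, and the positions and angle below are illustrative:

```python
import math

def is_in_view(device_pos, view_dir, fov_deg, obj_pos):
    """Return True if obj_pos lies inside the device's (symmetric, conical)
    field of view -- a simplification of the frustum test implied by
    step S342. view_dir need not be normalised."""
    to_obj = [o - d for o, d in zip(obj_pos, device_pos)]
    norm_v = math.sqrt(sum(c * c for c in view_dir))
    norm_o = math.sqrt(sum(c * c for c in to_obj))
    if norm_o == 0:
        return True  # object coincides with the device
    cos_angle = sum(v * o for v, o in zip(view_dir, to_obj)) / (norm_v * norm_o)
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

# Device looking along +Y with a 90-degree cone: an object straight ahead
# is visible, one directly to the side is not.
ahead = is_in_view((0, 0, 0), (0, 1, 0), 90, (0, 5, 0))
aside = is_in_view((0, 0, 0), (0, 1, 0), 90, (5, 0, 0))
```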
S343, setting the virtual object related to the first optical label based on the scene presented on the display medium of the device.
The user may set information related to the virtual object through the scene presented on the display medium of the device. In one embodiment, this information may comprise position information of the virtual object in the scene related to the first optical label. The position of the virtual object may be a position relative to the optical label (for example, distance and direction information relative to the optical label) or a position in a spatial coordinate system of the real scene. In one embodiment, the position of the virtual object may be determined from the position of an object presented on the display medium; for example, the position of an object in the scene (i.e., its coordinate position in the first coordinate system) may be set as the position of the virtual object, in which case the virtual object presented on the display medium overlays the corresponding object in the real scene. In one embodiment, the position of the virtual object may instead be set near the position of an object, in which case the virtual object presented on the display medium appears around or near the corresponding object, thereby achieving an accurate augmented-reality effect.
In one embodiment, the information related to the virtual object may further include pose information of the virtual object in a scene related to the first optical label, where the pose may be a pose of the virtual object with respect to the optical label, a pose of the virtual object with respect to the device, or a pose of the virtual object in a spatial coordinate system of the real world.
In one embodiment, the user may set the position or pose of the virtual object by operating on the display medium (e.g., clicking, double-clicking, sliding, or rotating). FIG. 4D illustrates remotely setting the position information of a virtual object around an optical label via a mobile phone, according to one embodiment. As shown in fig. 4D, the user 204 selects the position of object C as the position of the virtual object (marked by a cross in fig. 4C) by clicking on the upper portion of the screen of the device 203. In another embodiment, the user may select the position or pose of the virtual object through gestures or voice, which suits devices such as smart glasses on which operating directly on the display medium is inconvenient.
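The click-to-select interaction of Fig. 4D can be sketched by picking the scene object whose projected screen position is nearest the tap; the object names and pixel coordinates are invented for illustration:

```python
def pick_object(click_xy, projected_objects):
    """Given the on-screen click position and each candidate object's
    projected screen position, return the name of the nearest object --
    a minimal stand-in for the click-to-select interaction.
    projected_objects: dict of name -> (x, y) in screen pixels."""
    def sq_dist(p):
        return (p[0] - click_xy[0]) ** 2 + (p[1] - click_xy[1]) ** 2
    return min(projected_objects, key=lambda name: sq_dist(projected_objects[name]))

# The user taps near the top of the screen, where object C is drawn.
chosen = pick_object((160, 80), {"A": (40, 400), "B": (300, 420), "C": (150, 90)})
```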
In one embodiment, the information related to the virtual object may further include description information of the virtual object, such as a picture, text, an icon, identification information, shape information, color information, and size information comprised by the virtual object. Based on the description information, the device is able to render the corresponding virtual object. The user may set the description information of virtual objects in the scene according to the scene information related to the first optical label. Fig. 4E shows setting the description information of a virtual object around an optical label according to scene information, according to one embodiment. As shown in fig. 4E, the user 204 may set the description information of the virtual object corresponding to object C as "blue and white porcelain" according to the attributes of object C.
In one embodiment, the information related to the virtual object may also include presentation time information, so that different virtual objects can be presented at different times. The presentation time of a virtual object may be, for example, a time period, represented by a presentation start time and a presentation end time, indicating the lifetime of the virtual object in the real scene. Over time, each virtual object is presented in, or removed from, the real scene according to its presentation time information: for example, a virtual object may be presented in the real scene when its lifetime begins and deleted from the real scene when its lifetime ends. This greatly improves the flexibility and customizability of virtual objects.
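A minimal sketch of lifetime handling, assuming times are plain numbers (e.g., hours) rather than whatever timestamp format the system actually uses:

```python
def active_objects(virtual_objects, now):
    """Filter virtual objects by their presentation time information:
    an object is rendered only between its start and end times."""
    return [o["name"] for o in virtual_objects
            if o["start"] <= now < o["end"]]

# Two illustrative objects with hour-based lifetimes.
objs = [
    {"name": "daytime banner", "start": 8, "end": 18},
    {"name": "night sign", "start": 18, "end": 24},
]
visible_at_noon = active_objects(objs, 12)
```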
Information relating to the virtual object may be associated with the first optical label. In one embodiment, the device may send the information about the set virtual object to the server, and the server may store it in association with other information about the first optical label (e.g., the identification information and position information of the first optical label). In this way, other users may obtain the identification information conveyed by the first optical label by capturing its image with their devices, and access the server based on that identification information to obtain the information related to the first optical label, including the position information, pose information, description information, presentation time information, etc. of one or more virtual objects associated with the label. A device may then present the corresponding virtual objects on its display medium based on this information.
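The server-side association might look like the following registry, keyed by the label's identification information; the class and the identifier "label-101" are hypothetical, not names from the document:

```python
class VirtualObjectRegistry:
    """Minimal sketch of the server-side association described above:
    virtual-object records are stored under the optical label's
    identification information and retrieved by any device that
    decodes that identification from the label's image."""

    def __init__(self):
        self._by_label = {}

    def add(self, label_id, virtual_object):
        self._by_label.setdefault(label_id, []).append(virtual_object)

    def lookup(self, label_id):
        # Return a copy so callers cannot mutate the stored records.
        return list(self._by_label.get(label_id, []))

registry = VirtualObjectRegistry()
registry.add("label-101", {"description": "blue and white porcelain",
                           "position": (10, 10, 0)})
found = registry.lookup("label-101")
```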
When setting a virtual object, the pose information of the device relative to the second optical label can be changed by translating and/or rotating the device, thereby changing the device's field of view or viewing angle, making it possible to refine the virtual object being set or to set a new one.
Fig. 5 shows a method for setting a virtual object around an optical label according to an embodiment, wherein steps S510-S540 are similar to steps S310-S340 of fig. 3 and will not be described in detail again. The method comprises the following steps:
S510, determining a first optical label for which a virtual object related to the first optical label is to be set.
S520, obtaining scene information related to the first optical label.
S530, determining the pose information of the device relative to the second optical label.
S540, setting the virtual object related to the first optical label based on the pose information of the device relative to the second optical label and the scene information related to the first optical label.
S550, acquiring new pose information of the device relative to the second optical label.
In one embodiment, a new image containing the second optical label may be captured by the device's image capture component and analyzed to determine the new position and attitude information of the device relative to the second optical label. In another embodiment, the new position and attitude information may be determined from the initial position and attitude of the device relative to the second optical label by tracking changes in the device's position and attitude. The device may use built-in acceleration sensors, gyroscopes, visual odometry, and the like to track these changes.
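The sensor-tracking alternative can be sketched as dead reckoning from the initially measured pose; the motion tuples below are idealized sensor outputs, not an actual IMU or visual-odometry API:

```python
def track_pose(initial_pos, initial_yaw_deg, motions):
    """Dead-reckoning sketch of step S550: starting from the pose
    measured against the second optical label, accumulate the position
    and orientation changes reported by the device's sensors.
    motions: iterable of (dx, dy, dz, dyaw_deg) increments."""
    x, y, z = initial_pos
    yaw = initial_yaw_deg
    for dx, dy, dz, dyaw in motions:
        x, y, z = x + dx, y + dy, z + dz
        yaw = (yaw + dyaw) % 360
    return (x, y, z), yaw

# The device moves 2 units along +X, then turns 90 degrees.
new_pos, new_yaw = track_pose((10, 10, 0), 0, [(2, 0, 0, 0), (0, 0, 0, 90)])
```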
S560, adjusting the virtual object related to the first optical label, or setting a new virtual object, based on the new pose information of the device relative to the second optical label and the scene information related to the first optical label.
As the pose of the device changes, its viewing angle or field of view changes accordingly. In one embodiment, information about a virtual object that has already been set, such as its position or pose in the real scene, may be adjusted from the device's different viewing angles. In one embodiment, new virtual objects may be set based on changes in the device's field of view. As the field of view changes, some of the scene related to the first optical label may move out of the field of view of the device's image capture component (e.g., camera), while other scene content moves into it and is presented on the display medium. Through the newly presented scene content, the user can set a corresponding new virtual object, including its position information, pose information, description information, presentation time information, and so on.
Information about the newly set virtual object related to the first optical label may be associated with the first optical label and stored in the server. Other users can obtain the identification information conveyed by the first optical label by capturing its image with their devices, and thereby obtain the new virtual object related to the first optical label.
In one embodiment, when a virtual object related to the first optical label is set via the second optical label, one or more virtual objects may already exist in the scene related to the first optical label. In this case, a new virtual object may be set based on the pose information of the device relative to the second optical label, the scene information related to the first optical label, and information about the virtual objects already in the scene (e.g., their position information, pose information, description information, and presentation time information). Fig. 6 shows a method for setting a virtual object around an optical label according to an embodiment of the invention, comprising the following steps:
S610, determining a first optical label for which a virtual object related to the first optical label is to be set.
S620, obtaining scene information related to the first optical label and information about the existing virtual objects related to the first optical label.
S630, determining the pose information of the device relative to the second optical label.
S640, setting a new virtual object related to the first optical label based on the scene information related to the first optical label, the information about the existing virtual objects related to the first optical label, and the pose information of the device relative to the second optical label.
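One use of the existing-object information retrieved in step S620 is to validate a new object's position against objects already in the scene; the minimum-gap check below is an illustrative policy, not something the method prescribes:

```python
import math

def placement_is_free(candidate_pos, existing_objects, min_gap=0.5):
    """Check a candidate position for a new virtual object against the
    positions of objects already in the scene, so the new object does
    not overlap an existing one. min_gap is an illustrative threshold."""
    for obj in existing_objects:
        if math.dist(candidate_pos, obj["position"]) < min_gap:
            return False
    return True

existing = [{"name": "blue and white porcelain", "position": (10, 10, 0)}]
ok_here = placement_is_free((12, 10, 0), existing)
too_close = placement_is_free((10.1, 10, 0), existing)
```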
In some cases, the scene around the same optical label may change significantly at different times (e.g., day versus night). In view of this, in one embodiment the scene information related to the first optical label (e.g., scene pictures and three-dimensional scene models) may include different scene information associated with different times (e.g., daytime and nighttime scene information). When setting a virtual object around the first optical label, time information may then be taken into account in order to select the corresponding scene information. The time information may be the current time or a time selected by the user. For example, the user may set a virtual object to be presented in the daytime according to the daytime scene information around the first optical label, and a virtual object to be presented at night according to the nighttime scene information. Accordingly, in one embodiment, step S340 of fig. 3 may comprise: setting a virtual object related to the first optical label based on the position information and pose information of the device relative to the second optical label, the scene information related to the first optical label, and the time information.
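Selecting scene information by time can be sketched as follows; the 6:00-18:00 daytime boundary is an assumed convention, not specified by the document:

```python
def scene_for_time(scene_info_by_period, hour):
    """Select the scene information matching the given time, per the
    day/night example above. The period boundaries (6 and 18 o'clock)
    are illustrative assumptions."""
    period = "day" if 6 <= hour < 18 else "night"
    return scene_info_by_period[period]

scenes = {"day": "daytime scene model", "night": "nighttime scene model"}
selected = scene_for_time(scenes, 21)
```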
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., hard disk, optical disk, flash memory, etc.), which when executed by a processor, can be used to implement the methods of the present invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory in which a computer program is stored which, when being executed by the processor, can be used for carrying out the method of the invention.
References herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment," or the like, in various places throughout this document are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with a feature, structure, or characteristic of one or more other embodiments without limitation, as long as the combination is not logically inconsistent or unworkable. Expressions herein similar to "according to A," "based on A," "by A," or "using A" are non-exclusive; i.e., "according to A" may cover "according to A only" as well as "according to A and B," unless it is specifically stated that the meaning is "according to A only." In the present application, some illustrative operational steps are described in a certain order for clarity, but one skilled in the art will appreciate that not all of these steps are essential, and some may be omitted or replaced by others. Nor is it necessary that these operations be performed sequentially in the manner shown; some of them may be performed in a different order, or in parallel, as desired, provided that the new arrangement is not logically or operationally unfeasible.
For example, in some embodiments, the distance or depth of the virtual object relative to the electronic device may be set prior to determining the orientation of the virtual object relative to the electronic device.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. Although the present invention has been described by way of preferred embodiments, it is not limited to the embodiments described herein, and various changes and modifications may be made without departing from its scope.

Claims (15)

1. A method for setting virtual objects around an optical communication device, comprising:
determining a first optical communication device for which a virtual object related to the first optical communication device is to be set;
obtaining scene information related to the first optical communication device;
determining position information and attitude information of the apparatus relative to a second optical communication device;
setting a virtual object related to the first optical communication apparatus based on the position information and the posture information of the apparatus with respect to the second optical communication apparatus and the scene information related to the first optical communication apparatus.
2. The method of claim 1, wherein the setting a virtual object associated with the first optical communication device based on the position information and the pose information of the apparatus relative to the second optical communication device and the scene information associated with the first optical communication device comprises:
using the position information and the attitude information of the apparatus relative to the second optical communication device as the position information and the attitude information of the apparatus relative to the first optical communication device;
setting a virtual object related to the first optical communication apparatus based on position information and posture information of the apparatus with respect to the first optical communication apparatus and scene information related to the first optical communication apparatus.
3. The method of claim 2, wherein the setting a virtual object associated with the first optical communication device based on the position information and the pose information of the apparatus relative to the first optical communication device and the scene information associated with the first optical communication device comprises:
determining a scene that is observable when the device is in a respective position and orientation relative to the first optical communication apparatus based on the position information and orientation information of the device relative to the first optical communication apparatus and scene information associated with the first optical communication apparatus;
presenting a scene observable by the device on a display medium of the device; and
setting a virtual object associated with the first optical communication device based on a scene presented on a display medium of the apparatus.
4. The method of claim 1, wherein the determining position information and pose information of the device relative to the second optical communication apparatus comprises:
acquiring an image containing the second optical communication device by an image acquisition device of the apparatus; and
determining positional information and pose information of the apparatus relative to the second optical communication device by analyzing the image.
5. The method of claim 1, wherein the setting a virtual object associated with the first optical communication device comprises setting at least one of:
position information of the virtual object in a scene related to the first optical communication device, attitude information of the virtual object in the scene related to the first optical communication device, description information of the virtual object, and presentation time information of the virtual object.
6. The method of claim 5, wherein the position information and/or pose information of the virtual object comprises:
position information and/or pose information of the virtual object relative to the first optical communication device; or
Position information and/or pose information of the virtual object in a spatial coordinate system.
7. The method of claim 1, further comprising:
determining new position information and attitude information of the apparatus relative to the second optical communication device.
8. The method of claim 7, wherein the new position information and attitude information of the device relative to the second optical communication device is determined by:
acquiring a new image containing the second optical communication device by an image acquisition means of the apparatus and analyzing the image to determine new position information and attitude information of the apparatus relative to the second optical communication device; or
Determining new position and orientation information of the device relative to the second optical communication apparatus from the initial position and orientation information of the device relative to the second optical communication apparatus and by tracking changes in the position and orientation of the device.
9. The method of claim 1, further comprising:
obtaining information about an existing virtual object associated with the first optical communication device,
and wherein the setting of the virtual object related to the first optical communication apparatus based on the position information and the posture information of the apparatus with respect to the second optical communication apparatus and the scene information related to the first optical communication apparatus comprises:
setting a new virtual object related to the first optical communication apparatus based on position information and posture information of the apparatus with respect to the second optical communication apparatus, scene information related to the first optical communication apparatus, and related information of an existing virtual object related to the first optical communication apparatus.
10. The method of claim 1, wherein the scene information related to the first optical communication device comprises a picture or a three-dimensional model of a scene.
11. The method of claim 1, wherein the context information related to the first optical communication device comprises different context information associated with different times.
12. The method of claim 11, wherein the setting a virtual object associated with the first optical communication device based on the position information and the pose information of the apparatus relative to the second optical communication device and the scene information associated with the first optical communication device comprises:
setting a virtual object related to the first optical communication apparatus based on the position information and the posture information of the apparatus with respect to the second optical communication apparatus, the scene information related to the first optical communication apparatus, and the time information.
13. A system for setting virtual objects around an optical communication device, comprising:
a first optical communication device installed at a first location;
a second optical communication device installed at a second location;
an apparatus having a display medium mounted thereon and an image capture device capable of capturing an image containing the optical communication device, wherein the apparatus is configured to implement the method of any of claims 1-12.
14. A storage medium in which a computer program is stored which, when being executed by a processor, is operative to carry out the method of any one of claims 1-12.
15. An electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, is operable to carry out the method of any of claims 1-12.
CN202010252657.2A 2020-04-02 2020-04-02 Method and system for setting virtual objects around optical communication device Active CN111162840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010252657.2A CN111162840B (en) 2020-04-02 2020-04-02 Method and system for setting virtual objects around optical communication device

Publications (2)

Publication Number Publication Date
CN111162840A true CN111162840A (en) 2020-05-15
CN111162840B CN111162840B (en) 2020-09-29

Family

ID=70567758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010252657.2A Active CN111162840B (en) 2020-04-02 2020-04-02 Method and system for setting virtual objects around optical communication device

Country Status (1)

Country Link
CN (1) CN111162840B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793936A (en) * 2012-10-31 2014-05-14 波音公司 Automated frame of reference calibration for augmented reality
CN104102678A (en) * 2013-04-15 2014-10-15 腾讯科技(深圳)有限公司 Method and device for realizing augmented reality
CN105844714A (en) * 2016-04-12 2016-08-10 广州凡拓数字创意科技股份有限公司 Augmented reality based scenario display method and system
CN106127858A (en) * 2016-06-24 2016-11-16 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107765894A (en) * 2016-08-17 2018-03-06 杨博 A kind of method and device of dummy object in real-time calibration real world
CN106355153A (en) * 2016-08-31 2017-01-25 上海新镜科技有限公司 Virtual object display method, device and system based on augmented reality
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN109840947A (en) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 Implementation method, device, equipment and the storage medium of augmented reality scene
CN109840949A (en) * 2017-11-29 2019-06-04 深圳市掌网科技股份有限公司 Augmented reality image processing method and device based on optical alignment
CN110471580A (en) * 2018-05-09 2019-11-19 北京外号信息技术有限公司 Information equipment exchange method and system based on optical label
TW201947457A (en) * 2018-05-09 2019-12-16 大陸商北京外號信息技術有限公司 Optical tag based information apparatus interaction method and system
CN108958471A (en) * 2018-05-17 2018-12-07 中国航天员科研训练中心 The emulation mode and system of virtual hand operation object in Virtual Space
CN110737326A (en) * 2018-07-20 2020-01-31 广东虚拟现实科技有限公司 Virtual object display method and device, terminal equipment and storage medium
CN109358427A (en) * 2018-11-28 2019-02-19 宫春洁 It comes out of retirement and takes up an official post a kind of day entertaining augmented reality and virtual reality device
CN109700550A (en) * 2019-01-22 2019-05-03 雅客智慧(北京)科技有限公司 A kind of augmented reality method and device for dental operation
CN110914873A (en) * 2019-10-17 2020-03-24 深圳盈天下视觉科技有限公司 Augmented reality method, device, mixed reality glasses and storage medium


Similar Documents

Publication Publication Date Title
US20180286098A1 (en) Annotation Transfer for Panoramic Image
RU2670784C9 (en) Orientation and visualization of virtual object
US10403044B2 (en) Telelocation: location sharing for users in augmented and virtual reality environments
EP2207113B1 (en) Automated annotation of a view
CN110060614B (en) Head-mounted display device, control method thereof, and display system
US11228737B2 (en) Output control apparatus, display terminal, remote control system, control method, and non-transitory computer-readable medium
US9939263B2 (en) Geodetic surveying system
JP2017212510A (en) Image management device, program, image management system, and information terminal
KR20110070210A (en) Mobile terminal and method for providing augmented reality service using position-detecting sensor and direction-detecting sensor
CN105917329B (en) Information display device and information display program
JP6110780B2 (en) Additional information display system
KR20070087317A (en) Digital apparatus capable of displaying imaginary object on real image and method thereof
TWI764366B (en) Interactive method and system based on optical communication device
KR101762349B1 (en) Method for providing augmented reality in outdoor environment, augmented reality providing server performing the same, and storage medium storing the same
CN111162840B (en) Method and system for setting virtual objects around optical communication device
CN111242107B (en) Method and electronic device for setting virtual object in space
CN106055108B (en) Virtual touch screen control method and system
CN112055034B (en) Interaction method and system based on optical communication device
CN112051919B (en) Interaction method and interaction system based on position
KR20110119179A (en) Method for updating panoramic image and location search service using the same
WO2020244576A1 (en) Method for superimposing virtual object on the basis of optical communication apparatus, and corresponding electronic device
US20240087157A1 (en) Image processing method, recording medium, image processing apparatus, and image processing system
TWI759764B Method for superimposing a virtual object based on an optical communication device, electronic apparatus, and computer-readable storage medium
CN113015018B (en) Bullet screen information display method, bullet screen information display device, bullet screen information display system, electronic equipment and storage medium
CN112055033B (en) Interaction method and system based on optical communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200515

Assignee: Shanghai Guangshi fusion Intelligent Technology Co.,Ltd.

Assignor: BEIJING WHYHOW INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2022110000047

Denomination of invention: Method and system for setting virtual objects around optical communication devices

Granted publication date: 20200929

License type: Common License

Record date: 20221012

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200515

Assignee: Beijing Intellectual Property Management Co.,Ltd.

Assignor: BEIJING WHYHOW INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2023110000069

Denomination of invention: Method and system for setting up virtual objects around optical communication devices

Granted publication date: 20200929

License type: Common License

Record date: 20230531
