Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Optical communication devices are also referred to as optical labels, and the two terms are used interchangeably herein. An optical label transmits information by emitting different light, and offers a long identification distance, loose requirements on visible-light conditions, and strong directivity. Moreover, the information transmitted by the optical label can change over time, providing a large information capacity and flexible configuration capability.
An optical label typically includes a controller and at least one light source, and the controller may drive the light source in different driving modes to convey different information to the outside. Fig. 1A shows an exemplary optical label 100 that includes three light sources (a first light source 101, a second light source 102, and a third light source 103). The optical label 100 also includes a controller (not shown in Fig. 1A) for selecting a respective driving mode for each light source based on the information to be conveyed. For example, in different driving modes, the controller may control the light-emitting manner of the light source with different driving signals, so that when the optical label 100 is photographed by an apparatus having an image capture device, the image of each light source may present a different appearance (e.g., a different color, pattern, or brightness). By analyzing the imaging of the light sources in the optical label 100, the driving mode of each light source at a given moment can be determined, and thus the information transmitted by the optical label 100 at that moment can be recovered. Fig. 1A is merely an example; an optical label may have a different shape than that shown in Fig. 1A, and may have a different number and/or different shapes of light sources.
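As a non-limiting illustration of the principle above, the following sketch assumes a simple scheme in which each light source shows one of two appearances ("dark" or "bright") per captured frame, so that an optical label with three light sources conveys three bits per frame; the scheme, names, and values are illustrative assumptions and not part of the invention:

```python
# Illustrative sketch only: the invention does not mandate a specific
# encoding. Here each light source shows one of two appearances per
# captured frame, so three light sources convey 3 bits per frame.

def encode_frame(bits):
    """Map a tuple of bits to a drive mode per light source."""
    modes = {0: "dark", 1: "bright"}
    return [modes[b] for b in bits]

def decode_frame(appearances):
    """Recover the bits by analyzing the imaged appearance of each source."""
    lookup = {"dark": 0, "bright": 1}
    return [lookup[a] for a in appearances]
```

Under this assumed scheme, decoding the imaged appearances of the light sources recovers exactly the bits the controller encoded.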
In order to provide a corresponding service to a user based on the optical labels, each optical label may be assigned identification information (ID). In general, the light source may be driven by the controller in the optical label to transmit the identification information outwards. An image acquisition device may capture one or more images containing the optical label, identify the identification information transmitted by the optical label by analyzing the imaging of the optical label (or of each light source therein) in these images, and then acquire other information associated with the identification information, for example, position information of the optical label corresponding to the identification information.
Information associated with each optical label may be stored in a server. In practice, a large number of optical labels may be organized into an optical label network. Fig. 1B illustrates an exemplary optical label network including a plurality of optical labels and at least one server. The server may maintain the identification information (ID) of each optical label together with other information, such as service information related to the optical label and description or attribute information related to the optical label, for example its position information, model information, physical size information, physical shape information, and attitude or orientation information. The optical labels may also have uniform or default physical size and shape information. A device may use the identification information of an identified optical label to query the server for further information related to that optical label. The position information of an optical label may refer to its actual position in the physical world, which may be indicated by geographical coordinate information. A server may be a software program running on a computing device, or a cluster of computing devices.
The optical label may be used as an anchor point in real space, and virtual objects may be arranged, based on the optical label, in the scene around it; these virtual objects may have specific position and/or pose information with respect to the optical label. A user may scan the optical label with a device (e.g., a cell phone) to determine the position and pose information of the device relative to the optical label, so that virtual objects located in the scene around the optical label can be displayed at appropriate locations on a display medium of the device. The virtual object may be, for example, an icon, a picture, text, an emoticon, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, a clickable web link, and so on. In the present invention, when setting a virtual object in the real scene where a certain optical label is located, the user need not be in that real scene; instead, the user may remotely set the virtual object in the real scene through another optical label outside that scene. Devices mentioned in this application may include, for example, cell phones, tablets, smart glasses, AR/VR helmets, and smart watches. The device may comprise an image acquisition device (e.g., a camera), a display medium (e.g., an electronic screen), and a data processing system for storing, computing, outputting, or displaying data, comprising, for example, volatile or non-volatile memory and one or more processors. The device may further include a communication component for wired or wireless communication with an external system or other devices (e.g., a server) to transmit and receive data.
An embodiment of the present invention is described below with a mobile phone as an example device, an exhibition area of a museum as an example real scene in which a virtual object is to be set, and an office area as an example remote setting place, but it is understood that the solution of the present invention is equally applicable to any other device and any other scene.
Fig. 2 shows a system for setting virtual objects around an optical label according to an embodiment, comprising a first optical label 101, a second optical label 201, a server 202, and a device 203. The first optical label 101 is located in an exhibition area 100 of a museum having objects A, B, and C, and the second optical label 201 is located in an office area 200 outside the exhibition area. A user 204 in the office area 200 sets virtual objects in the real scene around the first optical label 101 (i.e., the exhibition area 100) by scanning the second optical label 201 using the device 203, and the device 203 can communicate with the server 202. In another embodiment, all or part of the functionality of the server 202 may be integrated into the device 203, in which case the server 202 may be omitted from the system.
Fig. 3 illustrates a method for setting virtual objects around an optical label according to one embodiment, the method comprising the following steps:
S310: determine a first optical label for which a virtual object related thereto is to be set.
The first optical label for which the associated virtual object is to be set may be determined in a number of ways.
In one embodiment, the device may determine, through the identification information (ID) of the first optical label, the first optical label for which the associated virtual object is to be set. The server may store the identification information, location information, or any other information for each optical label, and each optical label may uniquely correspond to its identification information. The device may use the identification information of the first optical label to query the server for the location information of the first optical label (e.g., in a scene coordinate system or in the world coordinate system), and may further determine the scene information related to the first optical label from that location information. In one embodiment, the server may also store scene information related to each optical label, and the device may directly query the server for the scene information related to the first optical label using its identification information. The scene information associated with an optical label may include, for example, one or more scene pictures (e.g., taken at different locations and/or perspectives), a three-dimensional scene model, or a map. Taking a museum exhibition area as an example, the scene information associated with the first optical label may include, for example, pictures of all or some of the exhibits within the area (e.g., the objects A, B, and C shown in Fig. 2), a three-dimensional scene model or map of the area, information on neighboring exhibition areas and surrounding facilities, and so on. In one embodiment, the scene information associated with the first optical label may further include the relative positional relationship between a specific object in the scene and the first optical label.
In one embodiment, the device may also determine, by using the position information of the first optical label, the first optical label for which the associated virtual object is to be set. The position information of the first optical label may be specific position information, for example, coordinate information in a specific scene coordinate system or in the world coordinate system; or it may be approximate location information, for example, a certain exhibition area or areas of a designated museum. In one embodiment, the device may use the location information of the first optical label to query the server for its identification information, and then determine the specific location information of the first optical label and/or the scene information related to it. In another embodiment, the device may determine the scene information associated with the first optical label directly from its location information.
In one embodiment, the device may also determine, from the real scene surrounding the first optical label, the first optical label for which the associated virtual object is to be set. The device may determine the identification information or location information of the first optical label by comparing real-scene information around it (e.g., a picture of the scene) with the scene information (e.g., photographs, three-dimensional models, maps, etc.) stored in the server in relation to the respective optical labels.
In one embodiment, the first optical label to which the virtual object associated therewith is to be set may also be determined by the server.
In the museum exhibition area 100 where the first optical label is located, a three-dimensional spatial coordinate system (hereinafter referred to as the first coordinate system) may be established with the first optical label as its origin, where the coordinate position of the first optical label is the origin O(0, 0, 0), and the relative positional relationship between an object in the surrounding scene and the first optical label can be represented as the coordinate position of that object in the first coordinate system. Fig. 4A shows a schematic diagram of the first coordinate system and a second coordinate system according to one embodiment. As shown in Fig. 4A, the relative positional relationships between the objects A, B, C and the first optical label 101 can be expressed as their coordinate positions in the first coordinate system, i.e., A(0, 10, 10), B(10, 0, 10), C(0, 0, 10).
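The first coordinate system described above can be sketched as follows, using the coordinates of Fig. 4A; the data structure and function name are illustrative only:

```python
# The first optical label sits at the origin of the first coordinate
# system; each object's position relative to the label is simply its
# coordinates in that system (values taken from Fig. 4A).

FIRST_LABEL_ORIGIN = (0, 0, 0)

objects_in_first_system = {
    "A": (0, 10, 10),
    "B": (10, 0, 10),
    "C": (0, 0, 10),
}

def relative_position(obj_name):
    """Position of an object relative to the first optical label."""
    x, y, z = objects_in_first_system[obj_name]
    ox, oy, oz = FIRST_LABEL_ORIGIN
    return (x - ox, y - oy, z - oz)
```

Because the label is at the origin, each object's coordinates and its position relative to the label coincide.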
S320: obtain scene information related to the first optical label.
As described above, the device may obtain scene information related to the first optical label through identification information and/or location information of the first optical label.
S330: determine the position information and pose information of the device relative to the second optical label.
The device may determine its position information relative to the optical label in various ways; this position information may include distance information and direction information of the device relative to the optical label. Typically, the position information of the device relative to the optical label is actually the position information of the image capture device of the device relative to the optical label. In one embodiment, the device may determine its position information relative to the optical label by capturing an image that includes the optical label and analyzing the image. For example, the device may determine the relative distance between the optical label and the device (the larger the imaging, the shorter the distance; the smaller the imaging, the greater the distance) from the size of the optical label's imaging in the image and, optionally, other information (e.g., the actual physical size of the optical label, the focal length of the device's camera). The device may obtain the actual physical size information of the optical label from the server using the identification information of the optical label, or the optical labels may have a uniform physical size stored on the device. In one embodiment, the device may also obtain the relative distance between the optical label and the device directly through a depth camera or binocular camera mounted on it. The device may determine the direction information of the device relative to the optical label from the perspective distortion of the optical label's imaging in the image and, optionally, other information (e.g., the imaging position of the optical label). The device may obtain the physical shape information of the optical label from the server using the identification information of the optical label, or the optical labels may have a uniform physical shape stored on the device.
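The distance estimate described above can be sketched with a simple pinhole-camera relation; the focal length and physical size below are assumed example values, not values specified by the invention:

```python
# Pinhole-camera sketch of the distance estimate: the larger the label
# appears in the image, the closer it is. All numeric values below are
# assumed examples.

def estimate_distance(physical_size_m, image_size_px, focal_length_px):
    """distance = focal_length * physical_size / imaged_size."""
    return focal_length_px * physical_size_m / image_size_px

# A 0.2 m wide label imaged at 100 px with a 1000 px focal length is
# 2 m away; imaged at 50 px it is 4 m away (smaller imaging, greater
# distance).
```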
The device may also use any other positioning method known in the art to determine its position information relative to the optical label.
A three-dimensional spatial coordinate system (hereinafter referred to as the second coordinate system) may be created with the second optical label as its origin, where the coordinate position of the second optical label is the origin O'(0, 0, 0), and the position information of the device relative to the second optical label can be represented as the coordinate position of the device in the second coordinate system. As shown in Fig. 4A, the position information of the device 203 relative to the second optical label 201 may be represented as the device's coordinate position D'(10, 10, 0) in the second coordinate system.
The device may also determine its pose information, which may be used to determine the extent or boundaries of the real scene captured by the device. Typically, the pose information of the device is actually pose information of an image capture device of the device. In one embodiment, the device may determine its pose information with respect to the optical label, e.g., the device may determine its pose information with respect to the optical label based on an image of the optical label, and may consider the device to be currently facing the optical label when the imaging position or imaging area of the optical label is centered in the imaging field of view of the device. The direction of imaging of the optical label may further be taken into account when determining the pose of the device. As the pose of the device changes, the imaging position and/or imaging direction of the optical label on the device changes accordingly, and therefore pose information of the device relative to the optical label can be obtained from the imaging of the optical label on the device.
In one embodiment, the device may also send the captured image including the optical label to a server, which analyzes the image to determine position information and/or pose information of the device relative to the optical label.
S340: set a virtual object related to the first optical label based on the pose information of the device relative to the second optical label and the scene information related to the first optical label.
In one embodiment, the pose information of the device relative to the second optical label can be used as the pose information of the device relative to the first optical label, and the virtual object related to the first optical label can be set based on the pose information of the device relative to the first optical label and the scene information related to the first optical label, and the specific steps are as follows:
S341: take the pose information of the device relative to the second optical label as the pose information of the device relative to the first optical label.
Taking the position information of the device relative to the second optical label as the position information of the device relative to the first optical label may actually be seen as taking the coordinate position of the device in the second coordinate system as the coordinate position of the device in the first coordinate system.
Fig. 4B illustrates using the pose information of the device relative to the second optical label as the pose information of the device relative to the first optical label, according to one embodiment. As shown in Fig. 4B, the second coordinate system may be translated into the first coordinate system so that the coordinate position O' of the second optical label 201 coincides with the coordinate position O of the first optical label 101. The coordinate position D'(10, 10, 0) of the device 203 in the second coordinate system is thereby carried into the first coordinate system as the coordinate position D(10, 10, 0), which serves as the position information of the device 203 relative to the first optical label 101.
In one embodiment, the attitude information of the device relative to the second optical label may likewise be taken as the attitude information of the device relative to the first optical label. As shown in Fig. 4B, if the attitude of the device 203 relative to the second optical label 201 is a front elevation angle of 45°, then the attitude of the device 203 relative to the first optical label 101 is also a front elevation angle of 45°.
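Step S341 and the coordinate translation of Fig. 4B amount to reusing the device's pose unchanged, since both coordinate systems place their optical label at the origin; this can be sketched as follows (names are illustrative only):

```python
# Sketch of step S341: the device's coordinates and attitude in the
# second coordinate system are reused, unchanged, as its coordinates
# and attitude in the first coordinate system.

def transfer_pose(position_in_second, attitude_in_second):
    """Treat pose relative to the second label as pose relative to the first."""
    position_in_first = position_in_second   # D'(10, 10, 0) -> D(10, 10, 0)
    attitude_in_first = attitude_in_second   # e.g. 45-degree elevation kept as-is
    return position_in_first, attitude_in_first
```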
S342: determine the scene that can be observed when the device is in this pose, according to the pose information of the device relative to the first optical label and the scene information related to the first optical label, and present the scene on a display medium of the device.
The field of view of the image capture device (e.g., a camera) of the device may be determined from the pose information of the device relative to the first optical label. If the scene related to the first optical label lies within the device's field of view, the device can observe that scene; if it lies outside the field of view, the device cannot. The device may present the observable scene on its display medium.
Fig. 4C shows a schematic view of a scene that can be observed by a device according to one embodiment. As shown in Fig. 4C, based on the position of the device 203 relative to the first optical label 101, i.e., the coordinate position D(10, 10, 0) in the first coordinate system, and its attitude, e.g., a front elevation angle of 45° relative to the first optical label 101, it can be determined that the object C in the scene is within the field of view of the device 203, while both objects A and B are outside it, so that the device 203 presents only the object C on its display medium.
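The visibility determination of step S342 can be sketched as an angular field-of-view test; the 60° field of view and the viewing direction below (chosen to point from D toward the object C, consistent with Fig. 4C where only C is in view) are illustrative assumptions:

```python
import math

# An object is observable only if the angle between the camera's
# viewing direction and the direction from the device to the object
# is within half the field of view. The 60-degree FOV is an assumed
# example value.

def is_visible(device_pos, view_dir, object_pos, fov_deg=60.0):
    to_obj = [o - d for o, d in zip(object_pos, device_pos)]
    norm = math.sqrt(sum(c * c for c in to_obj))
    vnorm = math.sqrt(sum(c * c for c in view_dir))
    cos_angle = sum(a * b for a, b in zip(to_obj, view_dir)) / (norm * vnorm)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= fov_deg / 2
```

With the device at D(10, 10, 0) looking toward C(0, 0, 10), the object C falls within the assumed field of view while A(0, 10, 10) and B(10, 0, 10) fall outside it, matching Fig. 4C.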
S343: set the virtual object associated with the first optical label based on the scene presented on the display medium of the device.
The user may set information related to the virtual object through a scene presented on a display medium of the device. In one embodiment, the information related to the virtual object may comprise position information of the virtual object in a scene associated with the first optical label. The position of the virtual object may be a position of the virtual object relative to the optical label (for example, distance information and direction information of the virtual object relative to the optical label), or may be a position of the virtual object in a spatial coordinate system of the real scene. In one embodiment, the position of the virtual object may be determined based on the position of the object presented on the display medium of the device, for example, the position of an object in the scene (i.e., the coordinate position of the object in the first coordinate system) may be set as the position of the virtual object, at which time the virtual object presented on the display medium of the device may overlay the corresponding object in the real scene. In one embodiment, the position of the virtual object may also be set to be located near the position of an object, in which case the virtual object presented on the display medium of the device is located around or near the corresponding object, thereby achieving an accurate augmented reality effect.
In one embodiment, the information related to the virtual object may further include pose information of the virtual object in a scene related to the first optical label, where the pose may be a pose of the virtual object with respect to the optical label, a pose of the virtual object with respect to the device, or a pose of the virtual object in a spatial coordinate system of the real world.
In one embodiment, the user may set the position or pose of the virtual object by performing an operation (e.g., clicking, double-clicking, sliding, or rotating) on the display medium. Fig. 4D illustrates remotely setting the location information of virtual objects around an optical label via a cell phone, according to one embodiment. As shown in Fig. 4D, the user 204 selects the position of the object C as the position of the virtual object (marked by a cross in Fig. 4D) by clicking on the upper portion of the screen of the device 203. In another embodiment, the user may select the position or pose of the virtual object through gestures or voice, which is suitable for devices such as smart glasses on which operating the display medium is inconvenient.
In one embodiment, the information related to the virtual object may further include description information of the virtual object, such as a picture or text contained in the virtual object, an icon, identification information of the virtual object, shape information, color information, size information, and the like. Based on the description information, the device is able to render the corresponding virtual object. The user may set the description information of the virtual objects in the scene according to the scene information associated with the first optical label. Fig. 4E shows a schematic diagram of setting the description information of virtual objects around an optical label according to scene information, according to an embodiment. As shown in Fig. 4E, the user 204 may set the description information of the virtual object corresponding to the object C as "blue and white porcelain" according to the attributes of the object C.
In one embodiment, the information related to the virtual object may also include presentation time information of the virtual object, so that different virtual objects are presented at different times. The presentation time of a virtual object may be, for example, a time period, represented by a presentation start time and a presentation end time, indicating the lifetime of the virtual object in the real scene. Over time, each virtual object is presented in, or deleted from, the real scene according to its presentation time information. For example, a virtual object may be rendered in the real scene when its lifetime begins and deleted from the real scene when its lifetime ends. In this way, the flexibility and customizability of the virtual objects can be greatly improved.
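The presentation-time logic can be sketched as a simple lifetime filter; the object names and the numeric timestamps (hours of the day) are illustrative assumptions:

```python
# A virtual object is shown only while its lifetime, represented by a
# presentation start time and end time, covers the current time.

def active_virtual_objects(virtual_objects, now):
    """Return the names of virtual objects whose lifetime covers `now`."""
    return [v["name"] for v in virtual_objects
            if v["start"] <= now < v["end"]]

# Assumed example objects: one shown during the day, one at night.
exhibits = [
    {"name": "blue and white porcelain", "start": 8, "end": 18},
    {"name": "night display", "start": 18, "end": 23},
]
```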
Information relating to the virtual object may be associated with the first optical label. In one embodiment, the device may send information regarding the set virtual object associated with the first optical label to the server, and the server may store such information in association with other information regarding the first optical label (e.g., identification information, location information, etc. of the first optical label). In this way, other users may obtain identification information conveyed by the optical label by image capturing the first optical label using their devices, and access the server based on the identification information to obtain information related to the first optical label, including position information, pose information, description information, presentation time information, etc. of one or more virtual objects associated with the optical label. The device may present the corresponding virtual object on its display medium based on the information associated with the first optical label.
When setting the virtual object, the user may change the pose information of the device relative to the second optical label by translating and/or rotating the device, thereby changing the field of view or viewing angle of the device, so as to set the virtual object more conveniently or to set a new virtual object.
Fig. 5 shows a method for setting a virtual object around an optical label according to an embodiment, wherein steps S510 to S540 are similar to steps S310 to S340 of Fig. 3 and will not be described again here. The method comprises the following steps:
S510: determine a first optical label for which a virtual object related thereto is to be set.
S520: obtain scene information related to the first optical label.
S530: determine the pose information of the device relative to the second optical label.
S540: set the virtual object related to the first optical label based on the pose information of the device relative to the second optical label and the scene information related to the first optical label.
S550: acquire new pose information of the device relative to the second optical label.
In one embodiment, a new image containing the second optical label may be captured by the image capture device of the device and analyzed to determine new position information and pose information of the device relative to the second optical label. In another embodiment, new position and orientation information of the device relative to the second optical label may be determined from the initial position and orientation information of the device relative to the second optical label and by tracking changes in the position and orientation of the device. The device may use its built-in acceleration sensors, gyroscopes, visual odometers, etc. to track its position changes as well as attitude changes.
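The second approach described above, tracking changes in the device's position and attitude rather than re-scanning the optical label, can be sketched as follows; the additive pose update and the single yaw angle are simplifying assumptions for illustration (a real implementation would integrate sensor readings over time):

```python
# Sketch of updating pose from tracked changes: the new pose is the
# initial pose (obtained by scanning the second optical label) plus the
# translation and rotation accumulated by the device's sensors
# (e.g. accelerometers, gyroscopes, or a visual odometer).

def update_pose(initial_pos, initial_yaw_deg, delta_pos, delta_yaw_deg):
    """New pose = initial pose + tracked change in position and attitude."""
    new_pos = tuple(p + d for p, d in zip(initial_pos, delta_pos))
    new_yaw = (initial_yaw_deg + delta_yaw_deg) % 360
    return new_pos, new_yaw
```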
S560: adjust the virtual object related to the first optical label, or set a new virtual object, based on the new pose information of the device relative to the second optical label and the scene information related to the first optical label.
As the pose of the device changes, the angle or range of view of the device changes accordingly. In one embodiment, information about the virtual object that has been set, such as position or pose information of the virtual object in the real scene, may be adjusted based on different perspectives of the device. In one embodiment, new virtual objects may be set based on changes in the field of view of the device. As the field of view of the device changes, some of the scene information associated with the first optical label may move out of the field of view of the image capture device (e.g., camera) of the device, while some other scene information may move into the field of view of the image capture device of the device and be presented on the display medium of the device. The user can set a corresponding new virtual object through the new scene information presented on the display medium of the device, including position information, posture information, description information, and presentation time information of the virtual object, and so on.
Information about the set new virtual object associated with the first optical label may be associated with the first optical label and stored in the server. Other users can acquire the identification information transmitted by the optical label by using the device to acquire the image of the first optical label, so as to acquire a new virtual object related to the first optical label.
In one embodiment, when the virtual object associated with the first optical label is set through the second optical label, some virtual objects may already exist in the scene associated with the first optical label. In that case, a new virtual object may be set based on the pose information of the device relative to the second optical label, the scene information associated with the first optical label, and information about the virtual objects already present in the scene (e.g., their position information, pose information, description information, and presentation time information). Fig. 6 shows a method for setting a virtual object around an optical label according to an embodiment of the present invention, comprising the following steps:
S610: determine a first optical label for which a virtual object related thereto is to be set.
S620: acquire scene information related to the first optical label and information about the existing virtual objects related to the first optical label.
S630: determine the pose information of the device relative to the second optical label.
S640: set a new virtual object related to the first optical label based on the scene information related to the first optical label, the information about the existing virtual objects related to the first optical label, and the pose information of the device relative to the second optical label.
In some cases, the scene around the same optical label may change significantly at different times (e.g., day and night). In view of this, in one embodiment, the scene information associated with the first optical label (e.g., scene pictures and three-dimensional scene models) may include different scene information associated with different times (e.g., daytime scene information and nighttime scene information). In this way, when setting a virtual object around the first optical label, time information may additionally be considered in order to select the scene information corresponding to that time. The time information may be the current time or a time selected by the user. For example, the user may set a virtual object to be presented in the daytime according to the daytime scene information around the first optical label, and set a virtual object to be presented at night according to the nighttime scene information. Accordingly, in one embodiment, step S340 illustrated in Fig. 3 may include: setting the virtual object associated with the first optical label based on the position information and pose information of the device relative to the second optical label, the scene information associated with the first optical label, and the time information.
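The selection of time-dependent scene information can be sketched as follows; the day/night split and the hour boundaries are assumed example values, not limits imposed by the invention:

```python
# Illustrative selection of time-dependent scene information: pick the
# stored scene information matching the current (or user-chosen) hour.

def select_scene_info(scene_info_by_time, hour):
    """Return the scene information corresponding to the given hour."""
    if 6 <= hour < 18:
        return scene_info_by_time["day"]
    return scene_info_by_time["night"]

# Assumed example: two variants of the scene around the first label.
scene_info = {
    "day": "daytime scene pictures and model",
    "night": "nighttime scene pictures and model",
}
```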
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., hard disk, optical disk, flash memory, etc.), which when executed by a processor, can be used to implement the methods of the present invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory in which a computer program is stored which, when being executed by the processor, can be used for carrying out the method of the invention.
References herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment," or the like, in various places throughout this document are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with a feature, structure, or characteristic of one or more other embodiments without limitation, as long as the combination is not logically inconsistent or unworkable. Expressions appearing herein similar to "according to A," "based on A," "by A," or "using A" are non-exclusive; that is, "according to A" may cover "according to A only" as well as "according to A and B," unless it is specifically stated that the meaning is "according to A only." In the present application, for clarity of explanation, some illustrative operational steps are described in a certain order, but one skilled in the art will appreciate that each of these operational steps is not essential, and some of them may be omitted or replaced by others. It is also not necessary that these operations be performed sequentially in the manner shown; rather, some of them may be performed in a different order, or in parallel, as desired, provided that the new implementation is not logically or operationally infeasible.
For example, in some embodiments, the distance or depth of the virtual object relative to the electronic device may be set prior to determining the orientation of the virtual object relative to the electronic device.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. Although the present invention has been described by way of preferred embodiments, the present invention is not limited to the embodiments described herein, and various changes and modifications may be made without departing from the scope of the present invention.