WO2020192543A1 - Method for presenting information related to optical communication apparatus, and electronic device - Google Patents

Method for presenting information related to optical communication apparatus, and electronic device Download PDF

Info

Publication number
WO2020192543A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
communication device
optical communication
image
information
Prior art date
Application number
PCT/CN2020/080160
Other languages
French (fr)
Chinese (zh)
Inventor
方俊
牛旭恒
王强
李江亮
Original Assignee
北京外号信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京外号信息技术有限公司 filed Critical 北京外号信息技术有限公司
Publication of WO2020192543A1 publication Critical patent/WO2020192543A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 Sensing by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821 Further details of bar or optical code scanning devices
    • G06K7/14 Sensing using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B10/00 Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11 Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114 Indoor or close-range type systems
    • H04B10/116 Visible light communication
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • The present invention belongs to the field of optical information technology and, in particular, relates to a method and an electronic device for presenting information related to an optical communication device.
  • Optical communication devices are also called optical tags, and the two terms are used interchangeably herein.
  • Optical tags can transmit information by emitting different light. They offer a long recognition distance, relaxed requirements on visible-light conditions, and strong directivity, and the information they transmit can change over time, providing large information capacity and flexible configuration. Compared with traditional two-dimensional codes, optical tags have a longer recognition distance and a stronger information-interaction capability, which can provide great convenience to users and businesses.
  • The optical tag recognition device may be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, a smart helmet, a smart watch, or a car), or a machine that can move autonomously (for example, a drone, a driverless car, or a robot).
  • In many cases, in order to identify the information transmitted by the optical tag or to avoid interference from ambient light, the recognition device needs to use its camera to capture images of the optical tag in a specific optical tag recognition mode (for example, a low-exposure mode) and analyze these images with a built-in application to identify the information conveyed by the optical tag.
  • However, an image containing an optical tag captured in such a recognition mode usually cannot reproduce the environment around the optical tag well, which is very unfavorable for the user experience and for subsequent user interaction.
  • For example, in an image captured in a low-exposure mode, the surroundings of the optical tag are usually imaged very darkly, or even appear pitch black.
  • Figure 1 shows an exemplary image containing an optical tag taken in a low-exposure mode; the imaging of the optical tag appears near the middle of the upper part of the figure. However, because the objects around the optical tag are imaged with low brightness, it is difficult to distinguish them in an image taken in this mode.
  • Although an image obtained in the normal shooting mode can show the environment around the optical tag, it is not captured in the optical tag recognition mode, so the optical tag (or the information it transmits) cannot be recognized from it, and consequently the interactive information corresponding to the optical tag (for example, an interactive icon) cannot be presented on that image.
  • One aspect of the present invention relates to a method for presenting information related to an optical communication device, including: using a first camera to obtain, in an optical communication device recognition mode, a first image containing the optical communication device; obtaining, based on the first image, position information of the optical communication device relative to the first camera; obtaining, according to the position information of the optical communication device relative to the first camera, position information of the optical communication device relative to a second camera; using the second camera to obtain a second image containing the optical communication device; and presenting, on the second image and according to the position information of the optical communication device relative to the second camera, information related to the optical communication device.
  • Obtaining the position information of the optical communication device relative to the second camera according to its position information relative to the first camera includes: using the rotation matrix and displacement vector between the first camera and the second camera to convert the position information of the optical communication device relative to the first camera into its position information relative to the second camera.
  • Obtaining the position information of the optical communication device relative to the first camera based on the first image includes: obtaining that position information based on the imaging of the optical communication device in the first image.
  • Obtaining the position information of the optical communication device relative to the first camera based on its imaging in the first image includes: obtaining the distance of the optical communication device from the first camera based on its imaging size in the first image; obtaining the direction of the optical communication device relative to the first camera based on its imaging position in the first image; and obtaining the position information of the optical communication device relative to the first camera from that distance and direction.
  • Obtaining the position information of the optical communication device relative to the first camera based on its imaging in the first image includes: obtaining that position information according to the coordinates of certain points on the optical communication device in the optical communication device coordinate system and the imaging positions of these points in the first image, combined with the internal parameters of the first camera.
  • The above method may further include: obtaining posture information of the optical communication device relative to the first camera based on the first image; and obtaining posture information of the optical communication device relative to the second camera according to its posture information relative to the first camera. In this case, presenting the information related to the optical communication device on the second image includes: presenting that information on the second image according to both the position information and the posture information of the optical communication device relative to the second camera.
  • Obtaining the posture information of the optical communication device relative to the first camera based on the first image includes: obtaining that posture information based on the imaging of the optical communication device in the first image.
  • Obtaining the posture information of the optical communication device relative to the first camera based on its imaging in the first image includes: obtaining that posture information by determining the perspective deformation of the imaging of the optical communication device in the first image.
  • Obtaining the posture information of the optical communication device relative to the first camera based on its imaging in the first image includes: obtaining that posture information according to the coordinates of certain points on the optical communication device in the optical communication device coordinate system and the imaging positions of these points in the first image.
  • the coordinates of some points on the optical communication device in the optical communication device coordinate system are determined according to the physical size information and/or physical shape information of the optical communication device.
  • Presenting the information related to the optical communication device on the second image according to its position information relative to the second camera includes: determining, according to that position information, the corresponding imaging position in the second image, and presenting the information related to the optical communication device at that imaging position in the second image.
  • the second image is a real scene image with normal exposure.
  • the above method further includes: obtaining identification information of the optical communication device based on the first image.
  • Another aspect of the present invention relates to a method for presenting information related to an optical communication device, including: using a first camera to obtain, in an optical communication device recognition mode, a first image containing the optical communication device; obtaining a first imaging position of the optical communication device in the first image; using a second camera to obtain a second image containing the optical communication device; and presenting information related to the optical communication device at a second imaging position in the second image according to the first imaging position, wherein the first camera and the second camera are installed on the same plane and have the same posture and internal parameters.
  • the second imaging position is the same as the first imaging position.
  • The second imaging position is determined according to the relative offset between the first camera and the second camera, the first imaging position, and the Z coordinate of the optical communication device in the first camera coordinate system or the second camera coordinate system, or the vertical distance between the optical communication device and the installation plane of the first camera and the second camera.
  • Alternatively, the second imaging position is determined according to the relative offset between the first camera and the second camera, the first imaging position, and the distance from the optical communication device to the first camera or the second camera.
  • Another aspect of the present invention relates to a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, it can be used to implement the above method.
  • Another aspect of the present invention relates to an electronic device, which includes a processor and a memory; a computer program is stored in the memory and, when executed by the processor, can be used to implement the above method.
  • The solution of the present invention provides a method for presenting information related to an optical communication device.
  • Information related to an optical tag can be displayed on an image taken in a non-recognition mode (for example, a real-scene image taken in the normal shooting mode), so that users can not only interact with the optical tag through the image presented by the device but also perceive the environment around the optical tag, thereby improving interaction efficiency and the interaction experience.
  • Figure 1 shows an exemplary image containing an optical tag taken in a low-exposure mode.
  • Figure 2 shows an exemplary optical tag.
  • Figure 3 shows an image of an optical tag taken by a rolling-shutter imaging device in a low-exposure mode.
  • Figure 4 shows a method for presenting information related to an optical tag according to an embodiment of the present invention.
  • Figure 5 shows an exemplary normally exposed image containing an optical tag.
  • Figure 6 shows an exemplary image presented according to an embodiment of the present invention.
  • Figure 7 shows a method for presenting information related to an optical tag according to another embodiment of the present invention.
  • the optical tag usually includes a controller and at least one light source, and the controller can drive the light source through different driving modes to transmit different information outward.
  • Each optical tag can be assigned identification information (ID), which is used by the manufacturer, manager, or user of the optical tag to uniquely identify the optical tag.
  • The controller in the optical tag can drive the light source to transmit the identification information outward, and a user can use an optical tag recognition device to continuously capture images of the optical tag to obtain the transmitted identification information, based on which corresponding services can be accessed, for example, visiting a web page associated with the identification information of the optical tag, or obtaining other information associated with the identification information (for example, the location information of the optical tag corresponding to that identification information).
  • Fig. 2 shows an exemplary optical label 100, which includes three light sources (respectively a first light source 101, a second light source 102, and a third light source 103).
  • the optical label 100 also includes a controller (not shown in FIG. 2) for selecting a corresponding driving mode for each light source according to the information to be transmitted.
  • For example, the controller can use driving signals of different frequencies to turn the light source on and off, so that, when a rolling-shutter imaging device (such as a CMOS imaging device) photographs the optical label 100 in a low-exposure mode, stripes of different widths appear in the imaging of each light source, which can be used to convey different information.
  • Figure 3 shows an image of the optical label 100 taken by a rolling-shutter imaging device in the low-exposure mode while the optical label 100 is transmitting information: the imaging of the first light source 101 shows relatively narrow stripes, while the imaging of the second light source 102 and the third light source 103 shows relatively wide stripes.
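  • As a purely illustrative sketch (not an algorithm specified by this disclosure), one simple way to distinguish the narrow-stripe imaging of the first light source from the wide-stripe imaging of the other light sources in such a rolling-shutter image is to count intensity transitions along the rolling (row) direction; the threshold values below are assumptions:

```python
import numpy as np

def classify_stripes(region, on_threshold=128, transition_threshold=6):
    """Classify a light-source region of a low-exposure rolling-shutter image
    as showing narrow or wide stripes.

    region: 2D grayscale numpy array covering the imaging of one light source.
    on_threshold: brightness above which a row is considered "on" (assumed value).
    transition_threshold: number of on/off transitions separating the two classes
        (assumed value; in practice it depends on the driving frequencies used).
    """
    row_profile = region.mean(axis=1) > on_threshold      # one on/off value per row
    transitions = np.count_nonzero(row_profile[1:] != row_profile[:-1])
    # More transitions over the same height means narrower stripes, which
    # corresponds to a higher driving frequency of the light source.
    return "narrow" if transitions > transition_threshold else "wide"
```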
  • the optical label may additionally include one or more positioning marks located near the light source for transmitting information.
  • the positioning marks may be, for example, lights of a specific shape or color, and the lights may, for example, remain on during operation.
  • The optical tag recognition device may be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, a smart helmet, a smart watch, or a car), or a machine that can move autonomously (for example, a drone, a driverless car, or a robot).
  • The optical tag recognition device can obtain multiple images containing the optical tag by continuously capturing images of the optical tag with its camera, and can analyze the imaging of the optical tag (or of each light source in the optical tag) in each image to identify the information transmitted by the optical tag.
  • the identification information (ID) of the optical tag and any other information can be stored in the server.
  • The other information may be, for example, service information related to the optical tag, or description or attribute information related to the optical tag, such as its location information, physical size information, physical shape information, orientation information, and so on.
  • the optical label may also have unified or default physical size information and physical shape information.
  • the device can use the identified identification information of the optical label to query the server to obtain other information related to the optical label.
  • the server may be a software program running on a computing device, a computing device, or a cluster composed of multiple computing devices.
  • the optical tag may be offline, that is, the optical tag does not need to communicate with the server. Of course, it can be understood that online optical tags that can communicate with the server are also feasible.
  • Figure 4 shows a method for presenting information related to a light tag according to an embodiment of the present invention.
  • The method can be executed by a device that uses two cameras (referred to as the first camera and the second camera, respectively), wherein the first camera operates in the optical tag recognition mode to recognize the optical tag, and the second camera is used to capture an image containing the optical tag (for example, a real-scene image under normal exposure).
  • the position and posture of the first camera and the second camera may have a fixed relative relationship.
  • the first camera and the second camera may be installed on the same device (for example, a mobile phone with at least two cameras). However, either or both of the first camera and the second camera may not be installed on the device, but may be communicatively connected with the device.
  • the rotation matrix R0 and the displacement vector t0 between the first camera and the second camera (also referred to as the first camera coordinate system and the second camera coordinate system) and the internal parameter information of the two cameras can be determined in advance.
  • the rotation matrix R0 is used to represent the relative posture information between the two cameras
  • the displacement vector t0 is used to represent the relative displacement information between the two cameras.
  • the position information in the first camera coordinate system can be converted into the position information in the second camera coordinate system through a rotation operation and a displacement operation.
  • In some cases, for example when the two cameras have the same orientation, the rotation matrix R0 is the identity matrix, so no rotation operation between the two camera coordinate systems is actually required.
  • the method includes the following steps:
  • Step 401: Use the first camera to obtain a first image containing the optical label in the optical label recognition mode.
  • When the first camera works in the optical label recognition mode, it can take a first image containing the optical label (for example, the image shown in FIG. 1). By analyzing the first image, the imaging position of the optical label and the information transmitted by the optical label can be obtained.
  • The optical label recognition mode is usually different from the normal shooting mode of the camera.
  • For example, in the optical label recognition mode, the camera can be set to a predetermined low-exposure mode so that the information conveyed by the optical label can be recognized from the captured first image.
  • Interactive information corresponding to the optical label (for example, an interactive icon) may be presented at the imaging position of the optical label for the user to operate.
  • However, the first image may not be user-friendly, because it may be difficult for the user to obtain other useful information, such as information about the surroundings of the optical label, when viewing the first image with the naked eye. Therefore, directly presenting the interactive information corresponding to the optical label on an image obtained in the optical label recognition mode cannot provide a good user experience.
  • Although the optical label recognition mode is usually different from the normal shooting mode of the camera, the present invention does not exclude a solution in which the optical label recognition mode is the same as, or substantially the same as, the normal shooting mode.
  • Step 402: Obtain position information of the optical label relative to the first camera based on the first image.
  • the position information of the optical label relative to the first camera can be obtained by analyzing the imaging of the optical label in the first image, which can be expressed as the position information of the optical label in the first camera coordinate system.
  • the position information may be represented by coordinates (X1, Y1, Z1) in a coordinate system with the first camera as the origin, and may be referred to as a displacement vector t1.
  • the position information of the optical label can be represented by the position information of one point, for example, the position information of the optical label can be represented by the position information of the center point of the optical label; the position information of the optical label can also be represented by the position information of multiple points.
  • the position information of the optical label can be represented by the position information of multiple points that can define the rough outline of the optical label; the position information of the optical label can also be represented by the position information of a region; and so on.
  • the distance and direction of the optical label relative to the first camera can be determined by analyzing the imaging of the optical label in the first image, thereby determining its position information relative to the first camera.
  • The relative distance between the optical label and the first camera can be determined from the imaging size of the optical label in the first image, optionally together with other information (for example, the actual physical size of the optical label and the camera's internal parameters): the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance.
  • the device where the first camera is located may obtain the actual physical size information of the optical label from the server, or the optical label may have a default uniform physical size (which may be stored on the device).
  • the direction of the optical label relative to the first camera can be determined by analyzing the imaging position of the optical label in the first image.
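  • A minimal sketch of this estimation, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy) and an assumed known physical height of the optical label, is given below; the function name and parameters are illustrative, not part of this disclosure:

```python
import numpy as np

def estimate_tag_position(bbox, tag_height_m, fx, fy, cx, cy):
    """Estimate the optical tag's position in the first camera's coordinate system.

    bbox: (u_min, v_min, u_max, v_max) of the tag's imaging in pixels.
    tag_height_m: physical height of the tag in meters (e.g. queried from the server).
    fx, fy, cx, cy: intrinsics of the first camera.
    """
    u_c = 0.5 * (bbox[0] + bbox[2])      # center of the imaging (pixels)
    v_c = 0.5 * (bbox[1] + bbox[3])
    h_px = bbox[3] - bbox[1]             # imaging height in pixels
    # Distance: the larger the imaging, the closer the tag (pinhole model).
    Z = fy * tag_height_m / h_px
    # Direction: back-project the imaging center through the camera model.
    X = (u_c - cx) * Z / fx
    Y = (v_c - cy) * Z / fy
    return np.array([X, Y, Z])           # displacement vector t1 of the tag
```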
  • The perspective deformation of the optical label's imaging can be determined by analyzing the imaging of the optical label in the first image, thereby determining the posture information of the optical label relative to the first camera (also referred to as direction or orientation information), for example, the posture information of the optical tag in the first camera coordinate system.
  • the device where the first camera is located may obtain the actual physical shape information of the optical label from the server, or the optical label may have a default unified physical shape (which may be stored on the device).
  • the determined posture information of the optical tag relative to the first camera can be represented by a rotation matrix R1.
  • the rotation matrix is known in the imaging field, and in order not to obscure the present invention, it will not be described in detail here.
  • a coordinate system can be established based on the optical tag, and the coordinate system can be called the world coordinate system or the optical tag coordinate system.
  • Some points on the optical label can be determined as some spatial points in the world coordinate system, and the coordinates of these spatial points in the world coordinate system can be determined according to the physical size information and/or physical shape information of the optical label.
  • the device where the first camera is located can obtain the physical size information and/or physical shape information of the optical tag from the server, or the optical tag can have default unified physical size information and/or physical shape information, and the device can store the physical size information And/or physical shape information.
  • Some points on the optical label may be, for example, the corner of the housing of the optical label, the end of the light source in the optical label, some identification points in the optical label, and so on.
  • the image points corresponding to these spatial points can be found in the first image, and the position of each image point in the first image can be determined.
  • Based on the coordinates of these spatial points in the world coordinate system and the positions of the corresponding image points in the first image, combined with the internal parameters of the first camera, the position information of the optical tag in the first camera coordinate system can be calculated and represented by the displacement vector t1.
  • Similarly, the posture information of the optical tag in the first camera coordinate system can be calculated and represented by the rotation matrix R1.
  • The combination (R1, t1) of the rotation matrix R1 and the displacement vector t1 is the pose information (that is, the position and posture information) of the optical tag in the first camera coordinate system.
  • The method of calculating the rotation matrix R and the displacement vector t from the coordinates of spatial points in the world coordinate system and the positions of the corresponding image points in the image is known in the prior art; for example, the 3D-2D PnP (Perspective-n-Point) method can be used to calculate R and t.
  • The rotation matrix R and the displacement vector t actually describe how to transform the coordinates of a point between the world coordinate system and the camera coordinate system. For example, through the rotation matrix R and the displacement vector t, the coordinates of a spatial point in the world coordinate system can be converted into coordinates in the camera coordinate system, and further converted into the position of the corresponding image point in the image.
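  • Purely as an illustration of this 3D-2D PnP computation, the sketch below uses OpenCV's solvePnP; the point coordinates, intrinsics, and distortion values are placeholder assumptions, not values from this disclosure:

```python
import cv2
import numpy as np

# Coordinates of a few known points on the optical tag in the tag (world)
# coordinate system, e.g. the four housing corners of a 20 cm x 10 cm tag
# (placeholder values derived from assumed physical size/shape information).
object_points = np.array([
    [-0.10, -0.05, 0.0],
    [ 0.10, -0.05, 0.0],
    [ 0.10,  0.05, 0.0],
    [-0.10,  0.05, 0.0],
], dtype=np.float32)

# Pixel positions of the same points detected in the first image (placeholder values).
image_points = np.array([
    [612.0, 344.0],
    [705.0, 349.0],
    [702.0, 398.0],
    [610.0, 393.0],
], dtype=np.float32)

# Internal parameters of the first camera (placeholder values).
camera_matrix = np.array([[1500.0, 0.0, 960.0],
                          [0.0, 1500.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, t1 = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R1, _ = cv2.Rodrigues(rvec)  # rotation matrix R1
# (R1, t1) is the pose of the optical tag in the first camera coordinate system.
```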
  • the information conveyed by the optical label may be further obtained based on the imaging of the optical label in the first image, for example, identification information of the optical label.
  • Step 403: Obtain the position information of the optical label relative to the second camera according to its position information relative to the first camera.
  • According to the position information of the optical label relative to the first camera and the relative pose information between the first camera and the second camera (for example, the rotation matrix R0 and the displacement vector t0 between them), the position information of the optical label relative to the second camera can be obtained.
  • This position information may be, for example, the position information of the optical label in the second camera coordinate system, and can be represented by the displacement vector t2.
  • Similarly, according to the posture information of the optical label relative to the first camera and the relative posture information between the two cameras (for example, the rotation matrix R0 between the first camera and the second camera), the posture information of the optical label relative to the second camera (for example, its posture in the second camera coordinate system) can be obtained, and it can be represented by the rotation matrix R2.
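  • A minimal sketch of this conversion, assuming the calibration quantities R0 and t0 are expressed so that they map first-camera coordinates into second-camera coordinates, is:

```python
import numpy as np

def tag_pose_in_second_camera(R1, t1, R0, t0):
    """Convert the tag pose from the first camera frame to the second camera frame.

    R1, t1: posture and position of the tag in the first camera coordinate system.
    R0, t0: rotation and displacement mapping first-camera coordinates into
            second-camera coordinates (determined in advance by calibration).
    """
    t2 = R0 @ t1 + t0   # position of the tag in the second camera frame
    R2 = R0 @ R1        # posture of the tag in the second camera frame
    return R2, t2
```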
  • Step 404: Use the second camera to obtain a second image containing the light tag.
  • the second image containing the light tag obtained by the second camera may be, for example, a normal exposure real scene image, which contains information about the light tag and its surrounding environment.
  • Figure 5 shows an exemplary normally exposed image containing an optical tag, which corresponds to the low-exposure image shown in Figure 1; it shows a restaurant entrance with a rectangular optical tag above the door.
  • Although the second image can show the environmental information around the optical tag (that is, the second image is user-friendly), it is not captured in the optical tag recognition mode, so the optical tag cannot be recognized based on the second image (that is, neither the optical tag nor the information it transmits can be identified), and thus the information corresponding to the optical tag (for example, an interactive icon) cannot be presented on the second image on that basis alone.
  • The second image containing the light tag obtained by the second camera is preferably a normally exposed real-scene image, but this is not a limitation; according to actual needs, the second image may also be an image obtained by the camera in another shooting mode, such as a grayscale image.
  • Step 405: According to the position information of the optical label relative to the second camera, present information related to the optical label on the second image.
  • Based on the position information of the optical label relative to the second camera and the internal parameters of the second camera, the display position at which the optical label should appear in the second image captured by the second camera can be calculated using the imaging formula.
  • Using the imaging formula to calculate the imaging position of a point from its position relative to the camera is well known in the art and is not described in detail here to avoid obscuring the present invention. According to the calculated display position, information related to the light tag may be presented (for example, superimposed, embedded, or overlaid) at a suitable position on the second image, the suitable position preferably being the calculated display position.
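  • For illustration, under a simple pinhole model without lens distortion and with the second camera's intrinsics (fx, fy, cx, cy) assumed known, the display position can be computed as follows:

```python
import numpy as np

def project_to_second_image(t2, fx, fy, cx, cy):
    """Project the tag position t2 = (X2, Y2, Z2), expressed in the second camera
    frame, to a pixel position (u2, v2) in the second image (pinhole imaging formula).

    t2: length-3 position vector of the tag in the second camera coordinate system.
    """
    X2, Y2, Z2 = np.asarray(t2).ravel()
    u2 = fx * X2 / Z2 + cx
    v2 = fy * Y2 / Z2 + cy
    return u2, v2   # place the interactive icon at (u2, v2) on the second image
```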
  • The information related to the light tag can be various kinds of information, for example, an image of the light tag, a logo of the light tag, the identification information of the light tag, an icon associated with the light tag or its identification information, the name of a store associated with the light tag or its identification information, any other information associated with the light tag or its identification information, and various combinations thereof.
  • If the posture information of the optical tag relative to the second camera is also obtained in step 403, the information related to the optical tag may further be presented on the second image based on that posture information.
  • For example, the posture of such images, logos, icons, and the like can be set based on the posture information of the light tag in the second camera coordinate system. This is advantageous, especially when the logo or icon corresponding to the light tag is a three-dimensional virtual object.
  • Fig. 6 shows an exemplary image presented according to an embodiment of the present invention, in which a circular icon associated with the light tag is superimposed on the image shown in Fig. 5, with the display position being the actual imaging position of the optical tag.
  • the icon can have an interactive function. After clicking the icon, the user can access the information of the corresponding restaurant, and can perform operations such as reservation, queuing, and ordering. In this way, the user can not only interact with the optical tag through the image presented by the device, but also perceive the environmental information around the optical tag.
  • steps of obtaining the first image and obtaining the second image described above can be performed in any suitable order and can be performed concurrently. In addition, these steps can also be repeated as needed to continuously update the scene displayed by the camera and the display position of the light tag.
  • In another embodiment, the first camera and the second camera of the device are installed on the same plane (that is, there is no offset between the first camera coordinate system and the second camera coordinate system in the Z-axis direction) and have the same internal parameters and the same posture (that is, the two camera coordinate systems have the same orientation, the rotation matrix between them is the identity matrix, and no rotation operation between the two camera coordinate systems is needed).
  • In other words, the orientation of the two cameras is the same, and only their installation positions differ slightly (for example, by a few millimeters).
  • In this case, the imaging position of the optical tag in the images captured by the two cameras is essentially the same (especially when the optical tag is far from the cameras), with only a negligible offset.
  • Therefore, the imaging position of the optical label in the image captured by the first camera (working in the optical label recognition mode) can be used directly to determine the imaging position of the optical label in the image captured by the second camera (working in a non-recognition mode).
  • Fig. 7 shows a method for presenting information related to an optical tag according to this embodiment, which may include the following steps:
  • Step 701: Use the first camera to obtain a first image containing the optical label in the optical label recognition mode.
  • Step 702: Obtain the first imaging position of the optical label in the first image.
  • The imaging position of the light tag in an image can be represented by the position of a single point (for example, the center point of the light tag), by the positions of multiple points (for example, points that define the approximate outline of the light tag), or by the position of a region, and so on.
  • Step 703: Use the second camera to obtain a second image containing the optical tag.
  • Step 704: Present information related to the light tag at a second imaging position in the second image according to the first imaging position.
  • In one embodiment, the second imaging position may be derived from the first imaging position according to the imaging formula. Suppose the two camera coordinate systems have only an offset d in the X-axis direction, so that the displacement vector T between them is (d, 0, 0). Then, for a point with coordinates (X, Y, Z) in the first camera coordinate system, its coordinates in the second camera coordinate system are (X+d, Y, Z).
  • Under the pinhole imaging formula, the imaging position (u1, v1) of the point in the image taken by the first camera and the imaging position (u2, v2) in the image taken by the second camera can be calculated as u1 = fx·X/Z + cx, v1 = fy·Y/Z + cy and u2 = fx·(X+d)/Z + cx, v2 = fy·Y/Z + cy, where fx, fy are the focal lengths of the camera in the x and y directions and (cx, cy) are the pixel coordinates of the camera's principal point (aperture).
  • It follows that u2 = u1 + fx·d/Z and v2 = v1. Therefore, as long as the relative offset between the first camera and the second camera (including the offset direction and the offset distance, which can be determined in advance) is known, once the Z coordinate of a point in the first camera coordinate system or the second camera coordinate system is obtained, the imaging position of the point in the image captured by one camera can be derived from its imaging position in the image captured by the other camera.
  • Because the two cameras are installed on the same plane, the Z coordinate of the point is the same in the first and second camera coordinate systems and is roughly equal to the vertical distance from the point to the installation plane of the cameras. Therefore, in an embodiment, the Z coordinate of the optical label in the first camera coordinate system or the second camera coordinate system (or the vertical distance between the optical label and the installation plane of the two cameras) and the first imaging position of the optical label in the first image captured by the first camera are used to determine the second imaging position of the light tag in the second image captured by the second camera. Any method described in step 402 above can be used to obtain the Z coordinate of the optical label in the camera coordinate system or the vertical distance from the optical label to the installation plane of the cameras.
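  • A minimal sketch of this shortcut, assuming the relative offset (dx, dy) between the two same-plane cameras and the tag's Z coordinate are known (following the displacement-vector convention used above, i.e. second-camera coordinates = first-camera coordinates + (dx, dy, 0)), is:

```python
def second_imaging_position(u1, v1, fx, fy, dx, dy, Z):
    """Derive the tag's imaging position in the second image from its imaging
    position (u1, v1) in the first image, for two cameras installed on the same
    plane with identical intrinsics and posture.

    dx, dy: components of the displacement vector between the two camera
            coordinate systems along X and Y (meters).
    Z: Z coordinate of the tag in either camera coordinate system (meters).
    """
    u2 = u1 + fx * dx / Z   # the shift shrinks as Z grows
    v2 = v1 + fy * dy / Z
    return u2, v2
```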
  • In addition, considering that the optical label is usually located near the center of the screen when it is being scanned and identified, in one embodiment the distance from the optical label to the first camera or the second camera can be used as an approximation of the vertical distance from the optical label to the installation plane of the two cameras (that is, its Z coordinate), since the distance from the optical label to the camera is easier to determine.
  • For example, the distance from the optical label to the camera can be determined from the imaging size of the optical label as described above, or it can be measured by a binocular camera. In this way, the second imaging position of the optical tag in the second image captured by the second camera can be determined more conveniently or more quickly, with an acceptable error.
  • In another embodiment, the second imaging position may simply be set to be the same as the first imaging position. From the imaging formula above, the difference between the two imaging positions is u2 - u1 = fx·dx/Z and v2 - v1 = fy·dy/Z, where fx, fy are the focal lengths of the camera in the x and y directions and dx, dy are the offsets of the two camera coordinate systems in the X and Y directions.
  • Since dx and dy (typically a few millimeters) are much smaller than Z (typically a distance of a few meters to tens of meters), in applications that do not require high accuracy these differences can be considered approximately equal to 0, and the second imaging position can be set to be the same as the first imaging position. For instance, with a focal length of about 1500 pixels, an offset of 10 mm, and Z = 10 m, the resulting imaging offset is only about 1.5 pixels.
  • Directly using the imaging position of the light tag in the first image as its imaging position in the second image introduces some error, but it improves efficiency and reduces the amount of computation, so it is very advantageous in some applications where the accuracy requirements are not high. In particular, when the optical tag is recognized from a long distance (Z is large), the error caused by this approach is actually very small and will not affect the user experience.
  • The device mentioned herein may be a device carried by a user (for example, a mobile phone, a tablet computer, smart glasses, a smart helmet, or a smart watch), but it is understood that the device may also be a machine that can move autonomously, for example, a drone, an unmanned vehicle, or a robot, which is equipped with an image acquisition device such as a camera.
  • the present invention can be implemented in the form of a computer program.
  • the computer program can be stored in various storage media (for example, a hard disk, an optical disk, a flash memory, etc.), and when the computer program is executed by a processor, it can be used to implement the method of the present invention.
  • the present invention can be implemented in the form of an electronic device.
  • The electronic device includes a processor and a memory, and a computer program is stored in the memory; when the computer program is executed by the processor, it can be used to implement the method of the present invention.
  • References herein to "various embodiments", "some embodiments", "one embodiment", or "an embodiment" mean that the specific features, structures, or properties described in connection with the embodiment are included in at least one embodiment. Therefore, the appearances of the phrases "in various embodiments", "in some embodiments", "in one embodiment", or "in an embodiment" in various places throughout this document do not necessarily refer to the same embodiment.
  • Furthermore, the specific features, structures, or properties can be combined in any suitable manner in one or more embodiments. Therefore, a specific feature, structure, or property shown or described in connection with one embodiment can be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or non-functional.

Abstract

Provided are a method for presenting information related to an optical communication apparatus, and an electronic device. The method comprises: using a first camera to obtain, in an optical communication apparatus recognition mode, a first image containing an optical communication apparatus; obtaining, based on the first image, position information of the optical communication apparatus relative to the first camera; obtaining, according to the position information of the optical communication apparatus relative to the first camera, position information of the optical communication apparatus relative to a second camera; using the second camera to obtain a second image containing the optical communication apparatus; and presenting, according to the position information of the optical communication apparatus relative to the second camera and on the second image, information related to the optical communication apparatus.

Description

Method and electronic device for presenting information related to an optical communication device
Technical Field
The present invention belongs to the field of optical information technology and, in particular, relates to a method and an electronic device for presenting information related to an optical communication device.
Background
The statements in this section merely provide background information related to the present invention to help understand the present invention; this background information does not necessarily constitute prior art.
Optical communication devices are also called optical tags, and the two terms are used interchangeably herein. Optical tags can transmit information by emitting different light. They offer a long recognition distance, relaxed requirements on visible-light conditions, and strong directivity, and the information they transmit can change over time, providing large information capacity and flexible configuration. Compared with traditional two-dimensional codes, optical tags have a longer recognition distance and a stronger information-interaction capability, which can provide great convenience to users and businesses.
The optical tag recognition device may be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, a smart helmet, a smart watch, or a car), or a machine that can move autonomously (for example, a drone, a driverless car, or a robot). In many cases, in order to identify the information transmitted by the optical tag or to avoid interference from ambient light, the recognition device needs to use its camera to capture images of the optical tag in a specific optical tag recognition mode (for example, a low-exposure mode) and analyze these images with a built-in application to identify the information conveyed by the optical tag. However, an image containing an optical tag captured in such a recognition mode usually cannot reproduce the environment around the optical tag well, which is very unfavorable for the user experience and for subsequent user interaction. For example, in an image captured in a low-exposure mode, the surroundings of the optical tag are usually imaged very darkly, or even appear pitch black. When such an image is displayed on the screen of the optical tag recognition device, it degrades the user's interactive experience. Figure 1 shows an exemplary image containing an optical tag taken in a low-exposure mode; the imaging of the optical tag appears near the middle of the upper part of the figure, but because the objects around the optical tag are imaged with low brightness, it is difficult to distinguish them in an image taken in this mode. Although an image obtained in the normal shooting mode can show the environment around the optical tag, it is not captured in the optical tag recognition mode, so the optical tag (or the information it transmits) cannot be recognized from it, and consequently the interactive information corresponding to the optical tag (for example, an interactive icon) cannot be presented on that image.
Therefore, there is a need for an improved method and electronic device for presenting information related to optical tags.
Summary of the Invention
本发明的一个方面涉及一种用于呈现与光通信装置有关的信息的方法,包括:使用第一摄像头在光通信装置识别模式下获得包含光通信装置的第一图像;基于所述第一图像获得所述光通信装置相对于第一摄像头的位置信息;根据所述光通信装置相对于第一摄像头的位置信息获得所述光通信装置相对于第二摄像头的位置信息;使用第二摄像头获得包含所述光通信装置的第二图像;以及根据所述光通信装置相对于第二摄像头的位置信息在所述第二图像上呈现与所述光通信装置有关的信息。One aspect of the present invention relates to a method for presenting information related to an optical communication device, including: using a first camera to obtain a first image containing an optical communication device in an optical communication device recognition mode; based on the first image Obtain the position information of the optical communication device relative to the first camera; obtain the position information of the optical communication device relative to the second camera according to the position information of the optical communication device relative to the first camera; use the second camera to obtain A second image of the optical communication device; and presenting information related to the optical communication device on the second image according to position information of the optical communication device relative to the second camera.
可选地,其中,根据所述光通信装置相对于第一摄像头的位置信息获得所述光通信装置相对于第二摄像头的位置信息包括:根据所述光通信装置相对于第一摄像头的位置信息,使用第一摄像头与第二摄像头之间的旋转矩阵和位移向量,获得所述光通信装置相对于第二摄像头的位置信息。Optionally, wherein, obtaining the position information of the optical communication device relative to the second camera according to the position information of the optical communication device relative to the first camera includes: according to the position information of the optical communication device relative to the first camera , Using the rotation matrix and displacement vector between the first camera and the second camera to obtain the position information of the optical communication device relative to the second camera.
可选地,其中,基于所述第一图像获得所述光通信装置相对于第一摄像头的位置信息包括:基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的位置信息。Optionally, wherein obtaining the position information of the optical communication device relative to the first camera based on the first image includes: obtaining the relative position of the optical communication device based on the imaging of the optical communication device in the first image Location information of the first camera.
可选地,其中,所述基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的位置信息包括:基于所述第一图像中的光通信装置的成像大小来获得光通信装置相对于第一摄像头的距离;基于所述第一图像中的光通信装置的成像位置来获得光通信装置相对于第一摄像头的方向;以及通过光通信装置相对于第一摄像头的距离和方向来获得所述光通信装置相对于第一摄像头的位置信息。Optionally, wherein the obtaining the position information of the optical communication device relative to the first camera based on the imaging of the optical communication device in the first image includes: based on the information of the optical communication device in the first image Obtain the distance of the optical communication device relative to the first camera based on the imaging size; obtain the direction of the optical communication device relative to the first camera based on the imaging position of the optical communication device in the first image; The distance and direction of a camera are used to obtain the position information of the optical communication device relative to the first camera.
可选地,其中,所述基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的位置信息包括:根据光通信装置上的一些点在光通信装置坐标系中的坐标以及这些点在所述第一图像中的成像位置,并结合第一摄像头的内参信息,获得所述光通信装置相对于第一摄像头的位置信息。Optionally, wherein the obtaining the position information of the optical communication device relative to the first camera based on the imaging of the optical communication device in the first image includes: according to some points on the optical communication device in the optical communication device The coordinates in the coordinate system and the imaging positions of these points in the first image are combined with the internal parameter information of the first camera to obtain the position information of the optical communication device relative to the first camera.
可选地,上述方法还包括:基于所述第一图像获得所述光通信装置相对于第一摄像头的姿态信息;根据所述光通信装置相对于第一摄像头的姿态信息获得所述光通信装置相对于第二摄像头的姿态信息,以及其中,所述根据所述光通信装置相对于第二摄像头的位置信息在所述第二图像上呈现与所述光通信装置有关的信息包括:根据所述光通信装置相对于第二摄像头的位置信息和姿态信息在所述第二图像上呈现与所述光通信装置有关的信息。Optionally, the above method further includes: obtaining posture information of the optical communication device relative to the first camera based on the first image; obtaining the optical communication device according to posture information of the optical communication device relative to the first camera The posture information relative to the second camera, and wherein the presenting the information related to the optical communication device on the second image according to the position information of the optical communication device relative to the second camera includes: according to the The position information and posture information of the optical communication device relative to the second camera present information related to the optical communication device on the second image.
可选地,其中,基于所述第一图像获得所述光通信装置相对于第一摄像头的姿态信息包括:基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的姿态信息。Optionally, wherein, obtaining the posture information of the optical communication device relative to the first camera based on the first image includes: obtaining the relative position of the optical communication device based on imaging of the optical communication device in the first image The posture information of the first camera.
可选地,其中,所述基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的姿态信息包括:通过确定所述第一图像中的光通信装置的成像的透视变形,来获得所述光通信装置相对于第一摄像头的姿态信息。Optionally, wherein the obtaining the posture information of the optical communication device relative to the first camera based on the imaging of the optical communication device in the first image includes: determining the optical communication device in the first image The perspective deformation of the imaging to obtain the posture information of the optical communication device relative to the first camera.
可选地,其中,所述基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的姿态信息包括:根据光通信装置上的一些点在光通信装置坐标系中的坐标以及这些点在所述第一图像中的成像位置,获得所述光通信装置相对于第一摄像头的姿态信息。Optionally, wherein the obtaining the posture information of the optical communication device relative to the first camera based on the imaging of the optical communication device in the first image includes: according to some points on the optical communication device The coordinates in the coordinate system and the imaging positions of these points in the first image obtain posture information of the optical communication device relative to the first camera.
可选地,其中,根据光通信装置的物理尺寸信息和/或物理形状信息来确定光通信装置上的一些点在光通信装置坐标系中的坐标。Optionally, wherein the coordinates of some points on the optical communication device in the optical communication device coordinate system are determined according to the physical size information and/or physical shape information of the optical communication device.
Optionally, presenting the information related to the optical communication device on the second image according to the position information of the optical communication device relative to the second camera includes: determining, according to the position information of the optical communication device relative to the second camera, the imaging position in the second image corresponding to that position information, and presenting the information related to the optical communication device at that imaging position in the second image.
可选地,其中,所述第二图像为正常曝光的实景图像。Optionally, wherein the second image is a real scene image with normal exposure.
可选地,上述方法还包括:基于所述第一图像获得所述光通信装置的标识信息。Optionally, the above method further includes: obtaining identification information of the optical communication device based on the first image.
Another aspect of the present invention relates to a method for presenting information related to an optical communication device, including: using a first camera to obtain a first image containing an optical communication device in an optical communication device recognition mode; obtaining a first imaging position of the optical communication device in the first image; using a second camera to obtain a second image containing the optical communication device; and presenting information related to the optical communication device at a second imaging position in the second image according to the first imaging position, wherein the first camera and the second camera are installed on the same plane and have the same posture and internal parameters.
可选地,其中,所述第二成像位置与所述第一成像位置相同。Optionally, wherein the second imaging position is the same as the first imaging position.
Optionally, the second imaging position is determined according to the relative offset between the first camera and the second camera, the first imaging position, and either the Z coordinate of the optical communication device in the first camera coordinate system or the second camera coordinate system or the vertical distance from the optical communication device to the installation plane of the first camera and the second camera.
Optionally, the second imaging position is determined according to the relative offset between the first camera and the second camera, the first imaging position, and the distance from the optical communication device to the first camera or the second camera.
本发明的另一个方面涉及一种计算机可读存储介质,其上存储有计算机程序,当所述计算机程序被处理器执行时,能够用于实现上述的方法。Another aspect of the present invention relates to a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, it can be used to implement the above method.
本发明的再一个方面涉及一种电子设备,其中包括处理器和存储器,在存储器中存储有计算机程序,当该计算机程序被处理器执行时,能够用于实现上述的方法。Another aspect of the present invention relates to an electronic device, which includes a processor and a memory, and a computer program is stored in the memory. When the computer program is executed by the processor, it can be used to implement the above method.
The solution of the present invention provides a method for presenting information related to an optical communication device. With this method, information related to an optical label can be presented on an image taken in a non-optical-label recognition mode (for example, a real-scene image taken in a normal mode). The user can therefore not only interact with the optical label through the image presented by the device, but also perceive the environmental information around the optical label, which improves the interaction efficiency and the interaction experience.
附图说明Description of the drawings
以下参照附图对本发明实施例作进一步说明,其中:The following further describes the embodiments of the present invention with reference to the drawings, in which:
图1示出了一个示例性的在低曝光模式下拍摄的包含光标签的图像;Figure 1 shows an exemplary image containing a light tag taken in a low exposure mode;
图2示出了一种示例性的光标签;Figure 2 shows an exemplary optical label;
图3示出了由滚动快门成像设备在低曝光模式下拍摄的光标签的一张图像;Figure 3 shows an image of a light label taken by a rolling shutter imaging device in a low exposure mode;
图4示出了根据本发明的一个实施例的用于呈现与光标签有关的信息的方法;Figure 4 shows a method for presenting information related to optical tags according to an embodiment of the present invention;
图5示出了一个示例性的包含光标签的正常曝光的图像;Figure 5 shows an exemplary normally exposed image containing a light label;
图6示出了一个示例性的根据本发明的实施例呈现的图像;以及Fig. 6 shows an exemplary image presented according to an embodiment of the present invention; and
图7示出了根据本发明的另一个实施例的用于呈现与光标签有关的信息的方法。Fig. 7 shows a method for presenting information related to an optical tag according to another embodiment of the present invention.
具体实施方式detailed description
为了使本发明的目的、技术方案及优点更加清楚明白,以下结合附图通过具体实施例对本发明进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。In order to make the objectives, technical solutions, and advantages of the present invention clearer, the following further describes the present invention in detail through specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the present invention, but not to limit the present invention.
The optical label usually includes a controller and at least one light source, and the controller can drive the light source in different driving modes to transmit different information outward. In order to provide corresponding services to users and merchants based on optical labels, each optical label can be assigned identification information (ID), which is used by the manufacturer, manager, or user of the optical label to uniquely identify that optical label. Generally, the controller in the optical label drives the light source to transmit this identification information outward, and a user can use an optical label recognition device to continuously capture images of the optical label to obtain the identification information it transmits, and can then access a corresponding service based on that identification information, for example, visit a web page associated with the identification information of the optical label, obtain other information associated with the identification information (for example, location information of the optical label corresponding to the identification information), and so on.
Fig. 2 shows an exemplary optical label 100, which includes three light sources (a first light source 101, a second light source 102, and a third light source 103). The optical label 100 also includes a controller (not shown in Fig. 2) for selecting a corresponding driving mode for each light source according to the information to be transmitted. For example, in different driving modes, the controller can use driving signals with different frequencies to turn the light source on and off, so that when a rolling-shutter imaging device (such as a CMOS imaging device) photographs the optical label 100 in a low-exposure mode, the image of each light source shows different stripes. Fig. 3 shows an image of the optical label 100 taken by a rolling-shutter imaging device in the low-exposure mode while the optical label 100 is transmitting information, in which the image of the first light source 101 shows relatively narrow stripes and the images of the second light source 102 and the third light source 103 show relatively wide stripes. By analyzing the imaging of the light sources in the optical label 100, the current driving mode of each light source can be determined, and thus the information currently transmitted by the optical label 100 can be decoded.
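As an illustration of the stripe-based decoding just described, the following sketch classifies the rolling-shutter image strip of one light source as showing narrow or wide stripes from the average run length of bright rows. This is only a minimal example under assumed conditions; the brightness threshold, the 8-row cutoff, and the function name are illustrative and are not part of the original disclosure.

```python
import numpy as np

def classify_stripes(strip: np.ndarray, threshold: int = 128) -> str:
    """Rough sketch: classify a light source's rolling-shutter strip
    (a 2D grayscale crop, rows = readout direction) as 'narrow' or
    'wide' stripes from the average run length of bright rows."""
    bright = (strip.mean(axis=1) > threshold).astype(int)  # one 0/1 value per row
    runs = np.count_nonzero(np.diff(bright)) + 1            # number of bright/dark runs
    avg_run = len(bright) / max(runs, 1)                    # average stripe height in rows
    return "narrow" if avg_run < 8 else "wide"              # cutoff is illustrative
```

Mapping the detected stripe widths of each light source back to a driving mode (and hence to data bits) would then follow whatever coding convention the label uses.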
光标签中还可以另外包括位于用于传递信息的光源附近的一个或多个定位标识,该定位标识例如可以是特定形状或颜色的灯,该灯例如可以在工作时保持常亮。The optical label may additionally include one or more positioning marks located near the light source for transmitting information. The positioning marks may be, for example, lights of a specific shape or color, and the lights may, for example, remain on during operation.
The optical label recognition device may be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, a smart helmet, a smart watch, a car, and so on), or a machine capable of moving autonomously (for example, a drone, a driverless car, a robot, and so on). The optical label recognition device can continuously capture images of the optical label through its camera to obtain multiple images containing the optical label, and identify the information transmitted by the optical label by analyzing the imaging of the optical label (or of each light source in the optical label) in each image.
The identification information (ID) of the optical label and any other information can be stored in a server. The other information is, for example, service information related to the optical label, or description information or attribute information related to the optical label, such as its location information, physical size information, physical shape information, orientation information, and so on. Optical labels may also have unified or default physical size information and physical shape information. A device can use the identified identification information of an optical label to query the server for other information related to that optical label. The server may be a software program running on a computing device, a computing device, or a cluster composed of multiple computing devices. The optical label may be offline, that is, the optical label does not need to communicate with the server. Of course, it can be understood that online optical labels capable of communicating with the server are also feasible.
Fig. 4 shows a method for presenting information related to an optical label according to an embodiment of the present invention. The method can be executed by a device that uses two cameras (referred to as the first camera and the second camera, respectively), where the first camera operates in the optical label recognition mode to recognize the optical label, and the second camera is used to capture an image containing the optical label (for example, a real-scene image under normal exposure). The position and posture of the first camera and the second camera may have a fixed relative relationship. In one embodiment, the first camera and the second camera may be installed on the same device (for example, a mobile phone with at least two cameras). However, either or both of the first camera and the second camera may instead not be installed on the device but be communicatively connected with it. The rotation matrix R0 and the displacement vector t0 between the first camera and the second camera (that is, between the first camera coordinate system and the second camera coordinate system), as well as the internal parameter information of the two cameras, can be determined in advance. The rotation matrix R0 represents the relative posture information between the two cameras, and the displacement vector t0 represents the relative displacement information between the two cameras. By using the rotation matrix R0 and the displacement vector t0, position information in the first camera coordinate system can be converted into position information in the second camera coordinate system through a rotation operation and a displacement operation. In some devices with two cameras (for example, a mobile phone with two cameras), the postures of the two cameras are the same; in this case, the rotation matrix R0 is the identity matrix, so the rotation operation between the two camera coordinate systems can in fact be omitted. The method includes the following steps:
步骤401:使用第一摄像头在光标签识别模式下获得包含光标签的第一图像。Step 401: Use the first camera to obtain a first image containing the optical label in the optical label recognition mode.
当第一摄像头在光标签识别模式下工作时,其可以拍摄包含光标签的第一图像(例如,图1所示的图像)。通过分析该第一图像,可以获得光标签的成像位置以及光标签传递的信息。光标签识别模式通常与摄像头的正常拍摄模式不同,例如,在光标签识别模式下,摄像头可以被设置到预定的低曝光模式,以便能够从拍摄的第一图像中识别出光标签传递的信息。在识别出了光标签传递的信息之后,可以在光标签的成像位置处呈现与该光标签对应的交互信息(例如,交互图标),供用户操作。但是,该第一图像可能并不是用户友好的,因为用户通过肉眼观察该第一图像时可能难以获得有用的其他信息,例如光标签周围环境的信息。因此,直接在使用光标签识别模式获得的图像上呈现与光标签对应的交互信息不能提供良好的用户体验。When the first camera works in the optical tag recognition mode, it can take a first image containing the optical tag (for example, the image shown in FIG. 1). By analyzing the first image, the imaging position of the optical tag and the information transmitted by the optical tag can be obtained. The optical label recognition mode is usually different from the normal shooting mode of the camera. For example, in the optical label recognition mode, the camera can be set to a predetermined low exposure mode so that the information conveyed by the optical label can be recognized from the first image taken. After the information conveyed by the optical tag is recognized, interactive information (for example, an interactive icon) corresponding to the optical tag may be presented at the imaging position of the optical tag for the user to operate. However, the first image may not be user-friendly, because it may be difficult for the user to obtain useful other information, such as information about the surrounding environment of the light tag, when observing the first image with naked eyes. Therefore, directly presenting the interactive information corresponding to the light tag on the image obtained by using the light tag recognition mode cannot provide a good user experience.
另外,需要说明的是,虽然光标签识别模式通常与摄像头的正常拍摄模式不同,但是本发明并不排除光标签识别模式与正常拍摄模式相同或基本相同的方案。In addition, it should be noted that although the optical label recognition mode is usually different from the normal shooting mode of the camera, the present invention does not exclude a solution in which the optical label recognition mode is the same or substantially the same as the normal shooting mode.
步骤402:基于第一图像获得光标签相对于第一摄像头的位置信息。Step 402: Obtain position information of the optical label relative to the first camera based on the first image.
The position information of the optical label relative to the first camera can be obtained by analyzing the imaging of the optical label in the first image, and it can be expressed as the position information of the optical label in the first camera coordinate system. For example, the position information may be represented by coordinates (X1, Y1, Z1) in a coordinate system with the first camera as the origin, and may be referred to as a displacement vector t1. The position information of the optical label can be represented by the position information of one point; for example, the position information of the center point of the optical label may represent the position information of the optical label. The position information of the optical label can also be represented by the position information of multiple points; for example, the position information of multiple points that define the rough outline of the optical label may represent the position information of the optical label. The position information of the optical label can also be represented by the position information of a region; and so on.
In an embodiment, the distance and direction of the optical label relative to the first camera can be determined by analyzing the imaging of the optical label in the first image, thereby determining its position information relative to the first camera. For example, the relative distance between the optical label and the first camera can be determined from the imaging size of the optical label in the first image, optionally together with other information (for example, the actual physical size information of the optical label and the camera's internal parameters): the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance. The device where the first camera is located may obtain the actual physical size information of the optical label from the server, or the optical label may have a default, unified physical size (which may be stored on the device). The direction of the optical label relative to the first camera can be determined by analyzing the imaging position of the optical label in the first image.
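A minimal sketch of the size-based distance estimate just described, under the pinhole-camera assumption; the function and variable names and the numeric values are illustrative assumptions, not values from the original disclosure.

```python
def estimate_distance(focal_length_px: float,
                      physical_height_m: float,
                      imaged_height_px: float) -> float:
    """Rough pinhole-model estimate: the imaged size of the optical label
    shrinks in proportion to its distance from the camera."""
    return focal_length_px * physical_height_m / imaged_height_px

# Example: a 0.10 m tall label imaged 40 px tall with a focal length of
# about 1200 px gives a distance of roughly 3 m.
print(estimate_distance(1200.0, 0.10, 40.0))  # -> 3.0
```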
Additionally or alternatively, the perspective distortion of the imaging of the optical label in the first image can be further analyzed to determine the posture information of the optical label relative to the first camera (which may also be called direction information or orientation information), for example, the posture information of the optical label in the first camera coordinate system. The device where the first camera is located may obtain the actual physical shape information of the optical label from the server, or the optical label may have a default, unified physical shape (which may be stored on the device). The determined posture information of the optical label relative to the first camera can be represented by a rotation matrix R1. Rotation matrices are well known in the imaging field and, in order not to obscure the present invention, are not described in detail here.
在一个实施例中,可以根据光标签建立一个坐标系,该坐标系可以被称为世界坐标系或光标签坐标系。可以将光标签上的一些点确定为在该世界坐标系中的一些空间点,并且可以根据光标签的物理尺寸信息和/或物理形状信息来确定这些空间点在该世界坐标系中的坐标。第一摄像头所在的设备可以从服务器获得光标签的物理尺寸信息和/或物理形状信息,或者光标签可以具有默认的统一的物理尺寸信息和/或物理形状信息,并且设备可 以存储该物理尺寸信息和/或物理形状信息。光标签上的一些点例如可以是光标签的外壳的角、光标签中的光源的端部、光标签中的一些标识点、等等。根据光标签的物体结构特征或几何结构特征,可以在第一图像中找到与这些空间点分别对应的像点,并确定各个像点在第一图像中的位置。根据各个空间点在世界坐标系中的坐标以及对应的各个像点在第一图像中的位置,结合第一摄像头的内参信息,可以计算得到光标签在第一摄像头坐标系中的位置信息,其可以用位移向量t1来表示。另外地或者可选地,根据各个空间点在世界坐标系中的坐标以及对应的各个像点在第一图像中的位置,还可以计算得到光标签在第一摄像头坐标系中的姿态信息,其可以用旋转矩阵R1来表示。旋转矩阵R1与位移向量t1的组合(R1,t1)即为光标签在第一摄像头坐标系中的位姿信息(也即,位置和姿态信息)。根据各个空间点在世界坐标系中的坐标以及对应的各个像点在图像中的位置来计算旋转矩阵R和位移向量t的方法在现有技术中是已知的,例如,可以利用3D-2D的PnP(Perspective-n-Point)方法来计算R、t,为了不模糊本发明,在此不再详细介绍。旋转矩阵R和位移向量t实际上可以描述如何将某个点的坐标在世界坐标系和摄像头坐标系之间转换。例如,通过旋转矩阵R和位移向量t,可以将某个空间点在世界坐标系中的坐标转换为在摄像头坐标系中的坐标,并可以进一步转换为图像中的像点的位置。In an embodiment, a coordinate system can be established based on the optical tag, and the coordinate system can be called the world coordinate system or the optical tag coordinate system. Some points on the optical label can be determined as some spatial points in the world coordinate system, and the coordinates of these spatial points in the world coordinate system can be determined according to the physical size information and/or physical shape information of the optical label. The device where the first camera is located can obtain the physical size information and/or physical shape information of the optical tag from the server, or the optical tag can have default unified physical size information and/or physical shape information, and the device can store the physical size information And/or physical shape information. Some points on the optical label may be, for example, the corner of the housing of the optical label, the end of the light source in the optical label, some identification points in the optical label, and so on. According to the object structure feature or geometric structure feature of the optical tag, the image points corresponding to these spatial points can be found in the first image, and the position of each image point in the first image can be determined. According to the coordinates of each spatial point in the world coordinate system and the position of each corresponding image point in the first image, combined with the internal parameter information of the first camera, the position information of the optical tag in the first camera coordinate system can be calculated. It can be represented by the displacement vector t1. Additionally or alternatively, according to the coordinates of each spatial point in the world coordinate system and the position of each corresponding image point in the first image, the posture information of the optical tag in the first camera coordinate system can also be calculated. It can be represented by the rotation matrix R1. The combination (R1, t1) of the rotation matrix R1 and the displacement vector t1 is the pose information (that is, position and posture information) of the optical tag in the first camera coordinate system. The method of calculating the rotation matrix R and the displacement vector t according to the coordinates of each spatial point in the world coordinate system and the position of the corresponding image point in the image is known in the prior art. For example, 3D-2D can be used The PnP (Perspective-n-Point) method is used to calculate R and t. In order not to obscure the present invention, the detailed description is omitted here. 
The rotation matrix R and the displacement vector t can actually describe how to transform the coordinates of a certain point between the world coordinate system and the camera coordinate system. For example, through the rotation matrix R and the displacement vector t, the coordinates of a certain space point in the world coordinate system can be converted to the coordinates in the camera coordinate system, and can be further converted to the position of the image point in the image.
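As an illustration of the 3D-2D pose estimation described above, the following sketch uses OpenCV's solvePnP to recover (R1, t1) from point correspondences; the label dimensions, the detected corner pixels, and the intrinsic values are illustrative assumptions, not values from the original disclosure.

```python
import numpy as np
import cv2

# Corners of the optical label in its own (world / label) coordinate system,
# derived from its known physical size; here an assumed 10 cm x 5 cm label.
object_points = np.array([[0.00, 0.00, 0.0],
                          [0.10, 0.00, 0.0],
                          [0.10, 0.05, 0.0],
                          [0.00, 0.05, 0.0]], dtype=np.float64)

# The corresponding image points detected in the first (low-exposure) image.
image_points = np.array([[612., 340.],
                         [708., 338.],
                         [709., 386.],
                         [613., 388.]], dtype=np.float64)

# Internal parameters of the first camera (fx, fy, cx, cy); illustrative values.
K1 = np.array([[1200.,    0., 640.],
               [   0., 1200., 360.],
               [   0.,    0.,   1.]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, t1 = cv2.solvePnP(object_points, image_points, K1, dist_coeffs)
R1, _ = cv2.Rodrigues(rvec)  # rotation matrix R1 and translation t1:
                             # the pose of the label in the first camera frame
```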
在一个实施例中,可以进一步基于第一图像中光标签的成像来获得光标签传递的信息,例如光标签的标识信息。In an embodiment, the information conveyed by the optical label may be further obtained based on the imaging of the optical label in the first image, for example, identification information of the optical label.
步骤403:根据光标签相对于第一摄像头的位置信息获得光标签相对于第二摄像头的位置信息。Step 403: Obtain the position information of the optical label relative to the second camera according to the position information of the optical label relative to the first camera.
After the position information of the optical label relative to the first camera (for example, the position information represented by the displacement vector t1) has been obtained, the position information of the optical label relative to the second camera can be obtained from it using the relative pose information between the first camera and the second camera (for example, the rotation matrix R0 and the displacement vector t0 between the first camera and the second camera). This position information may be, for example, the position information of the optical label in the second camera coordinate system, and may be represented by a displacement vector t2.
Additionally or alternatively, if the posture information of the optical label relative to the first camera (for example, the posture information represented by the rotation matrix R1) was also obtained in step 402, the posture information of the optical label relative to the second camera can be obtained from it using the relative posture information between the first camera and the second camera (for example, the rotation matrix R0 between the first camera and the second camera). This may be, for example, the posture information of the optical label in the second camera coordinate system, and may be represented by a rotation matrix R2.
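A minimal sketch of step 403 under the conventions above, where R0 and t0 map coordinates from the first camera frame into the second camera frame; the function name is illustrative.

```python
import numpy as np

def to_second_camera(R0: np.ndarray, t0: np.ndarray,
                     R1: np.ndarray, t1: np.ndarray):
    """Map the label pose from the first camera frame into the second.

    R0, t0: pre-calibrated rotation/translation from the first camera
            coordinate system to the second camera coordinate system.
    R1, t1: label pose in the first camera frame (e.g. from solvePnP).
    """
    R2 = R0 @ R1                                    # posture in the second camera frame
    t2 = R0 @ t1.reshape(3, 1) + t0.reshape(3, 1)   # position in the second camera frame
    return R2, t2

# For phones whose two cameras share the same orientation, R0 is the
# identity matrix and only the translation t0 matters.
```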
步骤404:使用第二摄像头获得包含光标签的第二图像。Step 404: Use the second camera to obtain a second image containing the light tag.
The second image containing the optical label obtained by the second camera may be, for example, a normally exposed real-scene image, which contains the optical label as well as information about its surrounding environment. Fig. 5 shows an exemplary normally exposed image containing an optical label, corresponding to the low-exposure image shown in Fig. 1; it shows the door of a restaurant and a rectangular optical label above the door. Although this second image can show the environmental information around the optical label (that is, the second image is user-friendly), it was not captured in the optical label recognition mode, so the optical label cannot be recognized from it (that is, neither the optical label nor the information it transmits can be identified from the second image), and therefore the information corresponding to the optical label (for example, an interactive icon) cannot be presented on the second image on that basis alone.
The second image containing the optical label obtained by the second camera is preferably a normally exposed real-scene image, but this is not a limitation; according to actual needs, the second image may also be an image obtained by the camera in another shooting mode, such as a grayscale image.
步骤405:根据光标签相对于第二摄像头的位置信息,在所述第二图像上呈现与所述光标签有关的信息。Step 405: According to the position information of the optical label relative to the second camera, present information related to the optical label on the second image.
In an embodiment, after the position information of the optical label relative to the second camera has been obtained, the display position that the optical label should occupy in the second image captured by the second camera can be calculated with the imaging formula, based on that position information combined with the internal parameter information of the second camera. Calculating the imaging position of a point from its position information relative to the camera using the imaging formula is well known in the art and is not described in detail here to avoid obscuring the present invention. According to the calculated display position, information related to the optical label can be presented (for example, superimposed, embedded, overlaid, and so on) at a suitable position on the second image, and this suitable position is preferably the calculated display position. The information related to the optical label can be various kinds of information, for example, an image of the optical label, a logo of the optical label, the identification information of the optical label, an icon associated with the optical label or its identification information, a store name associated with the optical label or its identification information, any other information associated with the optical label or its identification information, and various combinations thereof.
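A minimal sketch of the projection used in step 405, assuming a standard pinhole intrinsic matrix K2 for the second camera; the function name is illustrative.

```python
import numpy as np

def project_to_image(K2: np.ndarray, t2: np.ndarray):
    """Project the label position t2 = (X, Y, Z), given in the second camera
    frame, to pixel coordinates in the second image with the imaging formula."""
    X, Y, Z = t2.flatten()
    u = K2[0, 0] * X / Z + K2[0, 2]   # u = fx * X / Z + cx
    v = K2[1, 1] * Y / Z + K2[1, 2]   # v = fy * Y / Z + cy
    return u, v

# The icon or other information associated with the label can then be drawn
# at (u, v) on the normally exposed second image.
```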
Additionally or alternatively, if the posture information of the optical label relative to the second camera was also obtained in step 403, the information related to the optical label can be presented on the second image further based on that posture information. For example, when presenting the image, logo, icon, etc. of the optical label, the posture of these images, logos, icons, etc. can be set based on the posture information of the optical label in the second camera coordinate system. This is advantageous, especially when the logo or icon corresponding to the optical label is a three-dimensional virtual object.
Fig. 6 shows an exemplary image presented according to an embodiment of the present invention, in which a circular icon associated with the optical label is superimposed on the image shown in Fig. 5, with the icon displayed at the actual imaging position of the optical label. The icon can have an interactive function: after tapping the icon, the user can access the information of the corresponding restaurant and can perform operations such as reservation, queuing, and ordering. In this way, the user can not only interact with the optical label through the image presented by the device, but also perceive the environmental information around the optical label.
可以理解,上文所述的获得第一图像、获得第二图像等步骤可以以任何合适的顺序执行,并且可以并发执行。另外,这些步骤也可以根据需要重复执行,以不断更新摄像头所显示的场景和光标签的显示位置。It can be understood that the steps of obtaining the first image and obtaining the second image described above can be performed in any suitable order and can be performed concurrently. In addition, these steps can also be repeated as needed to continuously update the scene displayed by the camera and the display position of the light tag.
According to an embodiment of the present invention, the first camera and the second camera of the device are installed on the same plane (that is, there is no offset between the first camera coordinate system and the second camera coordinate system in the Z-axis direction) and have the same internal parameters and the same posture (that is, the first camera coordinate system and the second camera coordinate system have the same orientation, the rotation matrix between them is the identity matrix, and no rotation operation between the two camera coordinate systems is needed); for example, a mobile phone equipped with two cameras that have the same internal parameters and the same orientation, with only a small difference in installation position (for example, an offset of a few millimeters between the two installation positions). In this case, the imaging positions of the optical label in the images captured by the two cameras are basically the same (especially when the optical label is relatively far from the cameras), with only an insignificant offset. For this situation, in an embodiment of the present invention, the imaging position of the optical label in the image captured by the second camera (working in a non-optical-label recognition mode) can be determined directly from its imaging position in the image captured by the first camera (working in the optical label recognition mode). Fig. 7 shows a method for presenting information related to an optical label according to this embodiment, which may include the following steps:
步骤701:使用第一摄像头在光标签识别模式下获得包含光标签的第一图像。Step 701: Use the first camera to obtain a first image containing the optical label in the optical label recognition mode.
步骤702:获得光标签在第一图像中的第一成像位置。Step 702: Obtain the first imaging position of the optical label in the first image.
The imaging position of the optical label in an image can be represented by the position information of one point; for example, the position information of the center point of the optical label may represent the imaging position of the optical label in the image. The imaging position of the optical label in the image can also be represented by the position information of multiple points; for example, the position information of multiple points that define the rough outline of the optical label may represent the imaging position of the optical label in the image. The imaging position of the optical label in the image can also be represented by the position information of a region; and so on.
步骤703:使用第二摄像头获得包含所述光标签的第二图像。Step 703: Use the second camera to obtain a second image containing the optical tag.
步骤704:根据所述第一成像位置在所述第二图像中的第二成像位置处呈现与所述光标签有关的信息。Step 704: Present information related to the light tag at a second imaging position in the second image according to the first imaging position.
在一个实施例中,在步骤704中,可以根据成像公式来基于第一成像位置推导第二成像位置。In one embodiment, in step 704, the second imaging position may be derived based on the first imaging position according to the imaging formula.
具体地,对于安装于同一平面的具有相同内参和相同姿态的两个摄像头,假设两个摄像头坐标系仅仅在X轴方向上有偏移d,如此,两个摄像头坐标系之间的位移向量T便为(d,0,0)。对于第一摄像头坐标系中的点(X,Y,Z),其在第二摄像头坐标系中的坐标为(X+d,Y,Z)。根据成像公式,该点在第一摄像头拍摄的图像中的成像位置(u1,v1)和在第二摄像头拍摄的图像中的成像位置(u2,v2)可以计算为:Specifically, for two cameras with the same internal parameters and the same posture installed on the same plane, it is assumed that the two camera coordinate systems only have an offset d in the X-axis direction. Thus, the displacement vector T between the two camera coordinate systems It is (d, 0, 0). For a point (X, Y, Z) in the coordinate system of the first camera, its coordinates in the coordinate system of the second camera are (X+d, Y, Z). According to the imaging formula, the imaging position (u1, v1) of the point in the image taken by the first camera and the imaging position (u2, v2) in the image taken by the second camera can be calculated as:
$$u_1 = f_x \frac{X}{Z} + c_x, \qquad v_1 = f_y \frac{Y}{Z} + c_y$$

$$u_2 = f_x \frac{X + d}{Z} + c_x, \qquad v_2 = f_y \frac{Y}{Z} + c_y$$

Combining the above equations gives:

$$u_2 - u_1 = \frac{f_x d}{Z}, \qquad v_2 = v_1$$

where fx, fy are the focal lengths of the camera in the x and y directions, and (cx, cy) are the coordinates of the camera pinhole (the principal point).

It can be seen from the above that, for the same point in space, when the two camera coordinate systems are offset by d only in the X-axis direction (for example, when the two cameras are arranged horizontally, this offset d can be measured in advance), their imaging positions (u1, v1) and (u2, v2) satisfy $u_2 = u_1 + \frac{f_x d}{Z}$ and $v_2 = v_1$.

Similarly, for the same point in space, when the two camera coordinate systems are offset by d only in the Y-axis direction (for example, when the two cameras are arranged vertically, this offset d can be measured in advance), their imaging positions (u1, v1) and (u2, v2) satisfy $v_2 = v_1 + \frac{f_y d}{Z}$ and $u_2 = u_1$.

Correspondingly, for the same point in space, if the two camera coordinate systems are offset by dx and dy in the X-axis and Y-axis directions respectively (which can be measured in advance), their imaging positions (u1, v1) and (u2, v2) satisfy $u_2 = u_1 + \frac{f_x d_x}{Z}$ and $v_2 = v_1 + \frac{f_y d_y}{Z}$.
For a device on which the first camera and the second camera are installed, fx and fy are parameter values inherent to the cameras. Therefore, as long as the relative offset between the first camera and the second camera (including the offset direction and the offset distance) is known in advance, once the Z coordinate of a point in the first camera coordinate system or the second camera coordinate system has been obtained, the imaging position of that point in the image captured by one camera can be derived from its imaging position in the image captured by the other camera. Since the first camera and the second camera are installed on the same plane (that is, there is no offset between the first camera coordinate system and the second camera coordinate system in the Z-axis direction), the Z coordinate of a point is the same in the first camera coordinate system and the second camera coordinate system, and is roughly equal to the vertical distance from the point to the installation plane of the cameras. Therefore, in an embodiment, the second imaging position of the optical label in the second image captured by the second camera can be determined from the Z coordinate of the optical label in the first camera coordinate system or the second camera coordinate system (or the vertical distance from the optical label to the installation plane of the two cameras) and the first imaging position of the optical label in the first image captured by the first camera. Any of the methods described in step 402 above can be used to obtain the Z coordinate of the optical label in a camera coordinate system or the vertical distance from the optical label to the installation plane of the cameras.
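A minimal sketch of the offset computation described above, for two cameras mounted on the same plane with identical intrinsics and posture; the numeric values in the usage example are illustrative assumptions.

```python
def second_imaging_position(u1: float, v1: float,
                            fx: float, fy: float,
                            dx: float, dy: float,
                            z: float):
    """Shift an imaging position from the first camera's image to the second
    camera's image for two coplanar cameras with identical intrinsics and
    orientation, offset by (dx, dy), observing a point at depth z."""
    u2 = u1 + fx * dx / z
    v2 = v1 + fy * dy / z
    return u2, v2

# Example: fx = 1200 px, a 10 mm horizontal camera offset, and a label 5 m
# away shift the image point by only 1200 * 0.01 / 5 = 2.4 px.
print(second_imaging_position(640.0, 360.0, 1200.0, 1200.0, 0.01, 0.0, 5.0))
```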
In some applications, when an optical label is scanned and recognized, it is usually located near the center of the screen. In this case, the distance from the optical label to the first camera or the second camera can be used as an approximation of the vertical distance from the optical label to the installation plane of the two cameras. The distance from the optical label to a camera is easier to measure; for example, it can be determined from the imaging size of the optical label as described above, or it can be measured with a binocular camera. In this way, the second imaging position of the optical label in the second image captured by the second camera can be determined more conveniently or more quickly, with an acceptable error.
In another embodiment, in step 704, the second imaging position can be set to be the same as the first imaging position. It was derived above that $u_2 - u_1 = \frac{f_x d_x}{Z}$ and $v_2 - v_1 = \frac{f_y d_y}{Z}$, where fx and fy are the focal lengths of the camera in the x and y directions, and dx and dy are the offsets of the two camera coordinate systems in the X-axis and Y-axis directions. fx, fy, dx, and dy are usually much smaller than Z (typically a distance of several meters to tens of meters), so in some applications that do not require high accuracy, $\frac{f_x d_x}{Z}$ and $\frac{f_y d_y}{Z}$ can be regarded as approximately equal to 0, and the second imaging position can be set to be the same as the first imaging position. Directly taking the imaging position of the optical label in the first image as its imaging position in the second image introduces some error, but it improves efficiency and reduces the amount of computation, and is therefore very advantageous in applications that do not require high accuracy. In particular, when the optical label is recognized at a long distance (Z is large), the error introduced by this approach is actually very small and does not affect the user experience.
本文中提到的设备可以是用户携带的设备(例如,手机、平板电脑、智能眼镜、智能头盔、智能手表、等等),但是可以理解,该设备也可以是能够自主移动的机器,例如,无人机、无人驾驶汽车、机器人等,该设备上安装有图像采集器件,例如摄像头。The device mentioned in this article may be a device carried by a user (for example, a mobile phone, a tablet computer, smart glasses, a smart helmet, a smart watch, etc.), but it is understood that the device may also be a machine that can move autonomously, for example, UAVs, unmanned vehicles, robots, etc., which are equipped with image acquisition devices, such as cameras.
在本发明的一个实施例中,可以以计算机程序的形式来实现本发明。计算机程序可以存储于各种存储介质(例如,硬盘、光盘、闪存等)中,当该计算机程序被处理器执行时,能够用于实现本发明的方法。In an embodiment of the present invention, the present invention can be implemented in the form of a computer program. The computer program can be stored in various storage media (for example, a hard disk, an optical disk, a flash memory, etc.), and when the computer program is executed by a processor, it can be used to implement the method of the present invention.
在本发明的另一个实施例中,可以以电子设备的形式来实现本发明。该电子设备包括处理器和存储器,在存储器中存储有计算机程序,当该计算机程序被处理器执行时,能够用于实现本发明的方法。In another embodiment of the present invention, the present invention can be implemented in the form of an electronic device. The electronic device includes a processor and a memory, and a computer program is stored in the memory. When the computer program is executed by the processor, it can be used to implement the method of the present invention.
References herein to "various embodiments", "some embodiments", "one embodiment", or "an embodiment", etc., mean that a particular feature, structure, or property described in connection with the embodiment is included in at least one embodiment. Therefore, appearances of the phrases "in various embodiments", "in some embodiments", "in one embodiment", or "in an embodiment", etc., throughout this document do not necessarily refer to the same embodiment. In addition, particular features, structures, or properties can be combined in any suitable manner in one or more embodiments. Therefore, a particular feature, structure, or property shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or inoperative. Expressions such as "according to A" or "based on A" appearing herein are meant to be non-exclusive; that is, "according to A" may cover "only according to A" as well as "according to A and B", unless it is specifically stated or clear from the context that the meaning is "only according to A". The steps described in a certain order in a method flow do not have to be performed in that order; on the contrary, the order of some of the steps may be changed, and some steps may be performed concurrently, as long as the implementation of the solution is not affected. In addition, the elements in the drawings of this application are for illustration only and are not drawn to scale.
由此描述了本发明的至少一个实施例的几个方面,可以理解,对本领 域技术人员来说容易地进行各种改变、修改和改进。这种改变、修改和改进意于在本发明的精神和范围内。虽然本发明已经通过优选实施例进行了描述,然而本发明并非局限于这里所描述的实施例,在不脱离本发明范围的情况下还包括所作出的各种改变以及变化。Thus, several aspects of at least one embodiment of the present invention have been described, and it can be understood that various changes, modifications and improvements can be easily made by those skilled in the art. Such changes, modifications and improvements are intended to be within the spirit and scope of the present invention. Although the present invention has been described through preferred embodiments, the present invention is not limited to the embodiments described here, and also includes various changes and changes made without departing from the scope of the present invention.

Claims (16)

  1. 一种用于呈现与光通信装置有关的信息的方法,包括:A method for presenting information related to an optical communication device, including:
    使用第一摄像头在光通信装置识别模式下获得包含光通信装置的第一图像;Use the first camera to obtain the first image including the optical communication device in the optical communication device recognition mode;
    基于所述第一图像获得所述光通信装置相对于第一摄像头的位置信息;Obtaining position information of the optical communication device relative to the first camera based on the first image;
    根据所述光通信装置相对于第一摄像头的位置信息获得所述光通信装置相对于第二摄像头的位置信息;Obtaining the position information of the optical communication device relative to the second camera according to the position information of the optical communication device relative to the first camera;
    使用第二摄像头获得包含所述光通信装置的第二图像;以及Use a second camera to obtain a second image including the optical communication device; and
    根据所述光通信装置相对于第二摄像头的位置信息在所述第二图像上呈现与所述光通信装置有关的信息。The information related to the optical communication device is presented on the second image according to the position information of the optical communication device relative to the second camera.
  2. 根据权利要求1所述的方法,其中,根据所述光通信装置相对于第一摄像头的位置信息获得所述光通信装置相对于第二摄像头的位置信息包括:The method according to claim 1, wherein obtaining the position information of the optical communication device relative to the second camera according to the position information of the optical communication device relative to the first camera comprises:
    根据所述光通信装置相对于第一摄像头的位置信息,使用第一摄像头与第二摄像头之间的相对位姿信息,获得所述光通信装置相对于第二摄像头的位置信息。According to the position information of the optical communication device relative to the first camera, the relative pose information between the first camera and the second camera is used to obtain the position information of the optical communication device relative to the second camera.
  3. 根据权利要求1所述的方法,其中,基于所述第一图像获得所述光通信装置相对于第一摄像头的位置信息包括:The method according to claim 1, wherein obtaining position information of the optical communication device relative to the first camera based on the first image comprises:
    基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的位置信息。The position information of the optical communication device relative to the first camera is obtained based on the imaging of the optical communication device in the first image.
  4. 根据权利要求3所述的方法,其中,所述基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的位置信息包括:The method according to claim 3, wherein the obtaining the position information of the optical communication device relative to the first camera based on the imaging of the optical communication device in the first image comprises:
    基于所述第一图像中的光通信装置的成像大小来获得光通信装置相对于第一摄像头的距离;Obtaining the distance of the optical communication device relative to the first camera based on the imaging size of the optical communication device in the first image;
    基于所述第一图像中的光通信装置的成像位置来获得光通信装置相对于第一摄像头的方向;以及Obtaining the direction of the optical communication device relative to the first camera based on the imaging position of the optical communication device in the first image; and
    通过光通信装置相对于第一摄像头的距离和方向来获得所述光通信装置相对于第一摄像头的位置信息。The position information of the optical communication device relative to the first camera is obtained by the distance and direction of the optical communication device relative to the first camera.
  5. 根据权利要求3所述的方法,其中,所述基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的位置信息包括:The method according to claim 3, wherein the obtaining the position information of the optical communication device relative to the first camera based on the imaging of the optical communication device in the first image comprises:
    Obtaining the position information of the optical communication device relative to the first camera according to the coordinates of some points on the optical communication device in the optical communication device coordinate system and the imaging positions of these points in the first image, in combination with the internal parameter information of the first camera.
  6. 根据权利要求1所述的方法,还包括:The method according to claim 1, further comprising:
    基于所述第一图像获得所述光通信装置相对于第一摄像头的姿态信息;Obtaining posture information of the optical communication device relative to the first camera based on the first image;
    根据所述光通信装置相对于第一摄像头的姿态信息以及第一摄像头与第二摄像头之间的相对姿态信息,获得所述光通信装置相对于第二摄像头的姿态信息,Obtaining the posture information of the optical communication device relative to the second camera according to the posture information of the optical communication device relative to the first camera and the relative posture information between the first camera and the second camera,
    以及其中,所述根据所述光通信装置相对于第二摄像头的位置信息在所述第二图像上呈现与所述光通信装置有关的信息包括:And wherein, the presenting information related to the optical communication device on the second image according to the position information of the optical communication device relative to the second camera includes:
    根据所述光通信装置相对于第二摄像头的位置信息和姿态信息在所述第二图像上呈现与所述光通信装置有关的信息。The information related to the optical communication device is presented on the second image according to the position information and posture information of the optical communication device relative to the second camera.
  7. 根据权利要求6所述的方法,其中,基于所述第一图像获得所述光通信装置相对于第一摄像头的姿态信息包括:The method according to claim 6, wherein obtaining the posture information of the optical communication device relative to the first camera based on the first image comprises:
    基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的姿态信息。Obtain posture information of the optical communication device relative to the first camera based on imaging of the optical communication device in the first image.
  8. 根据权利要求7所述的方法,其中,所述基于所述第一图像中的光通信装置的成像来获得所述光通信装置相对于第一摄像头的姿态信息包括:The method according to claim 7, wherein said obtaining the posture information of the optical communication device relative to the first camera based on the imaging of the optical communication device in the first image comprises:
    通过分析所述第一图像中的光通信装置的成像的透视变形,来获得所述光通信装置相对于第一摄像头的姿态信息;或者Obtain the posture information of the optical communication device relative to the first camera by analyzing the perspective deformation of the imaging of the optical communication device in the first image; or
    根据光通信装置上的一些点在光通信装置坐标系中的坐标以及这些 点在所述第一图像中的成像位置,获得所述光通信装置相对于第一摄像头的姿态信息。According to the coordinates of some points on the optical communication device in the optical communication device coordinate system and the imaging positions of these points in the first image, the posture information of the optical communication device relative to the first camera is obtained.
  9. 根据权利要求1所述的方法,其中,根据所述光通信装置相对于第二摄像头的位置信息在所述第二图像上呈现与所述光通信装置有关的信息包括:The method according to claim 1, wherein presenting the information related to the optical communication device on the second image according to the position information of the optical communication device relative to the second camera comprises:
    Determining, according to the position information of the optical communication device relative to the second camera, the imaging position in the second image corresponding to that position information, and presenting the information related to the optical communication device at that imaging position in the second image.
  10. 一种用于呈现与光通信装置有关的信息的方法,包括:A method for presenting information related to an optical communication device, including:
    使用第一摄像头在光通信装置识别模式下获得包含光通信装置的第一图像;Use the first camera to obtain the first image including the optical communication device in the optical communication device recognition mode;
    获得所述光通信装置在第一图像中的第一成像位置;Obtaining the first imaging position of the optical communication device in the first image;
    使用第二摄像头获得包含所述光通信装置的第二图像;以及Use a second camera to obtain a second image including the optical communication device; and
    根据所述第一成像位置在所述第二图像中的第二成像位置处呈现与所述光通信装置有关的信息,Presenting information related to the optical communication device at a second imaging position in the second image according to the first imaging position,
    其中,所述第一摄像头和第二摄像头安装于同一平面并具有相同的姿态和内参。Wherein, the first camera and the second camera are installed on the same plane and have the same posture and internal parameters.
  11. 根据权利要求10所述的方法,其中,The method of claim 10, wherein:
    所述第二成像位置与所述第一成像位置相同。The second imaging position is the same as the first imaging position.
  12. 根据权利要求10所述的方法,其中,The method of claim 10, wherein:
    The second imaging position is determined according to the relative offset between the first camera and the second camera, the first imaging position, and either the Z coordinate of the optical communication device in the first camera coordinate system or the second camera coordinate system or the vertical distance from the optical communication device to the installation plane of the first camera and the second camera.
  13. 根据权利要求10所述的方法,其中,The method of claim 10, wherein:
    根据第一摄像头和第二摄像头之间的相对偏移、所述第一成像位置、所述光通信装置到第一摄像头或第二摄像头的距离,来确定所述第二成像 位置。The second imaging position is determined according to the relative offset between the first camera and the second camera, the first imaging position, and the distance from the optical communication device to the first camera or the second camera.
  14. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, can be used to implement the method according to any one of claims 1-13.
  15. An electronic device comprising a processor and a memory, wherein a computer program is stored in the memory and, when executed by the processor, can be used to implement the method according to any one of claims 1-13.
  16. A computer program product which, when executed by a processor, can be used to implement the method according to any one of claims 1-13.
PCT/CN2020/080160 2019-03-27 2020-03-19 Method for presenting information related to optical communication apparatus, and electronic device WO2020192543A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910237930.1 2019-03-27
CN201910237930.1A CN111753565B (en) 2019-03-27 2019-03-27 Method and electronic equipment for presenting information related to optical communication device

Publications (1)

Publication Number Publication Date
WO2020192543A1 true WO2020192543A1 (en) 2020-10-01

Family

ID=72608886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080160 WO2020192543A1 (en) 2019-03-27 2020-03-19 Method for presenting information related to optical communication apparatus, and electronic device

Country Status (3)

Country Link
CN (1) CN111753565B (en)
TW (1) TW202103045A (en)
WO (1) WO2020192543A1 (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103154265A (en) * 2010-05-11 2013-06-12 波士顿大学董事会 Use of nanopore arrays for multiplex sequencing of nucleic acids
CN106525021A (en) * 2015-09-14 2017-03-22 中兴通讯股份有限公司 Method, apparatus and system for determining positions, as well as processing center
CN106372556B (en) * 2016-08-30 2019-02-01 西安小光子网络科技有限公司 A kind of recognition methods of optical label
CN206210121U (en) * 2016-12-03 2017-05-31 河池学院 A kind of Parking based on smart mobile phone seeks car system
CN109413324A (en) * 2017-08-16 2019-03-01 中兴通讯股份有限公司 A kind of image pickup method and mobile terminal
CN109242912A (en) * 2018-08-29 2019-01-18 杭州迦智科技有限公司 Join scaling method, electronic equipment, storage medium outside acquisition device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102333193A (en) * 2011-09-19 2012-01-25 深圳超多维光电子有限公司 Terminal equipment
CN104715753A (en) * 2013-12-12 2015-06-17 联想(北京)有限公司 Data processing method and electronic device
CN106446749A (en) * 2016-08-30 2017-02-22 西安小光子网络科技有限公司 Optical label shooting and optical label decoding relay work method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726996A (en) * 2021-01-04 2022-07-08 北京外号信息技术有限公司 Method and system for establishing a mapping between a spatial position and an imaging position
CN114726996B (en) * 2021-01-04 2024-03-15 北京外号信息技术有限公司 Method and system for establishing a mapping between a spatial location and an imaging location

Also Published As

Publication number Publication date
CN111753565B (en) 2021-12-24
TW202103045A (en) 2021-01-16
CN111753565A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
US11887312B2 (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
WO2019242262A1 (en) Augmented reality-based remote guidance method and device, terminal, and storage medium
WO2021218546A1 (en) Device positioning method and system
US20180075652A1 (en) Server and method for producing virtual reality image about object
US20210232858A1 (en) Methods and systems for training an object detection algorithm using synthetic images
US8369578B2 (en) Method and system for position determination using image deformation
US11263818B2 (en) Augmented reality system using visual object recognition and stored geometry to create and render virtual objects
CN113835352B (en) Intelligent device control method, system, electronic device and storage medium
WO2020192543A1 (en) Method for presenting information related to optical communication apparatus, and electronic device
WO2021093703A1 (en) Interaction method and system based on optical communication apparatus
WO2021057887A1 (en) Method and system for setting virtual object capable of being presented to target
WO2020244480A1 (en) Relative positioning device, and corresponding relative positioning method
CN111242107B (en) Method and electronic device for setting virtual object in space
CN113008135B (en) Method, apparatus, electronic device and medium for determining a position of a target point in space
US11935286B2 (en) Method and device for detecting a vertical planar surface
JP6208977B2 (en) Information processing apparatus, communication terminal, and data acquisition method
CN112581630A (en) User interaction method and system
CN112417904B (en) Method and electronic device for presenting information related to an optical communication device
WO2020244576A1 (en) Method for superimposing virtual object on the basis of optical communication apparatus, and corresponding electronic device
TWI759764B (en) Superimpose virtual object method based on optical communitation device, electric apparatus, and computer readable storage medium
Ballestin et al. Assessment of optical see-through head mounted display calibration for interactive augmented reality
CN112051546B (en) Device for realizing relative positioning and corresponding relative positioning method
WO2022121606A1 (en) Method and system for obtaining identification information of device or user thereof in scenario
CN114827338A (en) Method and electronic device for presenting virtual objects on a display medium of a device
CN112053444A (en) Method for superimposing virtual objects based on optical communication means and corresponding electronic device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20779556

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 20779556

Country of ref document: EP

Kind code of ref document: A1