WO2020192543A1 - Method for presenting information related to an optical communication device, and electronic device - Google Patents

Method for presenting information related to an optical communication device, and electronic device

Info

Publication number
WO2020192543A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
communication device
optical communication
image
information
Prior art date
Application number
PCT/CN2020/080160
Other languages
English (en)
Chinese (zh)
Inventor
方俊
牛旭恒
王强
李江亮
Original Assignee
北京外号信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京外号信息技术有限公司
Publication of WO2020192543A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Sensing by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 Scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821 Further details of bar or optical code scanning devices
    • G06K7/14 Using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B10/00 Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11 Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114 Indoor or close-range type systems
    • H04B10/116 Visible light communication
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • The present invention belongs to the field of optical information technology, and in particular relates to a method and an electronic device for presenting information related to an optical communication device.
  • Optical communication devices are also called optical tags, and the two terms are used interchangeably herein.
  • Optical tags transmit information by emitting different light. They offer a long recognition distance and relaxed requirements on visible-light conditions, have strong directivity, and the information they transmit can change over time, providing large information capacity and flexible configuration. Compared with the traditional two-dimensional code, an optical tag has a longer recognition distance and a stronger information interaction capability, which can provide great convenience to users and businesses.
  • The optical tag recognition device can be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, a smart helmet, a smart watch, or a car), or a machine capable of autonomous movement (for example, a drone, a driverless car, or a robot).
  • To recognize an optical tag, the recognition device needs to use its camera to capture images of the optical tag in a specific optical tag recognition mode (for example, a low-exposure mode) and analyze those images with a built-in application to identify the information conveyed by the optical tag.
  • However, an image containing an optical tag captured in such a recognition mode usually cannot reproduce the environmental information around the optical tag well, which is very unfavorable for the user experience and for subsequent user interaction.
  • In particular, the brightness of the image area surrounding the optical tag is usually very low, or even pitch black.
  • Figure 1 shows an exemplary image containing an optical tag taken in low-exposure mode; the imaging of the optical tag appears near the middle of the upper part of the figure.
  • The objects around the optical tag are imaged with very low brightness, so it is difficult to distinguish them in an image taken in this low-exposure mode.
  • Conversely, although an image obtained in the normal shooting mode can show the environmental information around the optical tag, recognition of the optical tag cannot be achieved based on that image because it was not shot in the optical tag recognition mode (that is, neither the optical tag nor the information it transmits can be recognized), so the interactive information corresponding to the optical tag (for example, an interactive icon) cannot be presented on the image.
  • One aspect of the present invention relates to a method for presenting information related to an optical communication device, including: using a first camera to obtain, in an optical communication device recognition mode, a first image containing an optical communication device; obtaining, based on the first image, position information of the optical communication device relative to the first camera; obtaining position information of the optical communication device relative to a second camera according to its position information relative to the first camera; using the second camera to obtain a second image containing the optical communication device; and presenting information related to the optical communication device on the second image according to the position information of the optical communication device relative to the second camera.
  • In some embodiments, obtaining the position information of the optical communication device relative to the second camera according to its position information relative to the first camera includes: using the rotation matrix and displacement vector between the first camera and the second camera to convert the position information of the optical communication device from the first camera coordinate system into the second camera coordinate system.
  • In some embodiments, obtaining the position information of the optical communication device relative to the first camera based on the first image includes: obtaining it based on the imaging of the optical communication device in the first image.
  • In some embodiments, this includes: obtaining the distance of the optical communication device relative to the first camera based on the imaging size of the optical communication device in the first image; obtaining the direction of the optical communication device relative to the first camera based on its imaging position in the first image; and obtaining the position information of the optical communication device relative to the first camera from that distance and direction.
  • In other embodiments, this includes: obtaining the position information of the optical communication device relative to the first camera from the coordinates of certain points on the optical communication device in the optical communication device coordinate system and the imaging positions of those points in the first image, combined with the internal parameter information of the first camera.
  • In some embodiments, the above method further includes: obtaining posture information of the optical communication device relative to the first camera based on the first image; and obtaining posture information of the optical communication device relative to the second camera according to its posture information relative to the first camera; wherein presenting the information related to the optical communication device on the second image includes presenting it according to both the position information and the posture information of the optical communication device relative to the second camera.
  • In some embodiments, obtaining the posture information of the optical communication device relative to the first camera based on the first image includes: obtaining it based on the imaging of the optical communication device in the first image.
  • In some embodiments, this includes: determining the perspective deformation of the imaging of the optical communication device in the first image to obtain the posture information of the optical communication device relative to the first camera.
  • In other embodiments, this includes: obtaining the posture information of the optical communication device relative to the first camera from the coordinates of certain points on the optical communication device in the optical communication device coordinate system and the imaging positions of those points in the first image.
  • the coordinates of some points on the optical communication device in the optical communication device coordinate system are determined according to the physical size information and/or physical shape information of the optical communication device.
  • In some embodiments, presenting the information related to the optical communication device on the second image according to its position information relative to the second camera includes: determining, according to that position information, the corresponding imaging position in the second image, and presenting the information related to the optical communication device at that imaging position.
  • the second image is a real scene image with normal exposure.
  • the above method further includes: obtaining identification information of the optical communication device based on the first image.
  • Another aspect of the present invention relates to a method for presenting information related to an optical communication device, including: using a first camera to obtain, in an optical communication device recognition mode, a first image containing an optical communication device; obtaining a first imaging position of the optical communication device in the first image; using a second camera to obtain a second image containing the optical communication device; and presenting information related to the optical communication device at a second imaging position in the second image according to the first imaging position, wherein the first camera and the second camera are installed on the same plane and have the same posture and internal parameters.
  • the second imaging position is the same as the first imaging position.
  • In some embodiments, the second imaging position is determined according to the first imaging position together with the Z coordinate of the optical communication device in the first camera coordinate system or the second camera coordinate system, or together with the vertical distance between the optical communication device and the installation plane of the first camera and the second camera.
  • In other embodiments, the second imaging position is determined according to the relative offset between the first camera and the second camera, the first imaging position, and the distance from the optical communication device to the first camera or the second camera.
  • Another aspect of the present invention relates to a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, it can be used to implement the above method.
  • Another aspect of the present invention relates to an electronic device, which includes a processor and a memory; a computer program is stored in the memory, and when the computer program is executed by the processor, it can be used to implement the above method.
  • The solution of the present invention provides a method for presenting information related to an optical communication device, by which information related to an optical label can be displayed on an image taken in a non-optical-label recognition mode (for example, a real-scene image taken in the normal mode). This enables users not only to interact with the optical tag through the image presented by the device, but also to perceive the environmental information around the optical tag, thereby improving interaction efficiency and the interaction experience.
  • Figure 1 shows an exemplary image containing a light tag taken in a low exposure mode
  • Figure 2 shows an exemplary optical label
  • Figure 3 shows an image of a light label taken by a rolling shutter imaging device in a low exposure mode
  • Figure 4 shows a method for presenting information related to optical tags according to an embodiment of the present invention
  • Figure 5 shows an exemplary normally exposed image containing a light label
  • Fig. 6 shows an exemplary image presented according to an embodiment of the present invention.
  • Fig. 7 shows a method for presenting information related to an optical tag according to another embodiment of the present invention.
  • the optical tag usually includes a controller and at least one light source, and the controller can drive the light source through different driving modes to transmit different information outward.
  • Each optical tag can be assigned identification information (ID), which is used by the manufacturer, manager, or user of the optical tag to uniquely identify the optical tag.
  • The controller in the optical tag can drive the light source to transmit the identification information outward, and a user can use an optical tag recognition device to perform continuous image collection on the optical tag to obtain the identification information it transmits, and can then access a corresponding service based on that identification information, for example, accessing a web page associated with the identification information, or obtaining other information associated with it (for example, the location information of the optical tag corresponding to the identification information), and so on.
  • Fig. 2 shows an exemplary optical label 100, which includes three light sources (respectively a first light source 101, a second light source 102, and a third light source 103).
  • the optical label 100 also includes a controller (not shown in FIG. 2) for selecting a corresponding driving mode for each light source according to the information to be transmitted.
  • The controller can use driving signals with different frequencies to control the turning on and off of each light source, so that when a rolling shutter imaging device (such as a CMOS imaging device) is used to photograph the optical label 100 in a low-exposure mode, stripes of different widths appear in the imaging of each light source, from which the transmitted information can be decoded.
  • FIG. 3 shows an image of the optical label 100 taken by a rolling shutter imaging device in the low-exposure mode while the optical label 100 is transmitting information; in it, the imaging of the first light source 101 shows relatively narrow stripes, while the imaging of the second light source 102 and the third light source 103 shows relatively wide stripes.
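  • As an illustration of why different driving frequencies produce stripes of different widths, the following minimal sketch estimates the stripe height in pixel rows on a rolling-shutter sensor; the frame rate, row count, and blink frequencies are assumed values for illustration, not parameters from this disclosure.

```python
# Hypothetical illustration: a light source blinking at frequency f appears
# on a rolling-shutter (CMOS) sensor as horizontal stripes, because rows are
# exposed one after another; one on (or off) stripe spans half a blink period.

def stripe_height_rows(blink_hz: float, row_readout_s: float) -> float:
    """Approximate height, in pixel rows, of one on (or off) stripe."""
    half_period_s = 1.0 / (2.0 * blink_hz)  # source stays on for half a period
    return half_period_s / row_readout_s    # rows read out during that time

row_readout_s = 1.0 / (30 * 1080)  # assumed: 30 fps sensor with 1080 rows

for blink_hz in (500.0, 1000.0, 2000.0):  # assumed driving frequencies
    rows = stripe_height_rows(blink_hz, row_readout_s)
    print(f"{blink_hz:6.0f} Hz -> stripes about {rows:.0f} rows high")
```

  • Higher driving frequencies thus yield narrower stripes, which is consistent with the narrow stripes of the first light source 101 and the wider stripes of the other light sources in FIG. 3.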
  • The optical label may additionally include one or more positioning marks located near the information-transmitting light sources.
  • the positioning marks may be, for example, lights of a specific shape or color, and the lights may, for example, remain on during operation.
  • The optical label recognition device can be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, a smart helmet, a smart watch, or a car), or a machine capable of autonomous movement (for example, a drone, a driverless car, or a robot).
  • The optical label recognition device can obtain multiple images containing the optical label through continuous image acquisition with its camera, and identify the information transmitted by the optical label by analyzing the imaging of the optical label (or of each light source in the optical label) in each image.
  • The identification information (ID) of the optical tag and any other information can be stored in a server. The other information is, for example, service information related to the optical tag, or description or attribute information related to the optical tag, such as its location information, physical size information, physical shape information, orientation information, and so on.
  • the optical label may also have unified or default physical size information and physical shape information.
  • the device can use the identified identification information of the optical label to query the server to obtain other information related to the optical label.
  • the server may be a software program running on a computing device, a computing device, or a cluster composed of multiple computing devices.
  • the optical tag may be offline, that is, the optical tag does not need to communicate with the server. Of course, it can be understood that online optical tags that can communicate with the server are also feasible.
  • Figure 4 shows a method for presenting information related to a light tag according to an embodiment of the present invention.
  • The method can be executed by a device that uses two cameras (referred to as the first camera and the second camera, respectively), wherein the first camera operates in the optical tag recognition mode to recognize the optical tag, and the second camera captures an image containing the optical tag (for example, a real-scene image under normal exposure).
  • the position and posture of the first camera and the second camera may have a fixed relative relationship.
  • the first camera and the second camera may be installed on the same device (for example, a mobile phone with at least two cameras). However, either or both of the first camera and the second camera may not be installed on the device, but may be communicatively connected with the device.
  • the rotation matrix R0 and the displacement vector t0 between the first camera and the second camera (also referred to as the first camera coordinate system and the second camera coordinate system) and the internal parameter information of the two cameras can be determined in advance.
  • the rotation matrix R0 is used to represent the relative posture information between the two cameras
  • the displacement vector t0 is used to represent the relative displacement information between the two cameras.
  • the position information in the first camera coordinate system can be converted into the position information in the second camera coordinate system through a rotation operation and a displacement operation.
  • In the case where the two cameras have the same posture, the rotation matrix R0 is the identity matrix, so in fact no rotation operation between the two camera coordinate systems is required.
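  • As a minimal sketch of this conversion (assuming the convention that R0 and t0 map coordinates from the first camera coordinate system into the second, which the text does not spell out; all numeric values are illustrative):

```python
import numpy as np

# Convert the optical tag's position from the first camera coordinate
# system to the second: p2 = R0 @ p1 + t0, with (R0, t0) pre-calibrated.

R0 = np.eye(3)                    # same posture -> identity rotation
t0 = np.array([0.005, 0.0, 0.0])  # assumed 5 mm offset along the X axis

def to_second_camera(p1: np.ndarray) -> np.ndarray:
    """Position of the optical tag in the second camera coordinate system."""
    return R0 @ p1 + t0

t1 = np.array([0.2, -0.1, 5.0])   # tag position from step 402 (meters)
print(to_second_camera(t1))       # -> [ 0.205 -0.1    5.   ]

# Under the same convention, a tag posture R1 (tag frame -> first camera)
# would transform as R2 = R0 @ R1.
```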
  • the method includes the following steps:
  • Step 401 Use the first camera to obtain a first image containing the optical label in the optical label recognition mode.
  • When the first camera works in the optical tag recognition mode, it can take a first image containing the optical tag (for example, the image shown in Figure 1). By analyzing the first image, the imaging position of the optical tag and the information transmitted by the optical tag can be obtained.
  • The optical label recognition mode is usually different from the normal shooting mode of the camera.
  • For example, in the optical label recognition mode, the camera can be set to a predetermined low-exposure mode so that the information conveyed by the optical label can be recognized from the first image taken.
  • Interactive information (for example, an interactive icon) corresponding to the optical tag may be presented at the imaging position of the optical tag for the user to operate.
  • However, the first image may not be user-friendly, because it is difficult for the user to obtain other useful information, such as information about the surrounding environment of the optical tag, when observing the first image with the naked eye. Therefore, directly presenting the interactive information corresponding to the optical tag on an image obtained in the optical tag recognition mode cannot provide a good user experience.
  • the optical label recognition mode is usually different from the normal shooting mode of the camera, the present invention does not exclude a solution in which the optical label recognition mode is the same or substantially the same as the normal shooting mode.
  • Step 402 Obtain position information of the optical label relative to the first camera based on the first image.
  • the position information of the optical label relative to the first camera can be obtained by analyzing the imaging of the optical label in the first image, which can be expressed as the position information of the optical label in the first camera coordinate system.
  • the position information may be represented by coordinates (X1, Y1, Z1) in a coordinate system with the first camera as the origin, and may be referred to as a displacement vector t1.
  • The position information of the optical label can be represented by the position information of a single point, for example, the position information of the center point of the optical label.
  • It can also be represented by the position information of multiple points (for example, multiple points that define the rough outline of the optical label), or by the position information of a region, and so on.
  • the distance and direction of the optical label relative to the first camera can be determined by analyzing the imaging of the optical label in the first image, thereby determining its position information relative to the first camera.
  • The relative distance between the optical label and the first camera can be determined from the imaging size of the optical label in the first image, optionally together with other information (for example, the actual physical size information of the optical label and the camera's internal parameters): the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance.
  • the device where the first camera is located may obtain the actual physical size information of the optical label from the server, or the optical label may have a default uniform physical size (which may be stored on the device).
  • the direction of the optical label relative to the first camera can be determined by analyzing the imaging position of the optical label in the first image.
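  • A minimal sketch of this distance-and-direction estimate under the pinhole camera model follows; the focal lengths, principal point, tag size, and measured pixel values are all assumed for illustration:

```python
import numpy as np

# Estimate the tag's position in the first camera coordinate system from
# its imaging size (distance) and imaging position (direction).

fx = fy = 1000.0          # assumed focal lengths (pixels)
cx, cy = 960.0, 540.0     # assumed principal point (pixels)

tag_height_m = 0.30       # assumed physical height of the optical tag
imaged_height_px = 60.0   # measured height of the tag in the first image
u, v = 1100.0, 300.0      # measured imaging position of the tag center

# Larger imaging -> closer; smaller imaging -> farther.
Z = fy * tag_height_m / imaged_height_px   # distance along the optical axis

# Direction: back-project the image point through the pinhole model.
direction = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
t1 = Z * direction                          # tag position in the camera frame
print(Z, t1)                                # -> 5.0 [ 0.7 -1.2  5. ]
```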
  • The perspective deformation of the imaging of the optical label can be determined by analyzing its imaging in the first image, thereby determining the posture information of the optical label relative to the first camera (also referred to as direction or orientation information), for example, the posture information of the optical tag in the first camera coordinate system.
  • the device where the first camera is located may obtain the actual physical shape information of the optical label from the server, or the optical label may have a default unified physical shape (which may be stored on the device).
  • the determined posture information of the optical tag relative to the first camera can be represented by a rotation matrix R1.
  • the rotation matrix is known in the imaging field, and in order not to obscure the present invention, it will not be described in detail here.
  • a coordinate system can be established based on the optical tag, and the coordinate system can be called the world coordinate system or the optical tag coordinate system.
  • Some points on the optical label can be determined as some spatial points in the world coordinate system, and the coordinates of these spatial points in the world coordinate system can be determined according to the physical size information and/or physical shape information of the optical label.
  • the device where the first camera is located can obtain the physical size information and/or physical shape information of the optical tag from the server, or the optical tag can have default unified physical size information and/or physical shape information, and the device can store the physical size information And/or physical shape information.
  • Some points on the optical label may be, for example, the corner of the housing of the optical label, the end of the light source in the optical label, some identification points in the optical label, and so on.
  • the image points corresponding to these spatial points can be found in the first image, and the position of each image point in the first image can be determined.
  • the position information of the optical tag in the first camera coordinate system can be calculated. It can be represented by the displacement vector t1.
  • the posture information of the optical tag in the first camera coordinate system can also be calculated. It can be represented by the rotation matrix R1.
  • the combination (R1, t1) of the rotation matrix R1 and the displacement vector t1 is the pose information (that is, position and posture information) of the optical tag in the first camera coordinate system.
  • The method of calculating the rotation matrix R and the displacement vector t from the coordinates of spatial points in the world coordinate system and the positions of the corresponding image points in the image is known in the prior art; for example, the 3D-2D PnP (Perspective-n-Point) method can be used to calculate R and t.
  • the rotation matrix R and the displacement vector t can actually describe how to transform the coordinates of a certain point between the world coordinate system and the camera coordinate system. For example, through the rotation matrix R and the displacement vector t, the coordinates of a certain space point in the world coordinate system can be converted to the coordinates in the camera coordinate system, and can be further converted to the position of the image point in the image.
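  • A minimal sketch of this PnP computation using OpenCV follows; the tag corner coordinates, their imaged positions, and the camera intrinsics are hypothetical values chosen only to illustrate the call:

```python
import numpy as np
import cv2

# Solve for the pose (R1, t1) of the optical tag in the first camera
# coordinate system from 3D-2D point correspondences (PnP).

# Four corners of an assumed 30 cm x 10 cm tag in the optical tag
# (world) coordinate system, from its physical size/shape information.
object_points = np.array([
    [-0.15, -0.05, 0.0],
    [ 0.15, -0.05, 0.0],
    [ 0.15,  0.05, 0.0],
    [-0.15,  0.05, 0.0],
], dtype=np.float64)

# Where those corners were found in the first image (pixels).
image_points = np.array([
    [880.0, 420.0], [1040.0, 415.0], [1042.0, 470.0], [882.0, 475.0],
], dtype=np.float64)

camera_matrix = np.array([[1000.0,    0.0, 960.0],
                          [   0.0, 1000.0, 540.0],
                          [   0.0,    0.0,   1.0]])

ok, rvec, t1 = cv2.solvePnP(object_points, image_points, camera_matrix, None)
R1, _ = cv2.Rodrigues(rvec)  # rotation matrix R1 from the rotation vector
# (R1, t1) is the pose of the optical tag in the first camera frame.
```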
  • the information conveyed by the optical label may be further obtained based on the imaging of the optical label in the first image, for example, identification information of the optical label.
  • Step 403 Obtain the position information of the optical label relative to the second camera according to the position information of the optical label relative to the first camera.
  • The position information of the optical label relative to the second camera can be obtained from its position information relative to the first camera using the relative pose information between the two cameras, for example, the rotation matrix R0 and the displacement vector t0 between the first camera and the second camera.
  • This position information may be, for example, the position information of the optical label in the second camera coordinate system, and can be represented by the displacement vector t2.
  • Similarly, according to the posture information of the optical label relative to the first camera and the relative posture information between the cameras (for example, the rotation matrix R0 between the first camera and the second camera), the posture information of the optical label relative to the second camera (for example, its posture in the second camera coordinate system) can be obtained; this posture information can be represented by a rotation matrix R2.
  • Step 404 Use the second camera to obtain a second image containing the light tag.
  • the second image containing the light tag obtained by the second camera may be, for example, a normal exposure real scene image, which contains information about the light tag and its surrounding environment.
  • Figure 5 shows an exemplary normally exposed image containing an optical label, corresponding to the low-exposure image shown in Figure 1; it shows a restaurant door with a rectangular optical label above the door.
  • Although the second image can show the environmental information around the optical tag (that is, the second image is user-friendly), since the second image is not captured in the optical tag recognition mode, recognition of the optical tag cannot be achieved based on it (that is, neither the optical tag nor the information it transmits can be identified), and thus the information corresponding to the optical tag (for example, an interactive icon) cannot be presented on the second image based on the second image alone.
  • The second image containing the optical label obtained by the second camera is preferably a real-scene image with normal exposure, but this is not a limitation; according to actual needs, the second image can also be an image obtained by the camera in another shooting mode, such as a grayscale image.
  • Step 405 According to the position information of the optical label relative to the second camera, present information related to the optical label on the second image.
  • Based on the position information of the optical label relative to the second camera and the internal parameter information of the second camera, the display position where the optical label should appear in the second image captured by the second camera can be calculated using the imaging formula. Using the imaging formula to calculate the imaging position of a point from its position relative to the camera is well known in the art and will not be described in detail here to avoid obscuring the present invention. According to the calculated display position, information related to the optical tag may be presented (for example, superimposed, embedded, or overlaid) at a suitable position on the second image, preferably at the calculated display position itself.
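  • A minimal sketch of this projection under the pinhole imaging formula follows; the second camera's intrinsics are assumed values, and the input position reuses the illustrative t2 from the earlier conversion sketch:

```python
import numpy as np

# Step 405 sketch: project the tag position (X, Y, Z) in the second camera
# coordinate system to the pixel where related information should be drawn.

fx, fy = 1000.0, 1000.0   # assumed focal lengths of the second camera
cx, cy = 960.0, 540.0     # assumed principal point

def display_position(t2: np.ndarray) -> tuple:
    """Pixel position in the second image at which to render the icon
    or other information related to the optical tag."""
    X, Y, Z = t2
    return (fx * X / Z + cx, fy * Y / Z + cy)

u, v = display_position(np.array([0.205, -0.1, 5.0]))
print(u, v)   # -> 1001.0 520.0; draw the interactive icon here
```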
  • The information related to the optical tag can be various kinds of information, for example, an image of the optical tag, the logo of the optical tag, the identification information of the optical tag, an icon associated with the optical tag or its identification information, the name of a store associated with the optical tag or its identification information, any other information associated with the optical tag or its identification information, and various combinations thereof.
  • If the posture information of the optical tag relative to the second camera was also obtained in step 403, then when the information related to the optical tag is presented on the second image, the posture of the presented images, logos, icons, and the like can further be set based on the posture information of the optical tag in the second camera coordinate system. This is advantageous especially when the logo, icon, or the like corresponding to the optical tag is a three-dimensional virtual object.
  • Fig. 6 shows an exemplary image presented according to an embodiment of the present invention, in which a circular icon associated with the optical label is superimposed on the image shown in Fig. 5.
  • Preferably, the display position of the icon is the actual imaging position of the optical label.
  • the icon can have an interactive function. After clicking the icon, the user can access the information of the corresponding restaurant, and can perform operations such as reservation, queuing, and ordering. In this way, the user can not only interact with the optical tag through the image presented by the device, but also perceive the environmental information around the optical tag.
  • The steps of obtaining the first image and obtaining the second image described above can be performed in any suitable order and can be performed concurrently. These steps can also be repeated as needed to continuously update the displayed scene and the display position of the optical tag.
  • In another embodiment, the first camera and the second camera of the device are installed on the same plane (that is, the first camera coordinate system and the second camera coordinate system have no offset in the Z-axis direction) and have the same internal parameters and the same posture (that is, the postures of the two camera coordinate systems are the same and the rotation matrix between them is the identity matrix, so no rotation operation between the two camera coordinate systems is needed).
  • In other words, the orientations of the two cameras are the same, and only their installation positions differ somewhat (for example, by a few millimeters).
  • In this case, the imaging positions of the optical tag in the images captured by the two cameras are basically the same (especially when the optical tag is far from the cameras), with only an insignificant offset between them.
  • Therefore, the imaging position of the optical label in the image captured by the first camera (working in the optical label recognition mode) can be used directly to determine its imaging position in the image captured by the second camera (working in a non-optical-label recognition mode).
  • Fig. 7 shows a method for presenting information related to an optical tag according to this embodiment, which may include the following steps:
  • Step 701 Use the first camera to obtain a first image containing the optical label in the optical label recognition mode.
  • Step 702 Obtain the first imaging position of the optical label in the first image.
  • The imaging position of the optical tag in an image can be represented by the position information of a single point, for example, the center point of the optical tag; it can also be represented by the position information of multiple points, for example, points that define the approximate outline of the optical tag; or it can be represented by the position information of a region; and so on.
  • Step 703 Use the second camera to obtain a second image containing the optical tag.
  • Step 704 Present information related to the light tag at a second imaging position in the second image according to the first imaging position.
  • In one embodiment, the second imaging position may be derived from the first imaging position according to the imaging formula.
  • Suppose the two camera coordinate systems differ only by an offset d in the X-axis direction, so that the displacement vector between them is (d, 0, 0). Then a point with coordinates (X, Y, Z) in the first camera coordinate system has coordinates (X+d, Y, Z) in the second camera coordinate system.
  • Under the imaging formula, the imaging position (u1, v1) of the point in the image taken by the first camera and the imaging position (u2, v2) in the image taken by the second camera can be calculated as:

    u1 = fx·X/Z + cx,       v1 = fy·Y/Z + cy
    u2 = fx·(X+d)/Z + cx,   v2 = fy·Y/Z + cy

  • Here fx and fy are the focal lengths of the camera in the x and y directions, and (cx, cy) are the coordinates of the camera aperture (principal point); these are parameter values inherent to the camera. It follows that v2 = v1 and u2 = u1 + fx·d/Z, where the offset d can be determined in advance.
  • Therefore, as long as the relative offset (including the offset direction and the offset distance) between the first camera and the second camera is known in advance, once the Z coordinate of a point in the first camera coordinate system or the second camera coordinate system is obtained, the imaging position of the point in the image captured by one camera can be derived from its imaging position in the image captured by the other camera.
  • Since the two cameras are installed on the same plane with the same posture, the Z coordinate of the point is the same in the first camera coordinate system and the second camera coordinate system, and it is roughly equal to the vertical distance from the point to the installation plane of the cameras.
  • Therefore, in an embodiment, the Z coordinate of the optical label in the first camera coordinate system or the second camera coordinate system (or the vertical distance between the optical label and the installation plane of the two cameras), together with the first imaging position of the optical label in the first image captured by the first camera, is used to determine the second imaging position of the optical tag in the second image captured by the second camera. Any method described in step 402 above can be used to obtain the Z coordinate of the optical label or its vertical distance to the installation plane of the cameras.
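  • A minimal sketch of this shortcut follows; fx and the inter-camera offset d are assumed calibration values:

```python
# Derive the tag's imaging position in the second image from its imaging
# position in the first image, using the parallax term fx * d / Z.

fx = 1000.0   # assumed focal length shared by both cameras (pixels)
d = 0.005     # assumed 5 mm offset along X between the two cameras

def second_imaging_position(u1: float, v1: float, Z: float) -> tuple:
    """(u2, v2) in the second image, given (u1, v1) in the first image and
    the tag's Z coordinate (its vertical distance to the camera plane)."""
    return (u1 + fx * d / Z, v1)

print(second_imaging_position(1001.0, 520.0, Z=5.0))  # -> (1002.0, 520.0)
# At Z = 5 m the correction is only one pixel, which is why simply reusing
# (u1, v1) is often acceptable when the tag is far away.
```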
  • When scanning and identifying the optical label, the optical label is usually located near the center of the screen. In this case, the distance from the optical label to the first camera or the second camera can be used as an approximation of the vertical distance between the optical label and the installation plane of the two cameras, and this distance is easier to determine.
  • For example, the distance from the optical label to the camera can be determined from the imaging size of the optical label as described above, or measured by a binocular camera. In this way, the second imaging position of the optical tag in the second image captured by the second camera can be determined more conveniently or more quickly, with an acceptable error.
  • In another embodiment, the second imaging position may simply be set to be the same as the first imaging position. From the formulas above, the imaging offsets between the two images are u2 − u1 = fx·dx/Z and v2 − v1 = fy·dy/Z, where fx and fy are the focal lengths of the camera in the x and y directions, and dx and dy are the offsets of the two camera coordinate systems in the X and Y directions. Since dx and dy (typically a few millimeters) are much smaller than Z (usually a few meters to tens of meters), these offsets can be considered approximately equal to 0, and the second imaging position can be set to be the same as the first imaging position.
  • Directly using the imaging position of the optical tag in the first image as its imaging position in the second image introduces some error, but it improves efficiency and reduces the amount of computation, which is very advantageous in applications that do not require high accuracy. In particular, when the optical tag is recognized at a long distance (Z is large), the error introduced by this method is actually very small and will not affect the user experience.
  • The device mentioned herein may be a device carried by a user (for example, a mobile phone, a tablet computer, smart glasses, a smart helmet, or a smart watch), but it may also be a machine capable of autonomous movement, for example, a drone, an unmanned vehicle, or a robot, equipped with an image acquisition device such as a camera.
  • the present invention can be implemented in the form of a computer program.
  • the computer program can be stored in various storage media (for example, a hard disk, an optical disk, a flash memory, etc.), and when the computer program is executed by a processor, it can be used to implement the method of the present invention.
  • the present invention can be implemented in the form of an electronic device.
  • The electronic device includes a processor and a memory, and a computer program is stored in the memory; when the computer program is executed by the processor, it can be used to implement the method of the present invention.
  • references to "various embodiments”, “some embodiments”, “one embodiment”, or “an embodiment” herein refer to the specific features, structures, or properties described in connection with the embodiments included in In at least one embodiment. Therefore, the appearances of the phrases “in various embodiments”, “in some embodiments”, “in one embodiment”, or “in an embodiment” in various places throughout this document do not necessarily refer to the same implementation example.
  • specific features, structures, or properties can be combined in any suitable manner in one or more embodiments. Therefore, a specific feature, structure, or property shown or described in one embodiment can be combined in whole or in part with the feature, structure, or property of one or more other embodiments without limitation, as long as the combination is not non-limiting. Logical or not working.

Abstract

The invention relates to a method for presenting information related to an optical communication device, and to an electronic device. The method comprises: using a first camera to obtain, in an optical communication device recognition mode, a first image containing an optical communication device; obtaining, based on the first image, position information of the optical communication device relative to the first camera; obtaining, according to the position information of the optical communication device relative to the first camera, position information of the optical communication device relative to a second camera; using the second camera to obtain a second image containing the optical communication device; and presenting, on the second image and according to the position information of the optical communication device relative to the second camera, information related to the optical communication device.
PCT/CN2020/080160 2019-03-27 2020-03-19 Method for presenting information related to an optical communication device, and electronic device WO2020192543A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910237930.1A CN111753565B (zh) 2019-03-27 2019-03-27 用于呈现与光通信装置有关的信息的方法和电子设备
CN201910237930.1 2019-03-27

Publications (1)

Publication Number Publication Date
WO2020192543A1 true WO2020192543A1 (fr) 2020-10-01

Family

ID=72608886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080160 WO2020192543A1 (fr) Method for presenting information related to an optical communication device, and electronic device

Country Status (3)

Country Link
CN (1) CN111753565B (fr)
TW (1) TW202103045A (fr)
WO (1) WO2020192543A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726996A (zh) * 2021-01-04 2022-07-08 北京外号信息技术有限公司 用于建立空间位置与成像位置之间的映射的方法和系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102333193A (zh) * 2011-09-19 2012-01-25 深圳超多维光电子有限公司 一种终端设备
CN104715753A (zh) * 2013-12-12 2015-06-17 联想(北京)有限公司 一种数据处理的方法及电子设备
CN106446749A (zh) * 2016-08-30 2017-02-22 西安小光子网络科技有限公司 一种光标签拍摄和光标签解码接力工作方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011143340A2 (fr) * 2010-05-11 2011-11-17 Trustees Of Boston University Utilisation de réseaux nanoporeux en vue du séquençage multiplexe d'acides nucléiques
CN106525021A (zh) * 2015-09-14 2017-03-22 中兴通讯股份有限公司 位置确定方法、装置、系统及处理中心
CN106372556B (zh) * 2016-08-30 2019-02-01 西安小光子网络科技有限公司 一种光标签的识别方法
CN206210121U (zh) * 2016-12-03 2017-05-31 河池学院 一种基于智能手机的停车场停车寻车系统
CN109413324A (zh) * 2017-08-16 2019-03-01 中兴通讯股份有限公司 一种拍摄方法和移动终端
CN109242912A (zh) * 2018-08-29 2019-01-18 杭州迦智科技有限公司 采集装置外参标定方法、电子设备、存储介质

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102333193A (zh) * 2011-09-19 2012-01-25 深圳超多维光电子有限公司 一种终端设备
CN104715753A (zh) * 2013-12-12 2015-06-17 联想(北京)有限公司 一种数据处理的方法及电子设备
CN106446749A (zh) * 2016-08-30 2017-02-22 西安小光子网络科技有限公司 一种光标签拍摄和光标签解码接力工作方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726996A (zh) * 2021-01-04 2022-07-08 北京外号信息技术有限公司 用于建立空间位置与成像位置之间的映射的方法和系统
CN114726996B (zh) * 2021-01-04 2024-03-15 北京外号信息技术有限公司 用于建立空间位置与成像位置之间的映射的方法和系统

Also Published As

Publication number Publication date
TW202103045A (zh) 2021-01-16
CN111753565B (zh) 2021-12-24
CN111753565A (zh) 2020-10-09

Similar Documents

Publication Publication Date Title
US11887312B2 (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
WO2019242262A1 (fr) Procédé et dispositif de guidage à distance basé sur la réalité augmentée, terminal et support de stockage
US9401050B2 (en) Recalibration of a flexible mixed reality device
WO2021218546A1 (fr) Procédé et système de positionnement de dispositif
US8369578B2 (en) Method and system for position determination using image deformation
KR102398478B1 (ko) 전자 디바이스 상에서의 환경 맵핑을 위한 피쳐 데이터 관리
US11263818B2 (en) Augmented reality system using visual object recognition and stored geometry to create and render virtual objects
CN113835352B (zh) 一种智能设备控制方法、系统、电子设备及存储介质
WO2020192543A1 (fr) Procédé de présentation d'informations relatives à un appareil de communication optique, et dispositif électronique
WO2021093703A1 (fr) Procédé et système d'interaction basés sur un appareil de communication optique
WO2021057887A1 (fr) Procédé et système permettant de définir un objet virtuel susceptible d'être présenté à une cible
WO2020244480A1 (fr) Dispositif de positionnement relatif et procédé de positionnement relatif correspondant
CN111242107B (zh) 用于设置空间中的虚拟对象的方法和电子设备
CN113008135B (zh) 用于确定空间中目标点位置的方法、设备、电子装置及介质
US11935286B2 (en) Method and device for detecting a vertical planar surface
JP6208977B2 (ja) 情報処理装置、通信端末およびデータ取得方法
CN112581630A (zh) 一种用户交互方法和系统
CN112417904B (zh) 用于呈现与光通信装置有关的信息的方法和电子设备
WO2020244576A1 (fr) Procédé de superposition d'objet virtuel sur la base d'un appareil de communication optique, et dispositif électronique correspondant
TWI759764B (zh) 基於光通信裝置疊加虛擬物件的方法、電子設備以及電腦可讀取記錄媒體
Ballestin et al. Assessment of optical see-through head mounted display calibration for interactive augmented reality
CN112051546B (zh) 一种用于实现相对定位的装置以及相应的相对定位方法
WO2022121606A1 (fr) Procédé et système d'obtention d'informations d'identification de dispositif ou d'utilisateur de celui-ci dans un scénario
CN114827338A (zh) 用于在设备的显示媒介上呈现虚拟对象的方法和电子装置
CN112053444A (zh) 基于光通信装置叠加虚拟对象的方法和相应的电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20779556

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20779556

Country of ref document: EP

Kind code of ref document: A1