WO2014101435A1 - Two-dimensional-code-based augmented reality method, system and terminal - Google Patents

Two-dimensional-code-based augmented reality method, system and terminal

Info

Publication number
WO2014101435A1
WO2014101435A1 (PCT/CN2013/081876, CN2013081876W)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional code
image
information
resource
scene image
Prior art date
Application number
PCT/CN2013/081876
Other languages
English (en)
French (fr)
Inventor
柳寅秋
李薪宇
宋海涛
Original Assignee
成都理想境界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都理想境界科技有限公司
Publication of WO2014101435A1 publication Critical patent/WO2014101435A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N 21/4622 Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N 21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • H04N 21/8153 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics comprising still images, e.g. texture, background image

Definitions

  • The present invention relates to the field of mobile augmented reality, and in particular to a two-dimensional-code-based augmented reality method, system, and mobile terminal.
  • Background Art
  • A two-dimensional code, also known as a two-dimensional barcode, records data symbol information by distributing black-and-white geometric patterns over a plane (in two dimensions) according to specific rules. Its encoding cleverly exploits the "0"/"1" bit-stream concept that forms the logical basis of computers, using a number of geometric shapes corresponding to binary values to represent textual and numerical information.
  • In recent years, augmented reality (AR) has slowly begun to enter the public eye. Its core idea is to superimpose virtual information in real time onto the real scene, using the virtual information to supplement and enhance the real scene so that both are displayed together in the real world.
  • In existing augmented reality technology, superimposing virtual information onto a real scene requires computing the relative pose between the camera and the scene; that is, the real-scene image is registered against a sample image to obtain a homography matrix. Consequently, if neither the mobile terminal nor the augmented reality server stores a sample image of a real scene (or the feature-point information of that sample image), the fusion of virtual information with that scene cannot be achieved.
  • An object of the present invention is to provide a two-dimensional-code-based augmented reality method, system, and mobile terminal that, without any stored sample image, decode the two-dimensional code in a scene image and re-encode it to generate a front view consistent with the code in the scene, so that multimedia information related to the code (video, images, text, 3D models) can be presented.
  • To this end, the present invention provides a two-dimensional-code-based augmented reality method, including: a camera module capturing a real-scene image containing a two-dimensional code; scanning the two-dimensional code in the scene image and decoding it to obtain its encoded information, the encoded information including code system, version, and resource information; and re-encoding the obtained encoded information to generate a two-dimensional-code front view with the same code system and version as the code in the scene image, while parsing the resource information to obtain the virtual information corresponding to the code;
  • performing feature detection on the two-dimensional-code front view and on the code image in the scene image captured by the camera module, obtaining a feature description of each; performing image registration from the two feature descriptions and computing the camera pose to obtain a homography matrix;
  • according to the homography matrix, rendering and displaying the virtual information corresponding to the two-dimensional code at the code's position in the real scene or at a set offset from it.
  • Preferably, when parsing the resource information yields text content, the text content is rendered as a texture; when it yields a resource URI, that URI is accessed to obtain the virtual information, which is then loaded in the manner preset for its type.
  • Preferably, the virtual information types include one or more of video, image, text, and 3D model.
  • The two-dimensional code in the real-scene image is either a conventional two-dimensional code or a customized two-dimensional code; the resource information in a customized code includes one or more of a resource identifier, a resource type, a resource-loading interface size, and a rendering-position offset.
  • Preferably, the feature detection on the two-dimensional-code front view and on the code image in the scene image captured by the camera module, yielding a feature description of each, is performed in one of the following two ways. Way 1: full-image feature detection is performed on both the front view and the code image in the captured scene image, yielding a feature description of each;
  • Way 2: feature detection is performed only on the stable region of the front view and of the code image in the captured scene image, yielding a feature description of each.
  • Correspondingly, the present invention also proposes a two-dimensional-code-based augmented reality system, comprising:
  • a camera module configured to capture a real-scene image containing a two-dimensional code;
  • a two-dimensional-code decoding module configured to scan the code in the scene image and decode it to obtain its encoded information, the encoded information including code system, version, and resource information;
  • a two-dimensional-code encoding module that re-encodes the information parsed by the decoding module to generate a front view with the same code system and version as the code in the scene image;
  • a resource-obtaining module configured to parse the resource information in the encoded information to obtain the virtual information corresponding to the code;
  • an image-feature-extraction module configured to perform feature detection on the front view and on the code image in the captured scene image, obtaining a feature description of each; an image-tracking-and-registration module configured to perform image registration from the front-view and scene-code feature descriptions and compute the camera pose to obtain a homography matrix;
  • a rendering-and-display module configured to render and display, according to the homography matrix, the virtual information corresponding to the code at the code's position in the real scene or at a set offset from it.
  • Preferably, the image-feature-extraction module performs feature detection on the front view and on the code image in the captured scene image in one of the following two ways. Way 1: full-image feature detection is performed on both, yielding a feature description of each;
  • Way 2: feature detection is performed only on the stable region of the front view and of the code image in the captured scene image, yielding a feature description of each.
  • The two-dimensional code in the real-scene image is either conventional or customized; the resource information in a customized code includes one or more of a resource identifier, a resource type, a resource-loading interface size, and a rendering-position offset.
  • Preferably, when the resource-obtaining module parses the resource information and obtains text content, the text content is rendered as a texture; when it obtains a resource URI, the module accesses that URI to obtain the virtual information and loads it in the manner preset for its type, the types including one or more of video, image, text, and 3D model.
  • Correspondingly, the present invention also proposes a mobile terminal comprising the above two-dimensional-code-based augmented reality system.
  • Compared with the prior art, the present invention has the following beneficial effects:
  • 1. The invention regenerates, directly from the two-dimensional code in the scene image, a front view consistent with that code, uses the generated front view to track and match the code image in the scene, and computes the camera pose to obtain the homography matrix. No sample image of the code needs to be stored in a database, so the method applies to any two-dimensional code and breaks the limitation of traditional augmented reality, in which the corresponding sample image must be pre-stored in a database before tracking and matching can be performed.
  • 2. Because no sample image is needed, the query-and-match steps against a remote server are avoided, which reduces the system-response delay caused by network transmission and saves the user's network communication traffic.
  • 3. The invention deeply exploits the potential of the two-dimensional code as an information portal, so that the information and resources related to the code are presented to the user in a more vivid form.
  • FIG. 1 is a schematic flowchart of a two-dimensional-code-based augmented reality method according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the process and effect of superimposing virtual information according to the method of FIG. 1;
  • FIG. 3 shows two-dimensional-code images of several commonly used code systems;
  • FIG. 4 shows a two-dimensional code whose occludable area is covered by a small icon.
  • Detailed Description
  • Existing augmented reality generally registers the real-scene image against a sample image to obtain the homography matrix. This approach requires that, for any image to support augmented reality, a registration sample image be stored on the mobile terminal or on the server side; if neither stores the sample image of a real scene or its feature-point information, the fusion of virtual information with that scene cannot be realized, so the spread of augmented reality technology is limited by sample images.
  • Moreover, sample images stored on the mobile terminal occupy terminal storage space and cannot cover a massive number of markers, while if sample images are stored on a remote server, their retrieval and download delay the system response and waste the user's network communication traffic.
  • Decoding a two-dimensional code yields its code system, version, error-correction level, and other information; after decoding, the code can be re-encoded with the same code system and version to generate a front view exactly identical to the original code pattern.
  • The invention therefore proposes using the two-dimensional code as the recognition and localization marker of an augmented reality system: the code in the real-scene image is decoded and re-encoded to generate a front view identical to the code in the scene, which is then used as the sample image for feature-point tracking and matching against the code in the scene; the matched feature points are used to compute the homography matrix.
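The decode-then-re-encode idea rests on one property of 2D-code encoders: the same (code system, version, data) triple always reproduces the same module pattern. The sketch below illustrates this with a deterministic toy encoder; it is not a real QR/Data Matrix encoder, and the resource URL is a hypothetical example.

```python
import hashlib

def encode(code_system: str, version: int, data: str, size: int = 21) -> list:
    """Toy stand-in for a 2D-code encoder: deterministically maps
    (code system, version, data) to a size x size black/white module
    matrix.  Real encoders share the property relied on here: identical
    inputs always yield an identical pattern."""
    seed = "{}|{}|{}".format(code_system, version, data).encode()
    bits, counter = [], 0
    while len(bits) < size * size:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in digest for i in range(8))
        counter += 1
    return [bits[r * size:(r + 1) * size] for r in range(size)]

# The scene image contains some code; a decoder reports its code system,
# version and resource information (URL below is hypothetical).
scene_pattern = encode("QR", 2, "https://example.com/res/42")
decoded = ("QR", 2, "https://example.com/res/42")

# Re-encoding the decoded information with the same code system and version
# regenerates a front view identical to the code in the scene image.
front_view = encode(*decoded)
assert front_view == scene_pattern
```

This identity is what lets the regenerated front view stand in for the pre-stored sample image that traditional augmented reality requires.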
  • The stable region of a two-dimensional code is defined relative to its occludable region: because a two-dimensional code has a certain error-correction capability, it can still be decoded correctly when part of it is occluded (FIG. 4 shows a code with a small icon placed over its occludable region; this code can still be decoded correctly).
  • We therefore call the area that can be covered without affecting correct decoding the occludable region, and the area outside it the stable region.
  • Referring to FIG. 1, a schematic flowchart of a two-dimensional-code-based augmented reality method includes the following steps S101 to S105:
  • S101: the camera module captures a real-scene image containing a two-dimensional code. The code may be a conventional two-dimensional code, i.e. a code from the network whose resource information is a text field or a URI link, or a customized two-dimensional code, i.e. a code whose resource information includes one or more of a resource identifier, a resource type, a resource-loading interface size, a rendering-position offset, and the like;
  • S102: scan the two-dimensional code in the scene image and decode it to obtain its encoded information, including code system, version, and resource information, where the resource information is the content obtained by scanning the code;
  • the resource information corresponding to the code may be text information or a resource URI; in the latter case the remote server is accessed at the URI address to obtain the virtual-information content corresponding to that URI;
  • S103: re-encode the obtained encoded information to generate a front view with the same code system and version as the code in the scene image. The front view may be exactly identical to the code pattern in the scene image (when the occludable region of the scene code is not occluded), or identical only to the stable-region pattern of the code in the scene image;
  • S104: perform feature detection on the front view and on the code image in the captured scene image, obtaining a feature description of each; perform image registration from the two descriptions (i.e. feature-point matching, finding the feature points that match between the two), compute the camera pose, and obtain the homography matrix;
  • The feature detection on the front view and on the code image in the captured scene image may be full-image feature detection on both (this works better when the occludable region of the scene code is not occluded, and some matching error may occur when that region is partially or fully covered), or feature detection only on the stable region of the front view and of the code image in the captured scene image (this method is insensitive to occlusion of the occludable area and still matches the images well).
  • The feature-detection method may be FAST, Harris, Shi-Tomasi, etc., and the feature description may be SIFT, SURF, ORB, BRIEF, FREAK, etc. These techniques are all prior art and are not described here.
  • The two-dimensional code in the real scene is a planar object and determines a world coordinate system; the front view generated by decoding and re-encoding lies in an image coordinate system. Taking, for example, four pairs of corresponding feature points between the front view and the code image in the captured scene image, the relation between each point's world coordinates (X, Y) and image coordinates (u, v) can be written with the homography H as s·(u, v, 1)ᵀ = H·(X, Y, 1)ᵀ, where s is a scale factor; each correspondence gives two linear constraints, so four correspondences determine the eight degrees of freedom of H;
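Under the standard planar-homography relation s·(u, v, 1)ᵀ = H·(X, Y, 1)ᵀ, four point correspondences give eight linear equations for the eight unknowns of H (fixing h₃₃ = 1). The pure-Python direct linear solution below is an illustrative sketch only, not the patent's implementation; a production system would typically estimate H robustly (e.g. with RANSAC) over many matched features.

```python
def homography_from_4pts(world, image):
    """Solve s*(u,v,1)^T = H*(X,Y,1)^T for H (h33 fixed to 1) from exactly
    four (X, Y) -> (u, v) correspondences via Gauss-Jordan elimination."""
    A = []
    for (X, Y), (u, v) in zip(world, image):
        # u = (h11*X + h12*Y + h13) / (h31*X + h32*Y + 1), similarly for v
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, v])
    n = 8
    for col in range(n):
        piv = max(range(col, n), key=lambda r, c=col: abs(A[r][c]))
        A[col], A[piv] = A[piv], A[col]           # partial pivoting
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    h = [A[i][n] / A[i][i] for i in range(n)] + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(H, X, Y):
    """Apply the homography to a world point and dehomogenize."""
    u = H[0][0] * X + H[0][1] * Y + H[0][2]
    v = H[1][0] * X + H[1][1] * Y + H[1][2]
    w = H[2][0] * X + H[2][1] * Y + H[2][2]
    return u / w, v / w
```

For example, mapping the unit square to an arbitrary quadrilateral and projecting each world corner through the recovered H reproduces the corresponding image corner.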
  • S105: according to the homography matrix, render and display the virtual information corresponding to the code at the code's position in the real scene or at a set offset from it.
  • If the code is a conventional two-dimensional code, the virtual information is superimposed directly at the code's position; if it is a customized code, the resource-loading interface size, rendering-position offset, and so on are set in the resource information, and the virtual information is overlaid at a position offset from the target according to the set size and offset.
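The placement rule just described can be sketched as a small helper; the unit convention (offset expressed as multiples of the code's width and height) is an illustrative assumption, since the patent does not fix concrete units.

```python
def render_rect(code_x, code_y, code_w, code_h,
                interface_size=None, offset=(0.0, 0.0)):
    """Compute the rectangle in which the virtual information is drawn.

    A conventional code gets the overlay exactly on the code; a customized
    code shifts and resizes it by the resource-loading interface size and
    the rendering-position offset (here in code-side units, an assumed
    convention)."""
    w, h = interface_size if interface_size else (code_w, code_h)
    dx, dy = offset
    return (code_x + dx * code_w, code_y + dy * code_h, w, h)
```

For a conventional code `render_rect(100, 200, 64, 64)` simply returns the code's own rectangle, while a customized code with `interface_size=(128, 32)` and `offset=(1.0, 0.0)` places a 128x32 panel one code-width to the right.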
  • When parsing the resource information yields text content, the text content is rendered as a texture; when it yields a resource URI, that URI is accessed to obtain the virtual information, which is loaded in the manner preset for its type, the types including one or more of video, image, text, and 3D model. For example, when the virtual information is text, the text content is used directly as a texture; a 3D model must first be parsed; and video must first be decoded, with each video frame used as a texture and mapped frame by frame onto a graphics plane for rendering.
  • After the homography matrix is obtained in S104, the two-dimensional code in the real scene can also be tracked, with the tracking algorithm updating the homography matrix in real time so that the virtual information overlaps the predetermined location more accurately.
  • The condition for judging loss of tracking may be: recompute the matching score of the successfully tracked points and count them; when the number of well-matched points falls below a threshold (generally in the range 5 to 20, preferably 10), tracking is judged lost.
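The tracking-loss test above reduces to a count against a threshold. A minimal sketch, assuming per-point match scores in [0, 1] and an illustrative 0.7 "well-matched" cutoff (the patent specifies only the count threshold, not the score scale):

```python
def tracking_lost(match_scores, score_min=0.7, count_threshold=10):
    """Re-evaluate tracked points each frame: count those whose match
    score is still good, and declare tracking lost when that count drops
    below the threshold (the text suggests 5-20, preferably 10).
    The 0.7 score cutoff is an illustrative assumption."""
    good = sum(1 for s in match_scores if s >= score_min)
    return good < count_threshold
```

When tracking is judged lost, the pipeline would fall back to full re-detection and re-registration of the code (S104).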
  • The method of the embodiments of the present invention can be applied to two-dimensional codes such as PDF417, QR Code, Data Matrix, Grid Matrix, and Aztec; schematic images of several conventional code systems are shown in FIG. 3.
  • The two-dimensional code may be a conventional or a customized two-dimensional code.
  • A so-called conventional two-dimensional code is one from the Internet whose resource information is generally a character string, e.g. a piece of text or the URI of a resource related to the code;
  • a so-called customized two-dimensional code is one whose resource information is formed according to a unified format and includes one or more of a resource identifier, a resource type, a resource-loading interface size, a rendering-position offset, and other setting information.
  • The resource address is determined by the identifier and URI (the address may be on a remote server or local to the client); the resource type presets how the related resource is loaded (pictures, text, audio, video, and 3D models are loaded differently); the other settings include the resource-loading interface size and the rendering-position offset relative to the code.
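Since the patent leaves the customized code's "unified format" unspecified, the sketch below assumes a simple semicolon-separated key=value layout purely for illustration; the field names and defaults are hypothetical.

```python
def parse_custom_resource(payload: str) -> dict:
    """Parse a customized code's resource information.  Assumed layout
    (hypothetical): 'id=...;type=...;size=WxH;offset=DX,DY' -- only 'id'
    is required; the remaining fields fall back to defaults."""
    fields = dict(part.split("=", 1) for part in payload.split(";") if part)
    if "id" not in fields:
        raise ValueError("resource identifier missing")
    w, _, h = fields.get("size", "0x0").partition("x")
    dx, _, dy = fields.get("offset", "0,0").partition(",")
    return {
        "resource_id": fields["id"],
        "resource_type": fields.get("type", "text"),
        "interface_size": (int(w), int(h)),
        "render_offset": (float(dx), float(dy)),
    }
```

The parsed dictionary then drives the type-specific loader (picture, text, audio, video, or 3D model) and the placement of the rendered overlay.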
  • Corresponding to the above method, the invention also provides a two-dimensional-code-based augmented reality system, comprising a camera module, a two-dimensional-code decoding module, a two-dimensional-code encoding module, a resource-obtaining module, an image-feature-extraction module, an image-tracking-and-registration module, and a rendering-and-display module, where:
  • the camera module is configured to capture a real-scene image containing a two-dimensional code, where the code in the real-scene image is conventional or customized, and the resource information in a customized code includes one or more of a resource identifier, a resource type, a resource-loading interface size, and a rendering-position offset;
  • the two-dimensional-code decoding module is configured to scan the code in a scene image and decode it to obtain its encoded information, including code system, version, and resource information;
  • the two-dimensional-code encoding module is configured to re-encode, with the same code system and version, the information parsed by the decoding module, generating a front view identical to the code pattern in the scene image, or identical to the stable-region pattern of the code in the scene image; the resource-obtaining module is configured to parse the resource information in the encoded information to obtain the virtual information corresponding to the code;
  • the image-feature-extraction module is configured to perform feature detection on the front view and on the code image in the captured scene image, obtaining a feature description of each;
  • the image-tracking-and-registration module is configured to perform image registration from the front-view and scene-code feature descriptions and compute the camera pose to obtain a homography matrix;
  • the rendering-and-display module is configured to render and display, according to the homography matrix, the virtual information corresponding to the code at the code's position in the real scene or at a predetermined offset from it. When the resource-obtaining module parses the resource information and obtains text content, the text content is rendered as a texture; when it obtains a resource URI, the module accesses that URI to obtain the virtual information and loads it in the manner preset for its type, the types including one or more of video, image, text, and 3D model.
  • After the image-tracking-and-registration module obtains the homography matrix, the code in the real scene is tracked, with the tracking algorithm updating the homography matrix in real time so that the virtual information overlaps the predetermined location more accurately.
  • Correspondingly, the present invention also proposes a mobile terminal comprising the above two-dimensional-code-based augmented reality system.
  • The invention uses the two-dimensional code as the recognition and localization marker of the augmented reality system and generates the front view directly by decoding and re-encoding the code, using it to compute the homography matrix. This breaks the limitation of traditional augmented reality applications, in which the corresponding sample image must be pre-stored before tracking and matching can be performed, and avoids the template query-and-match round trip to a remote server required by traditional markers, reducing the system-response delay caused by network transmission and saving the user's network traffic.
  • The invention is not limited to the specific embodiments described above; it extends to any new feature or any new combination of features disclosed in this specification, and to any new method or process step, or any new combination thereof, so disclosed.

Abstract

The present invention discloses a two-dimensional-code-based augmented reality method that uses the two-dimensional code as the recognition and localization marker of the augmented reality system: the code in the real-scene image is decoded and re-encoded to directly generate an identical front view, the generated front view is tracked and matched against the code image in the scene image, and the homography matrix is computed. Correspondingly, the invention also discloses a two-dimensional-code-based augmented reality system and a mobile terminal. This breaks the limitation of traditional augmented reality applications, in which a corresponding sample image must be pre-stored in a database before tracking and matching can be performed, and avoids the template query-and-match step against a remote server required by traditional markers, reducing the system-response delay caused by network transmission and saving the user's network communication traffic.

Description

Two-dimensional-code-based augmented reality method, system and terminal

Technical Field

The present invention relates to the field of mobile augmented reality, and in particular to a two-dimensional-code-based augmented reality method, system, and mobile terminal.

Background Art

A two-dimensional code, also known as a two-dimensional barcode, records data symbol information by distributing black-and-white geometric patterns over a plane (in two dimensions) according to specific rules. Its encoding cleverly exploits the "0"/"1" bit-stream concept that forms the logical basis of computers, using a number of geometric shapes corresponding to binary values to represent textual and numerical information.

In recent years two-dimensional codes have been widely adopted, and mobile applications dedicated to scanning and recognizing them have appeared one after another; but after scanning a code, these applications simply display the decoded text, or a link to a video, web page, or other resource, which is not very engaging to use.

While two-dimensional codes become ever more common, augmented reality (AR) has slowly begun to enter the public eye. Its core idea is to superimpose virtual information in real time onto the scene presented by the real environment, using the virtual information to supplement and enhance the real scene so that both are displayed together in the real world. In existing augmented reality technology, superimposing virtual information onto a real scene requires computing the relative pose between the camera and the scene, that is, registering the real-scene image against a sample image to obtain a homography matrix. Consequently, if neither the mobile terminal nor the augmented reality server stores a sample image of a real scene or the feature-point information of that sample image, the fusion of virtual information with that scene cannot be achieved.

Summary of the Invention

An object of the present invention is to provide a two-dimensional-code-based augmented reality method, system, and mobile terminal that, without any sample image, decode the two-dimensional code in a scene image and re-encode it to generate a front view consistent with the code in the scene, perform feature detection and matching on the front view and on the code image in the scene image captured by the camera module, and compute the homography matrix, so that multimedia information related to the code (video, images, text, 3D models) is presented at the code's position in the real scene or at a set offset from it. To achieve the above object, the present invention provides a two-dimensional-code-based augmented reality method, including: a camera module capturing a real-scene image containing a two-dimensional code;
scanning the two-dimensional code in the scene image and decoding it to obtain its encoded information, the encoded information including code system, version, and resource information;

re-encoding the obtained encoded information of the code to generate a front view with the same code system and version as the code in the scene image, while parsing the resource information to obtain the virtual information corresponding to the code;

performing feature detection on the two-dimensional-code front view and on the code image in the scene image captured by the camera module, obtaining a feature description of each; performing image registration from the two feature descriptions and computing the camera pose to obtain a homography matrix;

according to the homography matrix, rendering and displaying the virtual information corresponding to the code at the code's position in the real scene or at a set offset from it.

Preferably, when parsing the resource information yields text content, the text content is rendered as a texture; when it yields a resource URL, that URL is accessed to obtain the virtual information, which is loaded in the manner preset for its type.

Preferably, the virtual information types include one or more of video, image, text, and 3D model. The two-dimensional code in the real-scene image is a conventional or a customized two-dimensional code; the resource information in a customized code includes one or more of a resource identifier, a resource type, a resource-loading interface size, and a rendering-position offset.

Preferably, the feature detection on the front view and on the code image in the captured scene image, yielding a feature description of each, is performed in one of the following two ways. Way 1: full-image feature detection is performed on both the front view and the code image in the captured scene image, yielding a feature description of each;

Way 2: feature detection is performed only on the stable region of the front view and of the code image in the captured scene image, yielding a feature description of each.
相应的, 本发明还提出了一种基于二维码的增强现实系统, 包括:
摄像模块, 用于捕获含有二维码的真实场景图像;
二维码解码模块, 用于扫描场景图像中的二维码, 并对二维码进行解码, 获取二维码 的编码信息, 所述编码信息包括: 码制、 版本及资源信息;
二维码编码模块,对所述二维码解码模块解析出来的编码信息进行再编码,生成与场 景图像中的二维码码制及版本相同的二维码正视图;
资源获取模块,用于解析所述编码信息中的资源信息,以获取二维码对应的虚拟信息; 图像特征提取模块,用于对二维码正视图以及摄像模块捕获的场景图像中的二维码图 像进行特征检测, 分别得到二者的特征描述- 图像跟踪配准模块,用于根据二维码正视图及场景图像中的二维码图像特征描述进行 图像配准, 计算摄像机的姿态, 得到单应性矩阵;
a rendering and display module for rendering and displaying, according to the homography matrix, the virtual information corresponding to the code at the position of the code in the real scene or at a given offset from it.
Preferably, the image feature extraction module performs feature detection on the front view of the code and on the code image in the scene image captured by the camera module in one of the following two forms: Form 1: full-image feature detection is performed on the front view of the code and on the code image in the scene image captured by the camera module, obtaining feature descriptions of each;
Form 2: feature detection is performed only on the stable regions of the front view of the code and of the code image in the scene image captured by the camera module, obtaining feature descriptions of each.
Preferably, the two-dimensional code in the real-scene image is a conventional code or a customized code; the resource information of a customized code comprises one or more of a resource identifier, a resource type, resource-loading interface dimensions and a rendering position offset.
Preferably, when the resource acquisition module parses the resource information and obtains text content, the text content is rendered as a texture; when it obtains a resource URI, that URI is accessed to obtain the virtual information, which is loaded in a preset manner according to its type, the virtual information types comprising one or more of: video, image, text and 3D model.
Correspondingly, the present invention also proposes a mobile terminal comprising the above two-dimensional-code-based augmented reality system.
Compared with the prior art, the present invention has the following beneficial effects:
1. The invention regenerates, directly from the two-dimensional code in the scene image, a front view identical to that code, tracks and matches the generated front view against the code image in the scene image, computes the camera pose and obtains the homography matrix. No code sample image needs to be stored in a database, and the method applies to any two-dimensional code, removing the limitation of conventional augmented reality applications that a corresponding sample image must be stored in a database in advance before tracking and matching can be performed.
2. Since no sample image is needed, the invention avoids the query and matching steps against a remote server, reducing system response delays caused by network transmission and saving the user's network traffic.
3. The invention deeply exploits the potential of the two-dimensional code as an information entry point, presenting the information and resources associated with the code to the user in a more vivid form. Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort:
Fig. 1 is a schematic flow chart of the two-dimensional-code-based augmented reality method according to an embodiment of the present invention;
Fig. 2 illustrates the process and effect of superimposing virtual information according to the method of Fig. 1;
Fig. 3 shows two-dimensional code images of several common symbologies;
Fig. 4 shows an application in which the occludable region of a two-dimensional code is covered by a small icon. Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Those skilled in the art will appreciate that in an augmented reality process, superimposing virtual information accurately onto a target object requires computing the camera pose to determine the homography matrix from the coordinate system of the target object to the image coordinate system.
The inventors of the present application have found that existing augmented reality generally obtains the homography matrix by registering the real-scene image against a sample image. This approach requires that, for every image on which augmented reality is to be performed, a sample image for registration be stored on the mobile terminal or the server; if neither stores a sample image of a given real scene or the feature-point information of that sample image, virtual information cannot be fused with that scene, so the spread of augmented reality technology is limited by sample images. Moreover, storing sample images on the mobile terminal occupies terminal storage and cannot accommodate a massive number of markers, while storing them on a remote server means that retrieving and downloading them delays the system response and wastes the user's network traffic.
The inventors of the present application have further found that two-dimensional codes are easy to use and widespread, and that as an information entry point a code can be generated from any information and can carry any piece of text or the URI of any resource. Most importantly, since decoding a two-dimensional code yields its symbology, version, error-correction level and other information, the decoded data can be re-encoded with the same symbology and version to generate a front view whose pattern is exactly the same as the original code, or exactly the same as the stable region of the original code. The invention therefore proposes using the two-dimensional code as the recognition and positioning marker in an augmented reality system: the code in the real-scene image is decoded and re-encoded to generate a front view identical to the code image in the real-scene image, which serves as the sample image for feature-point tracking and matching against the code in the scene; the matching feature points of the two are found and used to compute the homography matrix.
The stable region of a two-dimensional code is defined relative to its occludable region. Because a two-dimensional code has a certain error-correction capability, it can be decoded correctly even when part of it is covered (Fig. 4 shows a code with a small icon placed in its occludable region; this code can still be decoded correctly). We therefore call the region that can be covered without affecting correct decoding the occludable region, and the region outside it the stable region.
The solution of the present invention is described in detail below with reference to the drawings.
Referring to Fig. 1 and Fig. 2, the two-dimensional-code-based augmented reality method according to an embodiment of the present invention comprises the following steps S101 to S105:
S101: A camera module captures a real-scene image containing a two-dimensional code. The code may be a conventional code or a customized code: a conventional code is a code from the network whose resource information contains a text field or a URI link, while a customized code is one whose resource information comprises one or more of a resource identifier, a resource type, resource-loading interface dimensions, a rendering position offset and other settings;
S102: The two-dimensional code in the scene image is scanned and decoded to obtain its encoding information, which comprises the symbology, the version, the resource information and so on, the resource information being the related information obtained by scanning the code;
S103: The obtained encoding information of the code is re-encoded to generate a front view of the code with the same symbology and version as the code in the scene image, while the resource information is parsed to obtain the virtual information corresponding to the code. The resource information corresponding to the code may be text, or it may be a resource URI; in the latter case the remote server is accessed at the URI address to obtain the virtual information content corresponding to the URI;
In this step, re-encoding the obtained encoding information generates a front view of the code with the same symbology and version as the code in the scene image. This front view may have exactly the same pattern as the code in the scene image (when the occludable region of the scene code is not covered at all), or exactly the same pattern as the stable region of the code in the scene image (when the occludable region of the scene code is partly or wholly covered: thanks to its error-correction capability the code can still be decoded correctly even when a small icon covers a certain area, and re-encoding the decoded information generates a front view carrying the same data as the original code, but without the covering icon present in the original).
S104: Feature detection is performed on the front view of the code and on the code image in the scene image captured by the camera module to obtain feature descriptions of each; image registration is performed according to the two feature descriptions (i.e. feature-point matching, finding the matching feature points of the two), the camera pose is computed and the homography matrix is obtained;
In this step, the feature detection performed on the front view and on the code image in the scene image may be full-image feature detection on both (this form works well when the occludable region of the scene code is not covered, but may introduce some matching error when that region is partly or wholly covered), or feature detection on the stable regions of both only (this form makes no demand on whether the occludable region of the scene code is covered, and matches the images well in either case).
The feature detection method may be FAST, Harris, Shi-Tomasi, etc., and the features may be described with SIFT, SURF, ORB, BRIEF, FREAK, etc.; these are all existing techniques and are not detailed here. The two-dimensional code in the real scene is a planar object and defines a world coordinate system, while the front view generated by decoding and re-encoding belongs to the image coordinate system. Taking as an example four pairs of corresponding feature points matched between the front view and the code image in the scene image captured by the camera module, the coordinates of these four points in the world coordinate system and in the image coordinate system can be related as follows:
$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= H \begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix},
\qquad
H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}
$$

where $(u, v)$ denotes the image coordinates of a feature point and $(X_w, Y_w)$ its coordinates in the world coordinate system defined by the code plane (the code is planar, so $Z_w = 0$ and the third world coordinate can be dropped); these are all known parameters, $s$ is a scale factor, and $H$ is the homography matrix to be computed. The more feature points are matched, the more accurate the result.
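As a concrete sketch of the registration computation above, the following code estimates the homography from matched point pairs with the standard direct linear transform (DLT). This is one common way to realize the described calculation, not necessarily the patent's exact procedure; the names `estimate_homography` and `project` are introduced here for illustration.

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Estimate the 3x3 homography H mapping planar world points (X_w, Y_w)
    to image points (u, v) with the direct linear transform (DLT).

    At least 4 correspondences are required; with more, the SVD yields a
    least-squares solution, matching the remark above that more matched
    feature points give a more accurate result."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-X, -Y, -1.0, 0.0, 0.0, 0.0, u * X, u * Y, u])
        A.append([0.0, 0.0, 0.0, -X, -Y, -1.0, v * X, v * Y, v])
    # H (up to scale) is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free scale so that h33 = 1

def project(H, X, Y):
    """Apply H to a world point on the code plane and dehomogenize."""
    u, v, w = H @ np.array([X, Y, 1.0])
    return u / w, v / w
```

With exact correspondences the recovered matrix equals the true homography up to numerical precision; in a real system the correspondences would come from the descriptor-matching step and would typically be filtered for outliers (e.g. with RANSAC) first.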
S105: According to the homography matrix, the virtual information corresponding to the code is rendered and displayed at the position of the code in the real scene or at a given offset from it.
In this step, if the code is a conventional code, the virtual information is superimposed directly at the position of the code; if it is a customized code whose resource information sets resource-loading interface dimensions, a rendering position offset and so on, the virtual information is displayed with the set dimensions at the set offset from the target position. Furthermore, when parsing the resource information yields text content, the text content is rendered as a texture; when it yields a resource URI, that URI is accessed to obtain the virtual information, which is loaded in a preset manner according to its type, the types comprising one or more of: video, image, text and 3D model. For example: when the virtual information is text, the text content is rendered directly as a texture; when it is a 3D model, the model must first be parsed; when it is video, the video must first be decoded, each frame used as a texture and mapped frame by frame in sequence onto the 3D model for graphics rendering.
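The type-dependent loading described in this step can be sketched as a small dispatcher. Only the behavior is taken from the text (text rendered directly as a texture, other resources fetched from their URI, video additionally decoded frame by frame); the `TYPE` values, the dict shape of `resource` and the returned descriptors are illustrative assumptions, not an API defined by the patent.

```python
def load_virtual_info(resource):
    """Dispatch loading of the virtual information by its type.

    `resource` is assumed to be a dict such as
    {"TYPE": "text", "content": "..."} or {"TYPE": "video", "URI": "..."}.
    """
    kind = resource.get("TYPE", "text")
    if kind == "text":
        # Text content is used directly as a rendering texture.
        return {"texture": resource["content"]}
    if kind == "image":
        return {"fetch": resource["URI"], "then": "use-as-texture"}
    if kind == "3d":
        # A 3D model must be parsed before rendering.
        return {"fetch": resource["URI"], "then": "parse-model"}
    if kind == "video":
        # Video is decoded and each frame mapped as a texture in sequence.
        return {"fetch": resource["URI"], "then": "decode-frames"}
    raise ValueError("unknown virtual information type: " + kind)
```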
In the embodiment of the present invention, after the homography matrix is obtained in step S104, the two-dimensional code in the real scene can be tracked (tracking the feature points successfully matched in S104), with the tracking algorithm updating the homography matrix in real time so that the virtual information overlays the intended position more accurately. When tracking is lost, the method re-enters step S101. The condition for judging tracking loss may be: recompute the matching quality of the successfully tracked points and count them; when the number of well-matched points falls below a threshold (generally in the range 5 to 20, preferably 10), tracking is judged lost.
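The tracking-loss condition above is simple to state in code. The count threshold of 10 comes from the text; the per-point match-score cutoff of 0.7 is an assumed value, since the text only says that the match quality of the tracked points is recomputed.

```python
def tracking_lost(match_scores, score_cutoff=0.7, min_good=10):
    """Return True when tracking should be considered lost.

    `match_scores` holds the recomputed match quality of the points that
    were being tracked; a point counts as well matched when its score
    reaches `score_cutoff` (assumed value).  Loss is declared when fewer
    than `min_good` points remain well matched (threshold range 5-20 in
    the text, preferably 10)."""
    good = sum(1 for score in match_scores if score >= score_cutoff)
    return good < min_good
```

When this returns `True`, the pipeline would re-enter step S101 and re-detect the code.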
The method of this embodiment can be applied to two-dimensional codes such as PDF417, QR Code, Data Matrix, Grid Matrix and Aztec; see Fig. 3 for illustrations of several common symbologies.
In this embodiment the code may be conventional or customized. A conventional code comes from the Internet, and its resource information is generally a character string, e.g. a piece of text or the URI of a resource related to the code; a customized code is a resource identifier formed in a unified format, i.e. the resource information of the code comprises one or more of a resource identifier, a resource type, resource-loading interface dimensions, a rendering position offset and other settings. For example,
ID:xxx URI:xxx TYPE:xxx WIDTH:xxx HEIGHT:xxx OFFSET:xxx. The ID and URI determine the resource address (which may be on a remote server or local to the client); the resource type allows the loading manner of the resource to be preset (images, text, audio, video and 3D models are loaded differently); the other settings include the dimensions of the resource-loading interface, the offset of the rendering position relative to the code, and so on.
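A parser for such a unified-format resource string might look as follows. The whitespace-separated `KEY:value` layout and the integer conversion of the layout fields are inferred from the example above and are assumptions, as are the concrete sample values used below.

```python
def parse_custom_resource(text):
    """Parse a customized code's resource string, e.g.
    'ID:7 URI:http://example.com/m.obj TYPE:3d WIDTH:200 HEIGHT:100 OFFSET:10'
    (values here are illustrative placeholders).

    Returns a dict of the settings; WIDTH/HEIGHT/OFFSET are converted to
    integers when present."""
    fields = {}
    for token in text.split():
        # partition() splits at the first ':' only, so URIs containing
        # 'http://...' keep their scheme intact.
        key, sep, value = token.partition(":")
        if not sep:
            continue  # ignore malformed tokens rather than failing
        fields[key] = value
    for key in ("WIDTH", "HEIGHT", "OFFSET"):
        if key in fields:
            fields[key] = int(fields[key])
    return fields
```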
The present invention also proposes a two-dimensional-code-based augmented reality system corresponding to the above method, comprising: a camera module, a two-dimensional-code decoding module, a two-dimensional-code encoding module, a resource acquisition module, an image feature extraction module, an image tracking and registration module, and a rendering and display module, wherein:
the camera module captures a real-scene image containing a two-dimensional code, the code in the real-scene image being a conventional code or a customized code; the resource information of a customized code comprises one or more of a resource identifier, a resource type, resource-loading interface dimensions and a rendering position offset;
the two-dimensional-code decoding module scans the code in the scene image and decodes it to obtain the encoding information of the code, the encoding information comprising: the symbology, the version and the resource information;
the two-dimensional-code encoding module encodes the encoding information parsed by the decoding module with the same symbology and version, generating a front view whose pattern is exactly the same as the code in the scene image, or exactly the same as the stable region of the code in the scene image; the resource acquisition module parses the resource information in the encoding information to obtain the virtual information corresponding to the code;
the image feature extraction module performs feature detection on the front view of the code and on the code image in the scene image captured by the camera module, obtaining feature descriptions of each;
the image tracking and registration module performs image registration according to the feature descriptions of the front view and of the code image in the scene image, computes the camera pose and obtains the homography matrix;
the rendering and display module renders and displays, according to the homography matrix, the virtual information corresponding to the code at the position of the code in the real scene or at a given offset from it. When the resource acquisition module parses the resource information and obtains text content, the text content is rendered as a texture; when it obtains a resource URI, that URI is accessed to obtain the virtual information, which is loaded in a preset manner according to its type, the virtual information types comprising one or more of: video, image, text and 3D model.
In the embodiment of the present invention, after the image tracking and registration module obtains the homography matrix, it tracks the code in the real scene, with the tracking algorithm updating the homography matrix in real time so that the virtual information overlays the intended position more accurately.
The present invention also proposes a mobile terminal comprising the above two-dimensional-code-based augmented reality system.
The present invention uses the two-dimensional code as the recognition and positioning marker in an augmented reality system, generating the front view of the code directly by decoding and re-encoding it, for use in computing the homography matrix. This removes the limitation of conventional augmented reality applications, in which a corresponding sample image must be stored in a database in advance before tracking and matching can be performed, and also avoids the template query and matching steps against a remote server that conventional markers require, reducing system response delays caused by network transmission and saving the user's network traffic.
All features disclosed in this specification, and the steps of all methods or processes disclosed, may be combined in any manner except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any appended claims, abstract and drawings) may, unless specifically stated otherwise, be replaced by an alternative feature that is equivalent or serves a similar purpose. That is, unless specifically stated otherwise, each feature is only one example of a series of equivalent or similar features.
The present invention is not limited to the foregoing specific embodiments. The present invention extends to any new feature or any new combination disclosed in this specification, and to the steps of any new method or process disclosed or any new combination thereof.

Claims

Claims
1. A two-dimensional-code-based augmented reality method, characterized by comprising:
a camera module capturing a real-scene image containing a two-dimensional code;
scanning the two-dimensional code in the scene image and decoding it to obtain the encoding information of the code, the encoding information comprising: the symbology, the version and the resource information;
re-encoding the obtained encoding information of the code to generate a front view of the code with the same symbology and version as the code in the scene image, while parsing the resource information to obtain the virtual information corresponding to the code;
performing feature detection on the front view of the code and on the code image in the scene image captured by the camera module to obtain feature descriptions of each; performing image registration according to the two feature descriptions, computing the camera pose and obtaining a homography matrix;
according to the homography matrix, rendering and displaying the virtual information corresponding to the code at the position of the code in the real scene or at a given offset from it.
2. The method according to claim 1, characterized in that: when parsing the resource information yields text content, the text content is rendered as a texture; when parsing the resource information yields a resource URI, that URI is accessed to obtain the virtual information, which is loaded in a preset manner according to the type of the virtual information.
3. The method according to claim 2, characterized in that the virtual information types comprise one or more of: video, image, text and 3D model.
4. The method according to any one of claims 1 to 3, characterized in that:
the two-dimensional code in the real-scene image is a conventional code or a customized code;
the resource information of a customized code comprises one or more of a resource identifier, a resource type, resource-loading interface dimensions and a rendering position offset.
5. The method according to any one of claims 1 to 3, characterized in that the feature detection performed on the front view of the code and on the code image in the scene image captured by the camera module, obtaining feature descriptions of each, takes one of the following two forms: Form 1: full-image feature detection is performed on the front view of the code and on the code image in the scene image captured by the camera module, obtaining feature descriptions of each;
Form 2: feature detection is performed only on the stable regions of the front view of the code and of the code image in the scene image captured by the camera module, obtaining feature descriptions of each.
6. A two-dimensional-code-based augmented reality system, characterized by comprising: a camera module for capturing a real-scene image containing a two-dimensional code;
a two-dimensional-code decoding module for scanning the code in the scene image and decoding it to obtain the encoding information of the code, the encoding information comprising: the symbology, the version and the resource information;
a two-dimensional-code encoding module for re-encoding the encoding information parsed by the decoding module to generate a front view of the code with the same symbology and version as the code in the scene image;
a resource acquisition module for parsing the resource information in the encoding information to obtain the virtual information corresponding to the code; an image feature extraction module for performing feature detection on the front view of the code and on the code image in the scene image captured by the camera module to obtain feature descriptions of each;
an image tracking and registration module for performing image registration according to the feature descriptions of the front view and of the code image in the scene image, computing the camera pose and obtaining a homography matrix;
a rendering and display module for rendering and displaying, according to the homography matrix, the virtual information corresponding to the code at the position of the code in the real scene or at a given offset from it.
7. The system according to claim 6, characterized in that the image feature extraction module performs feature detection on the front view of the code and on the code image in the scene image captured by the camera module in one of the following two forms: Form 1: full-image feature detection is performed on the front view of the code and on the code image in the scene image captured by the camera module, obtaining feature descriptions of each;
Form 2: feature detection is performed only on the stable regions of the front view of the code and of the code image in the scene image captured by the camera module, obtaining feature descriptions of each.
8. The system according to claim 6 or 7, characterized in that:
the two-dimensional code in the real-scene image is a conventional code or a customized code;
the resource information of a customized code comprises one or more of a resource identifier, a resource type, resource-loading interface dimensions and a rendering position offset.
9. The system according to claim 6 or 7, characterized in that:
when the resource acquisition module parses the resource information and obtains text content, the text content is rendered as a texture; when it obtains a resource URI, that URI is accessed to obtain the virtual information, which is loaded in a preset manner according to its type, the virtual information types comprising one or more of: video, image, text and 3D model.
10. A mobile terminal, characterized in that the mobile terminal comprises the two-dimensional-code-based augmented reality system according to any one of claims 6 to 9.
PCT/CN2013/081876 2012-12-30 2013-08-20 基于二维码的增强现实方法、系统及终端 WO2014101435A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210587153.1 2012-12-30
CN201210587153.1A CN103049729B (zh) 2012-12-30 2012-12-30 基于二维码的增强现实方法、系统及终端

Publications (1)

Publication Number Publication Date
WO2014101435A1 true WO2014101435A1 (zh) 2014-07-03

Family

ID=48062362

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/081876 WO2014101435A1 (zh) 2012-12-30 2013-08-20 基于二维码的增强现实方法、系统及终端

Country Status (2)

Country Link
CN (1) CN103049729B (zh)
WO (1) WO2014101435A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986163A (zh) * 2018-06-29 2018-12-11 南京睿悦信息技术有限公司 基于多标识识别的增强现实定位算法

Families Citing this family (19)

Publication number Priority date Publication date Assignee Title
CN103049729B (zh) * 2012-12-30 2015-12-23 成都理想境界科技有限公司 基于二维码的增强现实方法、系统及终端
CN103996063A (zh) * 2014-06-12 2014-08-20 北京金山网络科技有限公司 一种数据处理方法及装置
CN104504402A (zh) * 2015-01-15 2015-04-08 刘畅 一种基于图像搜索的数据处理方法及系统
CN104504155B (zh) * 2015-01-15 2018-06-08 刘畅 一种基于图像搜索的数据获取方法及系统
CN105989390A (zh) * 2015-02-11 2016-10-05 北京鼎九信息工程研究院有限公司 一种二维码的生成方法及装置
CN107250802B (zh) * 2015-02-24 2018-11-16 株式会社日立高新技术 自动分析装置
CN104834680B (zh) * 2015-04-13 2017-11-07 西安教育文化数码有限责任公司 一种索引式增强现实方法
CN104850582B (zh) * 2015-04-13 2017-11-07 西安教育文化数码有限责任公司 一种索引式增强现实系统
CN105446626A (zh) * 2015-12-04 2016-03-30 上海斐讯数据通信技术有限公司 基于增强现实技术的商品信息获取方法、系统及移动终端
CN105787534B (zh) * 2016-02-29 2018-07-10 上海导伦达信息科技有限公司 融合二维码及ar码内容识别与学习并以增强现实实现方法
CN106251404B (zh) * 2016-07-19 2019-02-01 央数文化(上海)股份有限公司 方位跟踪方法、实现增强现实的方法及相关装置、设备
CN106897648B (zh) * 2016-07-22 2020-01-31 阿里巴巴集团控股有限公司 识别二维码位置的方法及其系统
CN106372144A (zh) * 2016-08-26 2017-02-01 江西科骏实业有限公司 二维码处理装置和方法
CN106408667B (zh) * 2016-08-30 2019-03-05 西安小光子网络科技有限公司 基于光标签的定制现实方法
CN108665035B (zh) * 2017-03-31 2020-12-22 清华大学 码标的生成方法及装置
CN107464288A (zh) * 2017-07-24 2017-12-12 腾讯科技(深圳)有限公司 模型展示方法及装置
CN109840951A (zh) * 2018-12-28 2019-06-04 北京信息科技大学 针对平面地图进行增强现实的方法及装置
CN111859199A (zh) * 2019-04-30 2020-10-30 苹果公司 在环境中定位内容
TWI785332B (zh) * 2020-05-14 2022-12-01 光時代科技有限公司 基於光標籤的場景重建系統

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102308599A (zh) * 2009-02-04 2012-01-04 摩托罗拉移动公司 在移动虚拟和增强现实系统中创建虚拟涂鸦的方法和装置
CN102800065A (zh) * 2012-07-13 2012-11-28 苏州梦想人软件科技有限公司 基于二维码识别跟踪的增强现实设备及方法
CN102821323A (zh) * 2012-08-01 2012-12-12 成都理想境界科技有限公司 基于增强现实技术的视频播放方法、系统及移动终端
US20120327117A1 (en) * 2011-06-23 2012-12-27 Limitless Computing, Inc. Digitally encoded marker-based augmented reality (ar)
CN103049729A (zh) * 2012-12-30 2013-04-17 成都理想境界科技有限公司 基于二维码的增强现实方法、系统及终端

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN101944187B (zh) * 2010-09-07 2014-04-02 龚湘明 二维微型编码及其处理方法、装置
CN102323880A (zh) * 2011-06-30 2012-01-18 中兴通讯股份有限公司 基于浏览器解析方式的手机应用界面的开发方法和终端



Also Published As

Publication number Publication date
CN103049729B (zh) 2015-12-23
CN103049729A (zh) 2013-04-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13868893

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13868893

Country of ref document: EP

Kind code of ref document: A1