CN106846311B - Positioning and AR method, system, and application based on image recognition - Google Patents

Positioning and AR method, system, and application based on image recognition

Info

Publication number
CN106846311B
CN106846311B (application CN201710044706.1A)
Authority
CN
China
Prior art keywords: image, server, direction angle, client, sin
Prior art date
Legal status
Active
Application number
CN201710044706.1A
Other languages
Chinese (zh)
Other versions
CN106846311A
Inventor
吴东辉
Current Assignee
Guangzhou Xianjian Electronic Technology Co ltd
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201710044706.1A
Publication of CN106846311A
Application granted
Publication of CN106846311B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/10: Image acquisition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of image recognition and positioning, and in particular to an image-recognition-based positioning and AR method, system, and application. It is characterized in that the position of a viewpoint is determined either from the two viewing angles of at least two determined feature objects, or from one viewing angle of at least one determined object together with the ratio of line segments or areas of that object as projected in the viewing direction. The beneficial effects are: the invention provides an angle-based positioning method that can be applied to navigation positioning, AR navigation positioning, AR game positioning, and so on; positioning can be achieved in any unfamiliar environment; and the method is suitable for indoor positioning.

Description

Positioning and AR method, system, and application based on image recognition

Technical field

The present invention relates to the field of image recognition and positioning, and in particular to an image-recognition-based positioning and AR method, system, and application.

Background art

Currently, positioning and navigation based on image recognition generally either compare a photo taken at the positioning point against one of a set of preset photos to obtain the location, or build a 3D model and compare the photo taken at the positioning point against that 3D model. Both methods require a pre-built photo database or 3D model database, so positioning is impossible at an arbitrary location for which no database exists.

Augmented Reality (AR) is a technology that computes the position and angle of camera images in real time and overlays corresponding images, videos, and 3D models; its goal is to superimpose the virtual world on the real world on the screen and allow interaction with it.

When AR technology is applied to positioning, only the angular relationships need to be considered; the distance relationships can be ignored.

Likewise, for indoor positioning only the angular relationships need to be considered; the distance relationships can be ignored.

The present invention is based on image recognition, including existing mature feature recognition, object contour recognition, color recognition, action recognition, face recognition, text recognition, etc.; different environments adopt the corresponding recognition strategy.

Summary of the invention

The purpose of the present invention is to provide an angle-based positioning method that can be applied to navigation positioning, AR navigation positioning, AR game positioning, etc.

The idea of the present invention is to determine the position of a viewpoint either from the two viewing angles of two determined objects (feature objects), or from one viewing angle of a single determined object together with the ratio of the lengths of two of its line segments, or of the areas of two of its faces, as projected in the viewing direction.

Referring to Figure 2, the position O of the viewpoint is determined from the two viewing angles β1 and β2 of the two determined objects (feature objects) A and B.

Or, referring to Figure 3, one viewing angle β3 of a determined object, together with the ratio D2E2/E2F2 of the projections of two of its line segments DE and EF in the viewing direction, determines the position O of the viewpoint.

Or, referring to Figure 4, one viewing angle β4 of a determined object, together with the area ratio SO1/SO2 of the projections of two of its faces S1 and S2 in the viewing direction, determines the position O of the viewpoint.

A feature object is an object that can be unambiguously identified through image recognition in a computer system, or a determined object manually designated in the computer system. A feature object is not a special kind of object; it can be any object in the positioning space, provided it can be determined through image recognition by the computer system. For example, if two mobile phones photograph the digit "5" from different locations and the computer system determines through image recognition (a digit recognition strategy) that both phones photographed the same digit "5", then the digit "5" is a feature object. Alternatively, it can be a relatively stable, fixed object or pattern that is discovered and identified by the computer system's image recognition software.

In client terms: two clients each capture images of their surroundings together with the direction angles at which the images were taken and send them to the server; the server determines through image recognition the images of an object photographed in common by both clients, and uses the differences between those images and their direction angles to compute the direction angle of the line connecting the two clients.

Further, the two position points each obtain the direction angles of at least two feature objects, and the direction angle of the line connecting the two position points is computed from those direction angles.

Or, the two position points each obtain the ratio and direction angles of two line segments that do not lie on the same straight line, and the direction angle of the connecting line is computed from the ratio and direction angles.

Or, the two position points each obtain the area ratio and direction angles of two faces of the same object (two faces not in the same plane), and the direction angle of the connecting line is computed from the area ratio and direction angles.

Further, a method and system are provided for superimposing the virtual world on the real world on the screen and interacting with it (friend screen positioning, merchant screen positioning, game-target screen positioning, red-envelope screen positioning, virtual-advertisement screen positioning). The system can be based on an independent instant messaging (IM) platform, provided through a third-party service (API), or embedded in an existing IM platform such as QQ, WeChat, or Momo.

The technical solution adopted by the present invention is as follows:

An image-recognition-based positioning method, characterized in that: the position of a viewpoint is determined either from the two viewing angles of at least two determined objects (feature objects), or from one viewing angle of at least one determined object together with the length ratio or area ratio of the projections, in the viewing direction, of two line segments or two faces of that object.

The image-recognition-based positioning method is further characterized in that: two position points each obtain the direction angles of at least two objects (feature objects), and the direction angle of the line connecting the two position points is computed from those direction angles.

The image-recognition-based positioning method is further characterized in that: two position points each obtain the ratio and direction angles of two line segments that do not lie on the same straight line, and the direction angle of the line connecting the two position points is computed from the ratio and direction angles.

The image-recognition-based positioning method is further characterized in that: two position points each obtain the area ratio and direction angles of two faces of the same object, and the direction angle of the line connecting the two position points is computed from the area ratio and direction angles.

The image-recognition-based positioning method further involves a client and a server, and is characterized by comprising the steps:

(1) A unified baseline is determined through the client's direction sensor;

(2) At position C, the image A1 of object (feature) A and its angle α1 relative to the baseline are obtained, along with the image B1 of object (feature) B and its angle α2 relative to the baseline; images A1 and B1 and angles α1 and α2 are uploaded to the server;

(3) At position O, the image A2 of object (feature) A and its angle β1 relative to the baseline are obtained, along with the image B2 of object (feature) B and its angle β2 relative to the baseline; images A2 and B2 and angles β1 and β2 are uploaded to the server;

(4) The server determines through image recognition that images A1 and A2 both depict object (feature) A and that images B1 and B2 both depict object (feature) B, and computes from α1, α2, β1, and β2 the angle γ of the line OC relative to the baseline.

That is: γ = F1(α1, α2, β1, β2), where F1 is the computation function.

Of course, obtaining more objects (features) for the computation and combining the results is beneficial to the present invention.

Or,

The image-recognition-based positioning method further involves a client and a server, and is characterized by comprising the steps:

(1) A unified baseline is determined through the client's direction sensor;

(2) Line segments DE and EF of the segment object DEF do not lie on the same straight line. At position C, the perpendicular-projection image D1E1F1 of segment object DEF and the angle α3 between the projection direction and the baseline are obtained; image D1E1F1 and α3 are uploaded to the server;

(3) At position O, the perpendicular-projection image D2E2F2 of segment object DEF and the angle β3 between the projection direction and the baseline are obtained; image D2E2F2 and β3 are uploaded to the server;

(4) The server determines through image recognition that images D1E1F1 and D2E2F2 both depict the segment object DEF, and computes from the ratios D1E1/E1F1 and D2E2/E2F2 and the angles α3 and β3 the angle γ of the line OC relative to the baseline.

That is: γ = F2(D1E1/E1F1, α3, D2E2/E2F2, β3), where F2 is the computation function.

Of course, obtaining more segment objects for the computation and combining the results is beneficial to the present invention.
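The text leaves the function F2 unspecified; the following is a minimal numerical sketch of the underlying idea, under the assumption that the planar coordinates of D, E, and F are known (function and parameter names are illustrative only). The ratio of the projections of DE and EF onto the image plane varies with the camera bearing, so an observed ratio constrains that bearing, here recovered by brute-force grid search:

```python
import math

def projected_ratio(D, E, F, theta):
    """Ratio |D'E'| / |E'F'| of the projections of DE and EF onto the image
    plane of a camera whose optical axis has bearing theta (radians)."""
    # unit vector lying in the image plane, perpendicular to the optical axis
    px, py = -math.sin(theta), math.cos(theta)
    de = abs((E[0] - D[0]) * px + (E[1] - D[1]) * py)
    ef = abs((F[0] - E[0]) * px + (F[1] - E[1]) * py)
    return de / ef if ef > 1e-12 else math.inf

def bearings_matching_ratio(D, E, F, ratio, steps=3600, tol=1e-2):
    """Grid-search the bearings in [0, pi) whose projected ratio matches an
    observed ratio; ambiguities (several matching bearings) are possible."""
    hits = []
    for i in range(steps):
        theta = math.pi * i / steps
        if abs(projected_ratio(D, E, F, theta) - ratio) < tol:
            hits.append(theta)
    return hits
```

With D=(0,0), E=(1,0), F=(1,1), for example, the ratio reduces to tan θ, so the observed ratio pins the bearing down to a small set of candidates; combining the observations from two positions, as in steps (2) to (4), then fixes γ.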

Or,

The image-recognition-based positioning method further involves a client and a server, and is characterized by comprising the steps:

(1) A unified baseline is determined through the client's direction sensor;

(2) The two faces of the three-dimensional object G have areas S1 and S2. At position C, the perpendicular-projection image G1 of object G and the angle α4 between the projection direction and the baseline are obtained; image G1 and α4 are uploaded to the server;

(3) At position O, the perpendicular-projection image G2 of object G and the angle β4 between the projection direction and the baseline are obtained; image G2 and β4 are uploaded to the server;

(4) The server determines through image recognition that images G1 and G2 both depict the three-dimensional object G. In image G1, SC1 is the image area of face S1 and SC2 the image area of face S2; in image G2, SO1 is the image area of face S1 and SO2 the image area of face S2. From the ratios SC1/SC2 and SO1/SO2 and the angles α4 and β4, the angle γ of the line OC relative to the baseline is computed.

That is: γ = F3(SC1/SC2, α4, SO1/SO2, β4), where F3 is the computation function.

Of course, obtaining more three-dimensional objects for the computation and combining the results is beneficial to the present invention.
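F3 is likewise left unspecified; a minimal sketch of the foreshortening idea, assuming the two faces are planar with known unit normals n1, n2 and true areas S1, S2 (all names are illustrative assumptions). The image area of a face seen from horizontal direction θ scales with |n·v|, so the observed area ratio constrains θ:

```python
import math

def image_area_ratio(n1, S1, n2, S2, theta):
    """Ratio of the projected (image) areas of two planar faces, viewed from
    horizontal direction theta, under a simple foreshortening model."""
    vx, vy = math.cos(theta), math.sin(theta)
    a1 = S1 * abs(n1[0] * vx + n1[1] * vy)   # foreshortened area of face 1
    a2 = S2 * abs(n2[0] * vx + n2[1] * vy)   # foreshortened area of face 2
    return a1 / a2 if a2 > 1e-12 else math.inf
```

For a box corner with perpendicular faces, n1=(1,0) and n2=(0,1), the ratio becomes (S1/S2)·cot θ, so on (0, π/2) the bearing can be recovered as atan((S1/S2)/ratio).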

Of course, the feature objects, segment objects, and three-dimensional objects above can be combined in the computation.

Further, the image-recognition-based positioning method is characterized by comprising the step:

The client obtains the angle γ computed by the server and uses it to locate or navigate on the client's guidance map.

Or, the image-recognition-based positioning method is characterized by comprising the steps: the client obtains the angle γ computed by the server, turns on the camera to capture a live image, and superimposes a guidance icon for the angle γ on the live image.

Still further, the angles obtained by the client also include the vertical tilt angle, used to determine the direction angle in the vertical plane (by the same reasoning); combined with the horizontal-plane direction angle above, this determines the direction angle in three-dimensional space.
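As a small illustration (the axis conventions are an assumption, not fixed by the text), a horizontal direction angle combined with a vertical tilt yields a single 3D unit direction:

```python
import math

def direction_vector(bearing, tilt):
    """Unit direction vector from a horizontal bearing and a vertical tilt
    (both in radians); x-y is the horizontal plane, z points up."""
    return (math.cos(tilt) * math.cos(bearing),
            math.cos(tilt) * math.sin(bearing),
            math.sin(tilt))
```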

An AR method based on image-recognition positioning, characterized in that what the angle γ guides toward is a mutual friend, a merchant, a game target, a red envelope, or an advertisement.

The AR method based on image-recognition positioning is characterized by:

① Client 1 starts the red-envelope publishing program, photographs surrounding images, simultaneously obtains their direction angles, and uploads them to the server;

② The server obtains the surrounding images taken by client 1 and their direction angles;

③ The server generates a virtual red-envelope image according to client 1's red-envelope settings and associates it with the images and direction angles obtained in step ②; the red envelope's action assignment includes direction angle, horizontal tilt, and vertical height, together with changes in these values;

④ Client 2 photographs surrounding images, simultaneously obtains their direction angles, and uploads them to the server;

⑤ From the surrounding images and direction angles of client 1 obtained in step ② and those of client 2 obtained in step ④, the server computes the location where client 1 published the red envelope, i.e., the red envelope's location, and pushes the virtual red-envelope image to client 2;

⑥ Client 2 superimposes the virtual red-envelope image on the live view on its screen according to the red envelope's location; meanwhile, the virtual red-envelope image moves according to its action assignment;

⑦ Client 2 grabs the red envelope via the touch screen.
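Steps ① to ⑦ can be sketched as a server-side data model: the record keeps the anchor images with their direction angles (step ②) and the action assignment (step ③), and first-touch claiming implements step ⑦. All field and function names here are illustrative assumptions, not from the text:

```python
from dataclasses import dataclass, field

@dataclass
class EnvelopeAnchor:
    image_id: str           # surrounding image uploaded by client 1 (step 2)
    direction_angle: float  # bearing recorded when the image was taken

@dataclass
class RedEnvelope:
    envelope_id: str
    anchors: list           # EnvelopeAnchor records from client 1
    amount: float
    # action assignment (step 3): bearing, horizontal tilt, vertical height
    motion: dict = field(default_factory=lambda: {
        "direction_angle": 0.0, "tilt": 0.0, "height": 1.5})
    claimed_by: str = None

def try_claim(envelope, client_id):
    """Step 7: the first client to touch the envelope claims it."""
    if envelope.claimed_by is None:
        envelope.claimed_by = client_id
        return True
    return False
```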

The AR method based on image-recognition positioning is further characterized in that the steps for the server to publish an advertisement are:

① The server obtains the surrounding images taken by any client and their direction angles;

② The server generates a virtual advertisement image according to the advertiser's settings and associates it with the images and direction angles obtained in step ①; the advertiser's settings include placement position, text, images, and image actions;

③ The ad-recipient client photographs surrounding images, simultaneously obtains their direction angles, and uploads them to the server;

④ From the surrounding images and direction angles taken by any client in step ① and those taken by the ad-recipient client in step ③, the server computes the advertisement placement position;

⑤ The ad-recipient client superimposes the virtual advertisement image on the live view on its screen according to the placement position; meanwhile, the virtual advertisement image moves according to its action assignment;

⑥ The ad-recipient client interacts with the advertisement content via the touch screen, e.g., following links, redirecting, or bookmarking.

Further, the image-recognition-based positioning method is characterized by also comprising the step: determine the distance between two feature objects, the size of a segment object, or the size of a three-dimensional object, and compute the distance between the two positions using trigonometric functions.

An image-recognition-based positioning system, characterized by comprising a server and a client.

The server includes:

an image recognition unit, used to determine feature objects, segment objects, or three-dimensional objects;

a computation unit, which computes the direction angle of the line connecting two position points from the direction angles of two feature objects; or from the ratio and direction angles of two line segments that do not lie on the same straight line; or from the area ratio and direction angles of two faces of a three-dimensional object;

a result push unit, which pushes the computed direction angle of the connecting line to the client.

The client includes at least a direction acquisition unit, an image superposition unit, and a camera.

The image-recognition-based positioning system is further characterized in that: the system is embedded in an existing IM system, payment system, or game system.

The image-recognition-based positioning system is further characterized in that: the client is a mobile phone or a tablet computer.

The specific embodiments of the present invention are described with reference to a mobile phone. Current mobile phone sensors include acceleration sensors, direction sensors, gyroscopes, thermometers, etc.; the information that can be obtained includes acceleration, magnetic field, rotation vector, gyroscope readings, ambient light, pressure, temperature, proximity, and gravity.

The beneficial effects of the present invention are: it provides an angle-based positioning method that can be applied to navigation positioning, AR navigation positioning, AR game positioning, etc.; positioning can be achieved in any unfamiliar environment; and it is suitable for indoor positioning.

Description of the drawings

Figure 1 is a schematic diagram of determining the angle of a positioning point using two feature objects according to the present invention.

Figure 2 is a geometric diagram of determining the angle of a positioning point using two feature objects according to the present invention.

Figure 3 is a schematic diagram of determining the angle of a positioning point using a segment object according to the present invention.

Figure 4 is a schematic diagram of determining the angle of a positioning point using a three-dimensional object according to the present invention.

Figure 5 is a schematic diagram of the AR positioning interface of the present invention.

Figure 6 is a hardware configuration diagram of the present invention.

Figure 7 is a flow chart of the present invention.

Figure 8 is a flow chart of the AR application of the present invention.

Figure 9 is a worked example of the angle computation of the present invention.

Detailed description

The present invention is further described below with reference to the accompanying drawings and embodiments.

Figure 1 is a schematic diagram of determining the angle of a positioning point using two feature objects. 101 is a mobile phone at point O; 102 is another mobile phone at point C; in the positioning space, 103 is feature object A and 104 is feature object B. A feature object is an object the server can unambiguously identify through image recognition, or a manually placed marker (registered with the server for recognition, such as a specified advertising image or text). 105 is the server. Suppose phone 101 is to find the position of phone 102 (i.e., the direction of OC). First, phone 102 rotates while photographing the positioning space, for example capturing continuously through 360° in the horizontal plane while recording the direction angles and uploading them to the server. It thereby necessarily captures an image of feature A together with the direction angle (viewing angle) at which it was taken; at that moment the direction of feature A is perpendicular to the screen of phone 102 (for simplicity only the horizontal plane is considered for now; the vertical plane is handled with the tilt angle, by the same reasoning), i.e., the image of feature A falls at the center of the screen. The direction sensor (magnetic element) in the phone then gives the angle between the direction of feature A and the baseline (the baseline is fixed by a determined north-south angle line; for simplicity the east-west direction, i.e., the WE or OX direction, is taken as the baseline in the figure); this is the direction angle at which the image was taken. Feature B is photographed in the same way, and the captured images and their direction angles are uploaded to the server. Phone 101 likewise rotates and photographs the positioning space to find phone 102, obtains images of the same two features and their direction angles, and uploads them to the server. From the direction angles of the two features at the two locations, the server computes by plane geometry the unique direction angle γ of OC (its angle with the baseline OX). Referring to Figure 2, the geometric diagram of determining the angle of a positioning point using two feature objects, the steps are:

(1) A unified baseline is determined through the client's direction sensor;

(2) At position C, the image A1 of feature A and its angle α1 relative to the baseline are obtained, along with the image B1 of feature B and its angle α2 relative to the baseline; images A1 and B1 and angles α1 and α2 are uploaded to the server;

(3) At position O, the image A2 of feature A and its angle β1 relative to the baseline are obtained, along with the image B2 of feature B and its angle β2 relative to the baseline; images A2 and B2 and angles β1 and β2 are uploaded to the server;

(4) The server determines through image recognition that images A1 and A2 both depict feature A and that images B1 and B2 both depict feature B, and computes from α1, α2, β1, and β2 the angle γ of the line OC relative to the baseline.

That is: γ = F1(α1, α2, β1, β2), where F1 is the computation function.

Further, once the distance between the two feature objects is determined, the distance between the two mobile phones can be computed using trigonometric functions.
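For example, with the distance AB between the two features known, the law-of-sines relation used in the Figure 9 computation, OA = AB·sin(δ+β2)/sin(β1−β2), gives an observer-to-feature distance directly. A minimal transcription, with the angle conventions assumed from that computation:

```python
import math

def distance_to_feature(AB, delta, beta1, beta2):
    """Distance OA from observer O to feature A, from the law of sines in
    triangle AOB: OA = AB*sin(delta+beta2)/sin(beta1-beta2). Radians."""
    return AB * math.sin(delta + beta2) / math.sin(beta1 - beta2)
```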

A computation process is given below with reference to Figure 9:

In triangle ABC:

AB/sin(α1+α2) = AC/sin(δ−α2) = BC/sin(π−α1−δ) = BC/sin(α1+δ), which gives AC = AB·sin(δ−α2)/sin(α1+α2);

In triangle AOB:

AB/sin(β1−β2) = OB/sin(π−δ−β1) = OB/sin(δ+β1) = OA/sin(δ+β2), which gives OA = AB·sin(δ+β2)/sin(β1−β2);

In triangle AOC:

OA/sin(π−γ+α1) = OA/sin(γ−α1) = AC/sin(γ−β1).

Substituting the expressions for OA and AC: AB·sin(δ+β2)/[sin(β1−β2)·sin(γ−α1)] = AB·sin(δ−α2)/[sin(α1+α2)·sin(γ−β1)], which rearranges to sin(γ−α1)/sin(γ−β1) = sin(α1+α2)·sin(δ+β2)/[sin(δ−α2)·sin(β1−β2)];

Let K = sin(α1+α2)·sin(δ+β2)/[sin(δ−α2)·sin(β1−β2)]; then sin(γ−α1)/sin(γ−β1) = K;

Expanding: sin γ·cos α1 − cos γ·sin α1 = K·sin γ·cos β1 − K·cos γ·sin β1;

(K·sin β1 − sin α1)·cos γ = (K·cos β1 − cos α1)·sin γ;

cos γ = sin γ·(K·cos β1 − cos α1)/(K·sin β1 − sin α1);

Let t = (K·cos β1 − cos α1)/(K·sin β1 − sin α1); then cos γ = t·sin γ;

Since cos²γ + sin²γ = 1, it follows that t²·sin²γ + sin²γ = 1;

sin²γ = 1/(1 + t²); for γ ∈ [0, π], sin γ ≥ 0, so sin γ = √(1/(1 + t²));

Finally, the angle γ is obtained.
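The derivation above translates directly into code. A sketch in which δ is taken to be the direction angle of the feature baseline AB (an assumption from the figure), all angles are in radians, and γ is returned in [0, π]:

```python
import math

def gamma_from_angles(a1, a2, b1, b2, delta):
    """Direction angle gamma of line OC from the four feature bearings and
    the bearing delta of AB, following the Figure 9 derivation."""
    K = (math.sin(a1 + a2) * math.sin(delta + b2) /
         (math.sin(delta - a2) * math.sin(b1 - b2)))
    t = (K * math.cos(b1) - math.cos(a1)) / (K * math.sin(b1) - math.sin(a1))
    sin_g = math.sqrt(1.0 / (1.0 + t * t))   # sin(gamma) >= 0 on [0, pi]
    cos_g = t * sin_g                        # from cos(gamma) = t*sin(gamma)
    return math.atan2(sin_g, cos_g)
```

By construction the returned γ satisfies the governing relation sin(γ−α1)/sin(γ−β1) = K, which the derivation shows is equivalent to the law-of-sines constraints.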

Figure 3 is a schematic diagram of determining the angle of a positioning point using a segment object. 101 is a mobile phone at point O; 102 is another mobile phone at point C. In the positioning space, DEF is a segment object whose segments DE and EF do not lie on the same straight line; concretely, it can be a pair of line segments on some object that computer software can recognize and determine, i.e., an object the server can unambiguously identify through image recognition, or a manually placed marker (registered with the server for recognition). 105 is the server. Suppose phone 101 is to find the position of phone 102 (i.e., the direction of OC). First, phone 102 rotates while photographing the positioning space and obtains an image of segment object DEF together with the direction angle (viewing angle) at which it was taken; at that moment the direction of DEF is perpendicular to the screen of phone 102 (for simplicity only the horizontal plane is considered for now; the vertical plane is handled by the same reasoning), i.e., the image of DEF falls at the center of the screen. The direction sensor (magnetic element) in the phone gives the angle between DEF and the baseline (fixed by a determined north-south angle line; for simplicity the east-west direction, i.e., the WE or OX direction, is taken as the baseline in the figure), which is the direction angle at which the image was taken; when shooting is complete, the captured image and its direction angle are uploaded to the server. Phone 101 likewise rotates and photographs the positioning space to find phone 102, obtains an image of the same segment object DEF and its direction angle, and uploads them to the server. From the segment ratios and direction angles of the images of the same segment object DEF at the two locations, the server computes by plane geometry the unique direction angle γ of OC (its angle with the baseline OX). The steps are:

(1) Determine a unified baseline using the client's direction device;

(2) Segments DE and EF of the line segment object DEF are not on the same straight line; at position C, obtain the perpendicular-projection image D1E1F1 of DEF and the angle α3 between the projection direction and the baseline, and upload D1E1F1 and α3 to the server;

(3) At position O, obtain the perpendicular-projection image D2E2F2 of DEF and the angle β3 between the projection direction and the baseline, and upload D2E2F2 and β3 to the server;

(4) The server determines through image recognition that D1E1F1 and D2E2F2 are both images of the line segment object DEF, and computes the angle γ of line OC relative to the baseline from the ratios D1E1/E1F1 and D2E2/E2F2 together with α3 and β3.

That is: γ = F2(D1E1/E1F1, α3, D2E2/E2F2, β3), where F2 is the calculation function.

A line segment object can be understood as line segments in an object that do not lie on the same straight line.

Further, if the size of the line segment object is known, the distance between the two mobile phones can be calculated using trigonometric functions.
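A minimal sketch of such a distance estimate, assuming a pinhole-camera model with a known focal length expressed in pixels (the function and parameter names are illustrative, not from the patent):

```python
def distance_from_known_size(real_width_m, image_width_px, focal_length_px):
    """Pinhole-camera estimate: an object of known real width that spans
    image_width_px pixels in the photo lies at roughly this distance (meters)."""
    return real_width_m * focal_length_px / image_width_px
```

For example, a 2 m segment spanning 100 px with a 1000 px focal length gives an estimated distance of 20 m.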

Figure 4 is a schematic diagram of using a three-dimensional object to determine the direction angle of a positioning point in the present invention. 101 is a mobile phone located at point O; 102 is another mobile phone located at point C. In the positioning space, G is a three-dimensional object with two faces S1 and S2. Specifically, it may be any object that computer software can recognize and identify, i.e. an object the server can unambiguously determine through image recognition, or an artificially placed marker (registered with the server for recognition). 105 is the server. Suppose phone 101 is searching for the position of phone 102 (i.e. finding the direction angle of OC). First, phone 102 rotates and photographs the positioning space, capturing an image of the three-dimensional object G together with the direction angle (viewing angle) at the moment of capture. At that moment the direction toward G is perpendicular to the screen of phone 102 (for simplicity only the horizontal plane is considered for now; the vertical-plane calculation follows analogously), i.e. the image of G falls at the center of the screen of phone 102.
Using the direction sensor (magnetic element) in the phone, the angle between the direction toward G and the baseline is obtained (the baseline is fixed by a chosen north-south angle line; for simplicity, the east-west direction, i.e. the WE or OX direction, is used as the baseline in the figure) — this is the direction angle at the moment the image is taken. After shooting, the captured image and its direction angle are uploaded to the server. Phone 101 likewise rotates and photographs the positioning space in search of phone 102, obtains an image of the same three-dimensional object G and its direction angle, and uploads them to the server. From the area ratios of the two faces and the direction angles of the images of the same three-dimensional object G taken at the two locations, the server computes the unique direction angle γ of OC (its angle to the baseline OX) by geometry, in the following steps:

(1) Determine a unified baseline using the client's direction device;

(2) The areas of the two faces of the three-dimensional object G are S1 and S2; at position C, obtain the perpendicular-projection image G1 of G and the angle α4 between the projection direction and the baseline, and upload G1 and α4 to the server;

(3) At position O, obtain the perpendicular-projection image G2 of G and the angle β4 between the projection direction and the baseline, and upload G2 and β4 to the server;

(4) The server determines through image recognition that G1 and G2 are both images of the three-dimensional object G. In image G1, SC1 is the image area of S1 and SC2 is the image area of S2; in image G2, SO1 is the image area of S1 and SO2 is the image area of S2. The angle γ of line OC relative to the baseline is computed from the ratios SC1/SC2 and SO1/SO2 together with α4 and β4.

That is: γ = F3(SC1/SC2, α4, SO1/SO2, β4), where F3 is the calculation function.

Further, if the size of the three-dimensional object is known, the distance between the two mobile phones can be calculated using trigonometric functions.
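For intuition, here is a toy sketch of how the face-area ratio encodes the viewing direction. It assumes two perpendicular planar faces and an orthographic projection; the patent leaves the function F3 unspecified, so this is not the author's formula, only an illustration of the principle.

```python
import math

def view_angle_from_face_areas(S1, S2, img_area1, img_area2):
    """Angle theta between the viewing direction and the normal of face 1.

    Under orthographic projection with perpendicular faces, the apparent
    areas scale as S1*cos(theta) and S2*sin(theta), so their ratio is
    (S1/S2)*cot(theta) and theta can be recovered in (0, pi/2).
    """
    ratio = img_area1 / img_area2          # = (S1/S2) * cot(theta)
    return math.atan((S1 / S2) / ratio)
```

When both faces have equal true area and appear equally large, the viewing direction bisects them, giving θ = 45°.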

Figure 5 is a schematic diagram of the AR positioning interface of the present invention. The screen of phone 101 shows a live image of the positioning space, including feature objects A and B (it may of course also include line segment objects or three-dimensional objects). A dynamic indicator icon 501 is displayed to guide phone 101 toward position C, where phone 102 is located; 502 is the position cursor of phone 101.

The above method considers only horizontal-plane calculations; in practical applications it can be combined with the vertical plane (by the same reasoning) to form 3D positioning.

Figure 6 is the hardware configuration diagram of the present invention, comprising a server and clients:

The server comprises: an image-recognition unit for image recognition and for determining feature objects, line segment objects, three-dimensional objects, people, etc.; the specific image-recognition strategies include existing mature techniques such as feature recognition, object contour recognition, color recognition, action recognition, face recognition, and text recognition, with the appropriate strategy chosen for each environment.

A calculation unit for angle calculation, specifically trigonometric calculation.

A result-pushing unit that pushes the calculated result information, or an image of it, to the client.

The client is an existing mobile phone, comprising: a display screen and a camera;

An image-overlay unit for AR display;

A sensor unit, including a direction device for determining the north-south direction, a (two-dimensional) level sensor for measuring tilt (for horizontal positioning or vertical-angle measurement), and a gyroscope for motion measurement;

A network communication unit, including WiFi or a wireless communication network (2G, 3G, 4G, etc.);

GPS and LBS positioning units for geographic positioning, used together with the present invention to implement a comprehensive positioning strategy.

Figure 7 is the flow chart of the present invention, comprising the steps:

Client 1:

Determine the baseline, i.e. establish the unified baseline of the positioning system using the client's direction device; the specific direction is not restricted, but once it is chosen, all clients and the server use this baseline as the reference for angle measurement;

Photograph the live scene, synchronously record the direction angle, and upload both to the server: the client photographs the live scene with its camera while synchronously reading the angle from the direction device, and may further synchronously read the horizontal tilt (one- or two-dimensional) from the level device;

Client 2:

Determine the baseline, i.e. establish the unified baseline of the positioning system using the client's direction device; the specific direction is not restricted, but once it is chosen, all clients and the server use this baseline as the reference for angle measurement;

Photograph the live scene, synchronously record the direction angle, and upload both to the server: the client photographs the live scene with its camera while synchronously reading the angle from the direction device, and may further synchronously read the horizontal tilt (one- or two-dimensional) from the level device;

Server:

Determine at least two feature objects: after receiving the images uploaded by the clients, the server performs image recognition and determines the feature objects according to the recognition strategy;

From the angles of client 1 to the two feature objects relative to the baseline, and the angles of client 2 to the two feature objects relative to the baseline, compute the angle of the line connecting client 1 and client 2 relative to the baseline.

Or,

Determine at least one line segment object DEF whose segments DE and EF are not on the same straight line;

From the angle of client 1 to the line segment object DEF relative to the baseline and the DE/EF ratio in the image acquired by client 1, together with the angle of client 2 to DEF relative to the baseline and the DE/EF ratio in the image acquired by client 2, compute the angle of the line connecting client 1 and client 2 relative to the baseline.

Or,

Determine at least one three-dimensional object;

From the angle of client 1 to the three-dimensional object relative to the baseline and the ratio of the areas of the two faces in the image acquired by client 1, together with the angle of client 2 to the three-dimensional object relative to the baseline and the ratio of the areas of the two faces in the image acquired by client 2, compute the angle of the line connecting client 1 and client 2 relative to the baseline.

The server sends the position-angle information of client 2 to client 1, and client 1 overlays a position icon on its live view;

The server sends the position-angle information of client 1 to client 2, and client 2 overlays a position icon on its live view;
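The overlay step can be sketched as mapping the computed direction angle into screen coordinates, given the phone's current compass heading and the camera's horizontal field of view. This is a hypothetical helper, not code from the patent:

```python
def target_screen_x(target_angle_deg, heading_deg, fov_deg, screen_w_px):
    """Horizontal pixel position at which to draw the guide icon.

    Returns None when the target direction is outside the camera's field of
    view, in which case an edge arrow (like icon 501 in Figure 5) could be
    shown instead to tell the user which way to rotate.
    """
    # signed angular offset between target and heading, wrapped into [-180, 180)
    off = (target_angle_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(off) > fov_deg / 2.0:
        return None
    return screen_w_px / 2.0 + off / fov_deg * screen_w_px
```

A target 10° to the right of a 60°-wide view on a 1080 px screen lands 180 px right of center.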

Figure 8 is the AR application flow chart of the present invention, comprising the steps:

Client 1:

Determine the baseline, i.e. establish the unified baseline of the positioning system using the client's direction device; the specific direction is not restricted, but once it is chosen, all clients and the server use this baseline as the reference for angle measurement;

Photograph the live scene, synchronously record the direction angle, and upload both to the server: the client photographs the live scene with its camera while synchronously reading the angle from the direction device, and may further synchronously read the horizontal tilt (one- or two-dimensional) from the level device;

Client 2:

Determine the baseline, i.e. establish the unified baseline of the positioning system using the client's direction device; the specific direction is not restricted, but once it is chosen, all clients and the server use this baseline as the reference for angle measurement;

Photograph the live scene, synchronously record the direction angle, and upload both to the server: the client photographs the live scene with its camera while synchronously reading the angle from the direction device, and may further synchronously read the horizontal tilt (one- or two-dimensional) from the level device;

Server:

Determine at least two feature objects: after receiving the images uploaded by the clients, the server performs image recognition and determines the feature objects according to the recognition strategy;

From the angles of client 1 to the two feature objects relative to the baseline, and the angles of client 2 to the two feature objects relative to the baseline, compute the angle of the line connecting client 1 and client 2 relative to the baseline.

Or,

Determine at least one line segment object DEF whose segments DE and EF are not on the same straight line;

From the angle of client 1 to the line segment object DEF relative to the baseline and the DE/EF ratio in the image acquired by client 1, together with the angle of client 2 to DEF relative to the baseline and the DE/EF ratio in the image acquired by client 2, compute the angle of the line connecting client 1 and client 2 relative to the baseline.

Or,

Determine at least one three-dimensional object;

From the angle of client 1 to the three-dimensional object relative to the baseline and the ratio of the areas of the two faces in the image acquired by client 1, together with the angle of client 2 to the three-dimensional object relative to the baseline and the ratio of the areas of the two faces in the image acquired by client 2, compute the angle of the line connecting client 1 and client 2 relative to the baseline.

The server locates an AR red envelope and pushes it to the client;

Or,

The server locates a virtual advertisement and pushes it to the client;

Or,

The server locates an AR game object and pushes it to the client;

The client overlays the AR image on the live view on its screen.

The specific application modes of the image-recognition-based positioning and AR methods are:

(1) Instant-messaging clients are now in widespread use, and friends in the virtual space may never have met in person. The present invention can be embedded in existing instant-messaging software such as QQ or WeChat: two friends can turn on their cameras at any location to search for each other, with mutual indicator icons displayed on their phone screens. When the other party is found, a virtual image such as an avatar can be overlaid on the live view on the screen, and an audio or vibration prompt can be provided.

(2) One party first photographs the positioning space and uploads the images to the server; within a certain validity period, the other party photographs the positioning space to search for it.

(3) One party is a merchant: the merchant photographs the positioning space and uploads the images to the server, and a customer photographs the positioning space to find that merchant.

(4) Combined with Chinese patent publication 2016105984877, "An AR method and system", to provide positioning of red envelopes or virtual advertisements. The virtual image carries position and direction parameters, the client hardware includes position and direction sensors, and the virtual image displayed on the client changes as the orientation of the client hardware changes, i.e. VR technology.

The specific steps are:

① Client 1 starts the red-envelope publishing program, photographs its surroundings, synchronously acquires the direction angle, and uploads both to the server;

② The server obtains the surrounding image taken by client 1 and its direction angle;

③ The server generates a virtual red-envelope image according to client 1's red-envelope settings and associates it with the image and direction angle obtained by the server in step ②; the red envelope's action assignment includes the direction angle, horizontal tilt, and vertical height, as well as changes to these assignments;

④ Client 2 photographs its surroundings, synchronously acquires the direction angle, and uploads both to the server;

⑤ From the surrounding image and direction angle of client 1 obtained in step ②, and the surrounding image and direction angle taken by client 2 in step ④, the server computes the location at which client 1 published the red envelope, i.e. the red envelope's location, and pushes the virtual red-envelope image to client 2;

⑥ Client 2 overlays the virtual red-envelope image on its live view according to the red envelope's location; meanwhile, the virtual red-envelope image moves according to its action assignment;

⑦ Client 2 collects the red envelope via the touch screen.
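Steps ① to ⑦ suggest a simple server-side record for a published red envelope with a limited availability window. The following is a hedged sketch; every field and function name is invented for illustration and does not appear in the patent:

```python
import time
from dataclasses import dataclass, field

@dataclass
class RedEnvelope:
    """Server-side record associating a virtual red envelope with the
    publisher's direction angle and action assignment (steps 2 and 3)."""
    owner_id: str
    direction_deg: float      # direction angle recorded when photographing
    tilt_deg: float = 0.0     # horizontal tilt in the action assignment
    height_m: float = 0.0     # vertical height in the action assignment
    published_at: float = field(default_factory=time.time)
    ttl_s: float = 3600.0     # how long other clients may find it

    def is_active(self, now: float) -> bool:
        return now < self.published_at + self.ttl_s

def pick_for_client(envelopes, now):
    """Step 5, simplified: return envelopes still available for pushing."""
    return [e for e in envelopes if e.is_active(now)]
```

The same record shape would serve for the merchant-advertisement and game-object variants mentioned below, with the payload swapped.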

Of course, the above virtual red-envelope image can be replaced by a merchant advertisement or a virtual game-object image.

The steps for the server to publish an advertisement are:

① The server obtains a surrounding image taken by any client together with its direction angle;

② The server generates a virtual advertisement image according to the advertiser's settings (placement position, text, images, image actions, etc.) and associates it with the image and direction angle obtained by the server in step ①;

③ The advertisement-recipient client photographs its surroundings, synchronously acquires the direction angle, and uploads both to the server;

④ From the surrounding image and direction angle obtained in step ①, and the surrounding image and direction angle taken by the advertisement-recipient client in step ③, the server computes the advertisement's placement position;

⑤ The advertisement-recipient client overlays the virtual advertisement image on its live view according to the placement position; meanwhile, the virtual advertisement image moves according to its action assignment;

⑥ The advertisement-recipient client interacts with the advertisement content via the touch screen, e.g. following links, redirecting, bookmarking, etc.

(5) AR spatial positioning of game objects in a game system, game objects being, for example, virtual animals, virtual items, or virtual treasures in a game. Specifically, as in the embodiment, the phone at O rotates and photographs the positioning space, then issues an AR red envelope, virtual item, or virtual game object at O, which other phone clients are given a certain period of time to find.

(6) Face recognition is now very mature, and it can also be combined with clothing color and the like for transient ("one-pass") scene recognition, so that both parties can obtain each other's position and direction angle from the results of one-pass dynamic image recognition.

The above application modes and rules do not limit the basic features of the method and system of the present invention, nor the scope of protection of the present invention. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (13)

1. A method of positioning based on image recognition, characterized in that: a server and clients are provided; a first mobile phone is at point O and a second mobile phone is at point C; at least two feature objects A and B are determined in the positioning space; the first mobile phone searches for the direction angle γ toward the second mobile phone's position; the second mobile phone photographs the positioning space, synchronously records the direction angle, and uploads them to the server, acquiring an image of feature object A and the direction angle at which A was photographed, photographing feature object B in the same way, and uploading the captured images and their direction angles to the server; the first mobile phone photographs the positioning space, likewise acquires images of the two feature objects and their direction angles, and uploads them to the server; the server computes the unique direction angle γ of OC by geometry from the direction angles of the two feature objects at the two locations, in the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) Acquiring an image A1 of an object A and an angle alpha 1 relative to a reference line at a position C, acquiring an image B1 of an object B and an angle alpha 2 relative to the reference line at the position C, and uploading the images A1 and alpha 1 and the images B1 and alpha 2 to a server;
(3) The position O acquires an image A2 of the object A and an angle beta 1 relative to the datum line, the position O acquires an image B2 of the object B and an angle beta 2 relative to the datum line, and the images A2 and beta 1 and the images B2 and beta 2 are uploaded to a server;
(4) The server determines that the image A1 and the image A2 are both images of the object A through image recognition, the server determines that the image B1 and the image B2 are both images of the object B through image recognition, and the direction angle gamma of the OC straight line relative to the datum line is obtained through calculation of alpha 1, alpha 2, beta 1 and beta 2.
2. A method of image recognition based positioning according to claim 1, further characterized by calculating a direction angle γ:
In triangle ABC:
AB/sin(α1+α2) = AC/sin(δ−α2) = BC/sin(π−α1−δ) = BC/sin(α1+δ), deriving AC = AB·sin(δ−α2)/sin(α1+α2);
In triangle AOB:
AB/sin(β1−β2) = OB/sin(π−δ−β1) = OB/sin(δ+β1) = OA/sin(δ+β2), deriving OA = AB·sin(δ+β2)/sin(β1−β2);
In triangle AOC:
OA/sin(π−γ+α1) = OA/sin(γ−α1) = AC/sin(γ−β1),
substituting to obtain: AB·sin(δ+β2)/[sin(β1−β2)·sin(γ−α1)] = AB·sin(δ−α2)/[sin(α1+α2)·sin(γ−β1)], hence sin(γ−α1)/sin(γ−β1) = sin(α1+α2)·sin(δ+β2)/[sin(δ−α2)·sin(β1−β2)];
let K = sin(α1+α2)·sin(δ+β2)/[sin(δ−α2)·sin(β1−β2)]; then sin(γ−α1)/sin(γ−β1) = K;
sinγ·cosα1 − cosγ·sinα1 = K·sinγ·cosβ1 − K·cosγ·sinβ1;
(K·sinβ1 − sinα1)·cosγ = (K·cosβ1 − cosα1)·sinγ;
cosγ = sinγ·(K·cosβ1 − cosα1)/(K·sinβ1 − sinα1);
let t = (K·cosβ1 − cosα1)/(K·sinβ1 − sinα1); then cosγ = t·sinγ;
since cos²γ + sin²γ = 1, t²·sin²γ + sin²γ = 1;
sin²γ = 1/(1+t²); for γ ∈ [0, π], sinγ ≥ 0, so sinγ = √(1/(1+t²));
Finally, the direction angle gamma is obtained.
3. A method of positioning based on image recognition, characterized in that: a first mobile phone is at point O and a second mobile phone is at point C; DEF is a line segment object in the positioning space, whose segments DE and EF are not on the same straight line; the first mobile phone searches for the direction angle γ toward the second mobile phone's position; the second mobile phone photographs the positioning space and obtains an image of the line segment object DEF and the direction angle at which DEF was photographed, the direction toward DEF at that moment being perpendicular to the screen of the second mobile phone, i.e. the image of DEF falling at the center of the screen of the second mobile phone; the angle between the direction toward DEF and the reference line, i.e. the direction angle at which the image was taken, is obtained from the direction sensor in the phone, and the captured image and its direction angle are uploaded to the server; the first mobile phone photographs the positioning space to find the second mobile phone, likewise obtains an image of the line segment object DEF and its direction angle, and uploads them to the server; the server computes the unique direction angle γ of OC by geometry from the segment ratios and direction angles of the images of the same line segment object DEF at the two locations, in the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) The method comprises the steps that a line segment DE and a line segment EF of a line segment object DEF are not on the same straight line, a vertical projection image D1E1F1 of the line segment object DEF and an angle alpha 3 between the vertical projection direction and a datum line are obtained at a position C, and the images D1E1F1 and alpha 3 are uploaded to a server;
(3) The position O acquires a vertical projection image D2E2F2 of the line segment object DEF and an angle beta 3 between the vertical projection direction and a reference line, and uploads the images D2E2F2 and beta 3 to a server;
(4) The server determines that the images D1E1F1 and the images D2E2F2 are images of line segments DEF through image recognition, and calculates and obtains the direction angle gamma of the OC straight line relative to the datum line through the ratio of D1E1/E1F1, the ratio of D2E2/E2F2, alpha 3 and beta 3.
4. A method of positioning based on image recognition, characterized in that: a first mobile phone is at point O and a second mobile phone is at point C; G is a three-dimensional object in the positioning space, with two faces S1 and S2; the first mobile phone searches for the direction angle γ toward the second mobile phone's position; the second mobile phone photographs the positioning space and obtains an image of the three-dimensional object G and the direction angle at which G was photographed, the direction toward G at that moment being perpendicular to the screen of the second mobile phone, i.e. the image of G falling at the center of the screen of the second mobile phone; the direction angle at which the image was taken is obtained from the direction sensor in the phone, and the captured image and its direction angle are uploaded to the server; the first mobile phone photographs the positioning space to find the second mobile phone, likewise obtains an image of the three-dimensional object G and its direction angle, and uploads them to the server; the server computes the unique direction angle γ of OC by geometry from the area ratio of the two faces and the direction angles of the images of the same three-dimensional object G at the two locations, in the following steps:
(1) Determining a unified datum line through the direction device of the client;
(2) The two faces of the three-dimensional object G have areas S1 and S2; at position C, a perpendicular projection image G1 of G and the angle alpha 4 between the projection direction and the datum line are acquired, and G1 and alpha 4 are uploaded to the server;
(3) At position O, a perpendicular projection image G2 of G and the angle beta 4 between the projection direction and the datum line are acquired, and G2 and beta 4 are uploaded to the server;
(4) The server determines through image recognition that images G1 and G2 are both images of the three-dimensional object G; in image G1 the image areas of faces S1 and S2 are SC1 and SC2 respectively, and in image G2 the image areas of faces S1 and S2 are SO1 and SO2 respectively; the direction angle gamma of the straight line OC relative to the datum line is then calculated from the ratios SC1/SC2 and SO1/SO2 together with alpha 4 and beta 4.
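One concrete way the server's "geometric mathematics" could proceed, sketched here under assumptions the claim does not state: the two faces are mutually perpendicular, an image area scales as S·cos(theta)·(f/r)² for a pinhole camera of focal length f, and each phone aims squarely at G so its sensor bearing equals its bearing to G. All function names are illustrative.

```python
import math

def view_angle(s1, s2, a1, a2):
    """Angle between the viewing direction and the normal of face S1, recovered
    from the apparent (image) areas a1, a2 of two perpendicular faces: the
    common factor cos*(f/r)^2 cancels in the ratio."""
    return math.atan2(s1 * a2, s2 * a1)

def sight_range(s_true, a_img, theta, f=1.0):
    """Distance to G, inverting the pinhole scaling a_img = S * cos(theta) * (f/r)^2."""
    return f * math.sqrt(s_true * math.cos(theta) / a_img)

def gamma_oc(s1, s2, sc1, sc2, so1, so2, alpha4, beta4, f=1.0):
    """Direction angle of line OC relative to the datum line (taken as the
    x-axis), with G placed at the origin; each phone aims at G, so its sensor
    bearing is also its bearing toward G."""
    rc = sight_range(s1, sc1, view_angle(s1, s2, sc1, sc2), f)
    ro = sight_range(s1, so1, view_angle(s1, s2, so1, so2), f)
    cx, cy = -rc * math.cos(alpha4), -rc * math.sin(alpha4)  # position of C
    ox, oy = -ro * math.cos(beta4), -ro * math.sin(beta4)    # position of O
    return math.atan2(cy - oy, cx - ox)
```

The area ratio fixes each phone's viewing angle toward G, the absolute image area fixes its range, and the two recovered positions yield gamma directly.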
5. A method of image-recognition-based positioning according to claim 1, 2, 3 or 4, further comprising the steps of: the client acquires the direction angle gamma calculated by the server and performs positioning or navigation on a guide map on the client according to the direction angle gamma.
6. A method of image-recognition-based positioning according to claim 1, 2, 3 or 4, further comprising the steps of: the client acquires the direction angle gamma calculated by the server, opens its camera to capture a live-scene image, and displays a guide icon for the direction angle gamma superimposed on the live-scene image.
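A minimal sketch of placing the claim-6 guide icon on screen, assuming a pinhole camera with a known horizontal field of view; the function name and parameters are illustrative, not from the patent:

```python
import math

def icon_x(gamma_deg, heading_deg, fov_deg=60.0, width_px=1080):
    """Horizontal pixel at which the gamma guide icon lands, for a camera whose
    optical axis points at heading_deg; returns None when gamma lies outside
    the horizontal field of view."""
    # signed angular offset of gamma from the optical axis, wrapped to (-180, 180]
    delta = (gamma_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(delta) >= fov_deg / 2.0:
        return None
    half = math.tan(math.radians(fov_deg / 2.0))
    return width_px / 2.0 * (1.0 + math.tan(math.radians(delta)) / half)
```

When gamma equals the current heading the icon sits at screen center; as the user rotates the phone, delta and hence the icon position update from the direction sensor.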
7. A method of image-recognition-based positioning according to claim 1, 2, 3 or 4, characterized in that: the direction angle gamma guides toward a friend, a business, a game target, a red packet or an advertisement.
8. The method of image-recognition-based positioning according to claim 7, wherein the red packet is issued through the following steps:
(1) client 1 starts the red-packet issuing program, photographs a surrounding image, synchronously acquires the direction angle, and uploads both to the server;
(2) the server receives the surrounding image photographed by client 1 and its direction angle;
(3) the server generates a virtual red-packet image according to the red-packet settings of client 1 and associates it with the image and direction angle received in step (2), wherein the red-packet action assignment includes the direction angle, the horizontal inclination angle, the vertical height and changes of these assigned values;
(4) client 2 photographs a surrounding image, synchronously acquires the direction angle, and uploads both to the server;
(5) from the image and direction angle of client 1 received in step (2) and the image and direction angle of client 2 received in step (4), the server calculates the position where client 1 released the red packet, i.e. the position of the red packet, and pushes the virtual red-packet image to client 2;
(6) client 2 displays the virtual red-packet image superimposed on the live scene on its screen according to the red-packet position, the image moving according to the action assignment;
(7) client 2 obtains the red packet through the touch screen.
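The associate-and-match flow of steps (1) to (7) can be sketched as a toy server. A real implementation would match scenes by image recognition rather than by image identifier; all class and field names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RedPacket:
    owner: str            # client 1, the issuer
    image_id: str         # surrounding image uploaded in step (1)
    direction_deg: float  # direction angle captured with the image
    tilt_deg: float = 0.0
    height_m: float = 1.5
    motion: str = "bob"   # action assignment driving the animation

class RedPacketServer:
    def __init__(self):
        self.packets = []

    def publish(self, owner, image_id, direction_deg, **kw):
        # step (3): associate the virtual packet with the image and its angle
        self.packets.append(RedPacket(owner, image_id, direction_deg, **kw))

    def match(self, image_id, direction_deg, tol_deg=10.0):
        # steps (4)-(5): return packets visible from the querying client's
        # scene; angle difference is wrapped so 359 and 1 degree are close
        return [p for p in self.packets
                if p.image_id == image_id
                and abs((p.direction_deg - direction_deg + 180) % 360 - 180) <= tol_deg]
```

Client 2's upload in step (4) maps to `match`; a hit triggers the push of step (5), after which the client overlays and animates the packet locally per steps (6) and (7).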
9. The method of image-recognition-based positioning according to claim 7, wherein
the server issues an advertisement through the following steps:
(1) the server receives a surrounding image photographed by any client and its direction angle;
(2) the server generates a virtual advertisement image according to the advertiser's settings and associates it with the image and direction angle received in step (1), the advertiser's settings including the placement position, text, images and image actions;
(3) the advertisement-recipient client photographs a surrounding image, synchronously acquires the direction angle, and uploads both to the server;
(4) from the image and direction angle received in step (1) and the image and direction angle uploaded by the advertisement-recipient client in step (3), the server calculates the advertisement placement position;
(5) the advertisement-recipient client displays the virtual advertisement image superimposed on the live scene on its screen according to the placement position, the image moving according to the action assignment;
(6) the advertisement-recipient client obtains the advertisement content through the touch screen, the content including a link, a jump or a collection.
10. A method of image-recognition-based positioning according to claim 1, 2, 3 or 4, further comprising the steps of: determining the distance between the two feature objects, or the size of the line-segment object, or the size of the three-dimensional object, and calculating the length of the line connecting the two position points through trigonometric functions.
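A hedged sketch of claim 10's trigonometric step: range a landmark of known size from its angular size, then place the two positions relative to the common landmark. It assumes face-on viewing for the ranging formula and that both phones aim at the same landmark G; neither assumption is stated in the claim.

```python
import math

def range_from_size(true_len, angular_size_rad):
    """Angular-size ranging: a segment of known length true_len, viewed
    face-on, subtends 2*atan(true_len / (2r)) at distance r."""
    return true_len / (2.0 * math.tan(angular_size_rad / 2.0))

def dist_oc(r_o, bearing_o, r_c, bearing_c):
    """Distance between positions O and C when both sight the same landmark G;
    G is placed at the origin and each position lies opposite its bearing."""
    ox, oy = -r_o * math.cos(bearing_o), -r_o * math.sin(bearing_o)
    cx, cy = -r_c * math.cos(bearing_c), -r_c * math.sin(bearing_c)
    return math.hypot(cx - ox, cy - oy)
```

This is the complement to the direction-angle claims: once gamma fixes the direction of OC, the known object size fixes its scale.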
11. A system implementing the method of image-recognition-based positioning according to claim 1, 2, 3 or 4, characterized in that it comprises a server and a client,
the server comprising:
an image recognition unit, for determining a feature object, a line-segment object or a three-dimensional object;
a calculation unit, for calculating the direction angle of the line connecting the two position points from the direction angles of the two feature objects; or from the ratio of two line segments not on the same straight line and their direction angles; or from the area ratio of the two faces of the three-dimensional object and their direction angles;
a calculation-result pushing unit, for pushing the calculated direction-angle data of the line connecting the two position points to the client;
the client comprising at least a direction-acquisition unit, an image-superposition unit and a camera.
12. The system of the method of image-recognition-based positioning of claim 11, wherein: the system is embedded in an existing IM system, payment system or gaming system.
13. The system of the method of image-recognition-based positioning of claim 11, wherein: the client is a mobile phone or a tablet computer.
CN201710044706.1A 2017-01-21 2017-01-21 Positioning and AR methods and systems and applications based on image recognition Active CN106846311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710044706.1A CN106846311B (en) 2017-01-21 2017-01-21 Positioning and AR methods and systems and applications based on image recognition


Publications (2)

Publication Number Publication Date
CN106846311A CN106846311A (en) 2017-06-13
CN106846311B true CN106846311B (en) 2023-10-13

Family

ID=59119469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710044706.1A Active CN106846311B (en) 2017-01-21 2017-01-21 Positioning and AR methods and systems and applications based on image recognition

Country Status (1)

Country Link
CN (1) CN106846311B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995097B (en) * 2017-11-22 2025-03-25 河南餐赞网络科技有限公司 A method and system for interactive AR red envelope
CN107885333A (en) * 2017-11-22 2018-04-06 吴东辉 Determine that its client corresponds to id AR method and systems based on mobile phone action recognition
CN109886191A (en) * 2019-02-20 2019-06-14 上海昊沧系统控制技术有限责任公司 A kind of identification property management reason method and system based on AR
CN112000100A (en) * 2020-08-26 2020-11-27 德鲁动力科技(海南)有限公司 Charging system and method for robot

Citations (24)

Publication number Priority date Publication date Assignee Title
CN101566898A (en) * 2009-06-03 2009-10-28 广东威创视讯科技股份有限公司 Positioning device of electronic display system and method
CN102445701A (en) * 2011-09-02 2012-05-09 无锡智感星际科技有限公司 Image position calibration method based on direction sensor and geomagnetic sensor
CN102467341A (en) * 2010-11-04 2012-05-23 Lg电子株式会社 Mobile terminal and method of controlling an image photographing therein
CN102829775A (en) * 2012-08-29 2012-12-19 成都理想境界科技有限公司 Indoor navigation method, systems and equipment
CN103064565A (en) * 2013-01-11 2013-04-24 海信集团有限公司 Positioning method and electronic device
CN103090846A (en) * 2013-01-15 2013-05-08 广州市盛光微电子有限公司 Distance measuring device, distance measuring system and distance measuring method
CN103105993A (en) * 2013-01-25 2013-05-15 腾讯科技(深圳)有限公司 Method and system for realizing interaction based on augmented reality technology
CN103134489A (en) * 2013-01-29 2013-06-05 北京凯华信业科贸有限责任公司 Method of conducting target location based on mobile terminal
CN103220415A (en) * 2013-03-28 2013-07-24 东软集团(上海)有限公司 One-to-one cellphone live-action position trailing method and system
CN103245337A (en) * 2012-02-14 2013-08-14 联想(北京)有限公司 Method for acquiring position of mobile terminal, mobile terminal and position detection system
CN103593658A (en) * 2013-11-22 2014-02-19 中国电子科技集团公司第三十八研究所 Three-dimensional space positioning system based on infrared image recognition
CN103699592A (en) * 2013-12-10 2014-04-02 天津三星通信技术研究有限公司 Video shooting positioning method for portable terminal and portable terminal
CN103761539A (en) * 2014-01-20 2014-04-30 北京大学 Indoor locating method based on environment characteristic objects
CN104021538A (en) * 2013-02-28 2014-09-03 株式会社理光 Object positioning method and device
KR20150028430A (en) * 2013-09-06 2015-03-16 주식회사 이리언스 Iris recognized system for automatically adjusting focusing of the iris and the method thereof
CN104422439A (en) * 2013-08-21 2015-03-18 希姆通信息技术(上海)有限公司 Navigation method, apparatus, server, navigation system and use method of system
JP2015076738A (en) * 2013-10-09 2015-04-20 カシオ計算機株式会社 Photographed image processing apparatus, photographed image processing method, and program
CN104572732A (en) * 2013-10-22 2015-04-29 腾讯科技(深圳)有限公司 Method and device for inquiring user identification and method and device for acquiring user identification
CN104571532A (en) * 2015-02-04 2015-04-29 网易有道信息技术(北京)有限公司 Method and device for realizing augmented reality or virtual reality
CN104748738A (en) * 2013-12-31 2015-07-01 深圳先进技术研究院 Indoor positioning navigation method and system
CN105320725A (en) * 2015-05-29 2016-02-10 杨振贤 Method and apparatus for acquiring geographic object in collection point image
CN105354296A (en) * 2015-10-31 2016-02-24 广东欧珀移动通信有限公司 Terminal positioning method and user terminal
CN105588543A (en) * 2014-10-22 2016-05-18 中兴通讯股份有限公司 Camera-based positioning method, device and positioning system
CN106230920A (en) * 2016-07-27 2016-12-14 吴东辉 A kind of method and system of AR

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP4914019B2 (en) * 2005-04-06 2012-04-11 キヤノン株式会社 Position and orientation measurement method and apparatus


Non-Patent Citations (1)

Title
Xu Tianshuai, Fang Sheng, Liu Tianchi. A mobile visual positioning algorithm for large buildings. Software Guide, 2015, pp. 71-75. *

Also Published As

Publication number Publication date
CN106846311A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
US11393173B2 (en) Mobile augmented reality system
US11789523B2 (en) Electronic device displays an image of an obstructed target
TWI675351B (en) User location location method and device based on augmented reality
CN107367262B (en) A UAV remote real-time positioning surveying and mapping display interconnected control method
CN106846311B (en) Positioning and AR methods and systems and applications based on image recognition
JP2021526680A (en) Self-supervised training for depth estimation system
KR102197615B1 (en) Method of providing augmented reality service and server for the providing augmented reality service
CN107976185A (en) A kind of alignment system and localization method and information service method based on Quick Response Code, gyroscope and accelerometer
EP3882846A1 (en) Method and device for collecting images of a scene for generating virtual reality data
CN112422653A (en) Scene information pushing method, system, storage medium and equipment based on location service
CN108932055B (en) Method and equipment for enhancing reality content
CN106840167B (en) Two-dimensional quantity calculation method for geographic position of target object based on street view map
WO2019127320A1 (en) Information processing method and apparatus, cloud processing device, and computer program product
TWM559036U (en) Markerless location based augmented reality system
US20240362857A1 (en) Depth Image Generation Using a Graphics Processor for Augmented Reality
CN107885333A (en) Determine that its client corresponds to id AR method and systems based on mobile phone action recognition
CN106487835A (en) A kind of information displaying method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building 918, Building 1, Wangfu Building, No. 6 Renmin East Road, Chongchuan District, Nantong City, Jiangsu Province, 226001

Applicant after: Wu Donghui

Address before: 226019 1-109, Science Park, 58 Chongchuan Road, Nantong City, Jiangsu Province

Applicant before: Wu Donghui

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250311

Address after: Room C0102, 3rd Floor, Room 301, No. 4 Tangdong East Road, Tianhe District, Guangzhou City, Guangdong Province 510665

Patentee after: Guangzhou Xianjian Electronic Technology Co.,Ltd.

Country or region after: China

Address before: Building 918, Building 1, Wangfu Building, No. 6 Renmin East Road, Chongchuan District, Nantong City, Jiangsu Province, 226001

Patentee before: Wu Donghui

Country or region before: China
