CN106846311B - Positioning and AR method and system based on image recognition and application - Google Patents


Info

Publication number
CN106846311B
Authority
CN
China
Prior art keywords
image
server
client
direction angle
sin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710044706.1A
Other languages
Chinese (zh)
Other versions
CN106846311A (en)
Inventor
吴东辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201710044706.1A priority Critical patent/CN106846311B/en
Publication of CN106846311A publication Critical patent/CN106846311A/en
Application granted granted Critical
Publication of CN106846311B publication Critical patent/CN106846311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image recognition and positioning, and in particular to a positioning and AR method, system and application based on image recognition. The method is characterized in that the position of a viewing point is determined either from the two viewing angles of at least two determined feature objects, or from one viewing angle of at least one determined object together with the ratio of line segments, or of areas, in that object as projected in the viewing direction. Beneficial effects: the invention provides an angle-based positioning method which can be applied to navigation positioning, AR game positioning and the like; positioning can be achieved in any unfamiliar environment; and the method is suitable for indoor positioning.

Description

Positioning and AR method and system based on image recognition and application
Technical Field
The invention relates to the field of image recognition and positioning, in particular to a positioning and AR method and system based on image recognition and application.
Background
At present, positioning navigation based on image recognition generally either compares a photo taken at the positioning point against a set of preset photos to obtain the position, or builds a 3D model and compares the photo taken at the positioning point against the 3D model to obtain the position. Both methods require a pre-built photo database or 3D model database, and positioning fails in any place for which no database exists.
Augmented reality (AR) is a technology that calculates the position and angle of the camera image in real time and overlays corresponding images, videos and 3D models; its goal is to fit a virtual world around the real world on the screen and let the user interact with it.
When AR technology is applied to positioning, and in particular to indoor positioning, only the angular relationships need be considered; the distance relationships can be ignored.
The invention builds on image recognition, including existing mature techniques such as feature recognition, object contour recognition, color recognition, action recognition, face recognition and character recognition; different environments adopt the corresponding recognition strategy.
Disclosure of Invention
The invention aims to provide an angle positioning method which can be applied to navigation positioning, AR game positioning and the like.
The idea of the invention is as follows: the position of a viewing point is determined either from the two viewing angles of two determined objects (feature objects), or from one viewing angle of one determined object together with the ratio of two line segments, or the ratio of the areas of two faces, of that object as projected in the viewing direction.
Referring to FIG. 2, the position of O is determined from the two viewing angles β1, β2 of two determined objects (feature objects) A and B.
Or, referring to FIG. 3, the position of O is determined from one viewing angle β3 of a determined object together with the ratio D2E2/E2F2 of the projections, in the viewing direction, of two line segments DE and EF in that object.
Or, referring to FIG. 4, the position of O is determined from one viewing angle β4 of a determined object together with the area ratio SO1/SO2 of the projections, in the viewing direction, of two faces S1 and S2 in that object.
A feature object is any object that can be identified by image recognition in the computer system, or an object manually designated in the computer system. A feature object need not be special: it can be any object in the positioning space, provided the computer system can determine it by image recognition. For example, if two mobile phones photograph the numeral "5" from different places and the computer system determines through image recognition (a digit-recognition strategy) that both phones photographed the same numeral "5", then that numeral "5" is a feature object. A feature object may also be a relatively stable, fixed object or pattern discovered and identified by the image recognition software of the computer system.
Specifically, two clients each capture images of their surroundings, together with the direction angles at the time of shooting, and send them to the server; the server determines through image recognition the images of an object photographed in common by the two clients, and calculates the direction angle of the line connecting the two clients from the differences between the images of the common object and their direction angles.
Further, the two position points each acquire the direction angles of at least two feature objects, and the direction angle of the line connecting the two position points is calculated from these direction angles.
Or, the two position points each acquire the ratio and direction angle of two line segments that are not on the same straight line, and the direction angle of the connecting line is calculated from this ratio and these direction angles.
Or, the two position points each acquire the area ratio and direction angle of two faces of the same object (the two faces not lying in the same plane), and the direction angle of the connecting line is calculated from this area ratio and these direction angles.
Further, a method and system are provided for fitting a virtual world around the real world on the screen and interacting with it (on-screen positioning of friends, merchants, game target objects, red packets and virtual advertisements). The system can be based on an autonomous instant messaging (IM) platform, offered through a third-party service (API), or embedded into an existing IM platform such as QQ, WeChat or Momo.
The technical scheme adopted by the invention is as follows:
A positioning method based on image recognition, characterized in that: the position of a viewing point is determined from the viewing angles of at least two determined objects (feature objects), or from at least one viewing angle of a determined object together with the line-segment ratio or area ratio of the projections, in the viewing direction, of two line segments or two faces in that object.
The positioning method based on image recognition is further characterized in that: the two position points each acquire the direction angles of at least two objects (feature objects), and the direction angle of the line connecting the two position points is calculated from these direction angles.
The positioning method based on image recognition is further characterized in that: the two position points each acquire the ratio and direction angle of two line segments that are not on the same straight line, and the direction angle of the connecting line is calculated from this ratio and these direction angles.
The positioning method based on image recognition is further characterized in that: the two position points each acquire the area ratio and direction angle of two faces of the same object (the two faces not lying in the same plane), and the direction angle of the connecting line is calculated from this area ratio and these direction angles.
The positioning method based on image recognition also comprises a client and a server, and is characterized by comprising the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) Position C acquires an image A1 of object (feature object) A and its angle α1 relative to the reference line; position C acquires an image B1 of object (feature object) B and its angle α2 relative to the reference line; and uploads images A1 and α1, and images B1 and α2, to the server;
(3) Position O acquires an image A2 of object (feature object) A and its angle β1 relative to the reference line; position O acquires an image B2 of object (feature object) B and its angle β2 relative to the reference line; and uploads images A2 and β1, and images B2 and β2, to the server;
(4) The server determines through image recognition that image A1 and image A2 are both images of object (feature object) A, and that image B1 and image B2 are both images of object (feature object) B, and calculates the angle γ of the straight line OC relative to the reference line from α1, α2, β1 and β2.
Namely: γ = F1(α1, α2, β1, β2), where F1 is a calculation function.
Of course, more objects (feature objects) may be acquired for calculation; combining the results benefits the invention.
Alternatively,
the positioning method based on image recognition also comprises a client and a server, and is characterized by comprising the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) Line segment DE and line segment EF of a line segment object DEF are not on the same straight line; position C acquires a perpendicular projection image D1E1F1 of the line segment object DEF and the angle α3 between the projection direction and the reference line, and uploads image D1E1F1 and α3 to the server;
(3) Position O acquires a perpendicular projection image D2E2F2 of the line segment object DEF and the angle β3 between the projection direction and the reference line, and uploads image D2E2F2 and β3 to the server;
(4) The server determines through image recognition that image D1E1F1 and image D2E2F2 are both images of the line segment object DEF, and calculates the angle γ of the straight line OC relative to the reference line from the ratio D1E1/E1F1, the ratio D2E2/E2F2, α3 and β3.
Namely: γ = F2(D1E1/E1F1, α3, D2E2/E2F2, β3), where F2 is a calculation function.
Of course, more line segments may be acquired for calculation; combining the results benefits the invention.
Alternatively,
the positioning method based on image recognition also comprises a client and a server, and is characterized by comprising the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) The areas of two faces of a three-dimensional object G are S1 and S2; position C acquires a perpendicular projection image G1 of the object G and the angle α4 between the projection direction and the reference line, and uploads image G1 and α4 to the server;
(3) Position O acquires a perpendicular projection image G2 of the object G and the angle β4 between the projection direction and the reference line, and uploads image G2 and β4 to the server;
(4) The server determines through image recognition that image G1 and image G2 are both images of the three-dimensional object G; in image G1, SC1 is the imaged area of face S1 and SC2 is the imaged area of face S2, while in image G2, SO1 is the imaged area of face S1 and SO2 is the imaged area of face S2; the angle γ of the straight line OC relative to the reference line is then calculated from the ratio SC1/SC2, the ratio SO1/SO2, α4 and β4.
Namely: γ = F3(SC1/SC2, α4, SO1/SO2, β4), where F3 is a calculation function.
Of course, more three-dimensional objects may be acquired for calculation; combining the results benefits the invention.
Of course, the above feature objects, line segment objects and three-dimensional objects may also be combined in one comprehensive calculation.
Further, the image recognition-based positioning method is characterized by comprising the following steps:
The client acquires the angle γ calculated by the server and, guided by the angle γ, positions or navigates on the map displayed on the client.
Or, the image recognition-based positioning method is further characterized by comprising the following steps: the client acquires the angle γ calculated by the server, opens the camera to capture a live-action image, and superimposes a guide icon for the angle γ on the live-action image.
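As a sketch of how the γ-guided icon could be oriented on screen, the icon angle can be taken as the difference between γ and the device's current compass heading (the function name and the clockwise-degree convention below are our illustrative assumptions, not part of the patent):

```python
def arrow_angle_on_screen(gamma_deg: float, heading_deg: float) -> float:
    """Angle, in degrees clockwise from the top of the screen, at which to
    draw the guide icon: the difference between the direction angle gamma of
    the OC line and the device's current heading, both measured relative to
    the same datum line."""
    return (gamma_deg - heading_deg) % 360.0
```

For example, if γ = 10° and the phone currently faces 350°, the icon points 20° clockwise from straight ahead.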
Still further, the angles acquired by the client also include the horizontal inclination angle in the vertical direction, which is used to determine the direction angle in the vertical plane (provable by the same reasoning), so that, combined with the direction angle in the horizontal plane, the direction angle in three-dimensional space is determined.
An AR method based on image-recognition positioning, characterized in that: the target guided by the angle γ is a friend, a merchant, a game target object, a red packet or an advertisement.
The AR method based on image-recognition positioning is characterized by comprising the following steps:
(1) client 1 starts the red packet release program, shoots surrounding images, synchronously acquires the direction angles, and uploads them to the server;
(2) the server obtains the peripheral images shot by client 1 and their direction angles;
(3) the server generates a virtual red packet image according to the red packet settings of client 1 and associates it with the images and direction angles obtained in step (2); the red packet action assignment includes the direction angle, the horizontal inclination angle, the vertical height, and changes of these assignments;
(4) client 2 shoots its peripheral images, synchronously acquires the direction angles, and uploads them to the server;
(5) the server calculates the position where client 1 released the red packet, i.e. the position of the red packet, from the peripheral images and direction angles of client 1 obtained in step (2) and the peripheral images and direction angles of client 2 obtained in step (4), and pushes the virtual red packet image to client 2;
(6) client 2 superimposes the virtual red packet image on the live-action scene on its screen according to the red packet position, and the virtual red packet image moves according to its action assignment;
(7) client 2 grabs the red packet through the touch screen.
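The red-packet flow above can be sketched as a minimal in-memory server model. All class, method and parameter names here are our illustrative assumptions; a real implementation would compute the red packet's position with the γ calculation described in this patent rather than the simple field-of-view test used below:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RedPacket:
    owner: str                # client that released the packet
    direction_angle: float    # degrees relative to the unified datum line
    tilt: float = 0.0         # horizontal inclination angle
    height: float = 0.0      # vertical height of the virtual image
    claimed_by: Optional[str] = None

class RedPacketServer:
    def __init__(self) -> None:
        self.packets: List[RedPacket] = []

    def publish(self, owner: str, direction_angle: float) -> RedPacket:
        # steps (1)-(3): client 1 uploads its surroundings and angle;
        # the server stores the virtual red packet with its assignments
        packet = RedPacket(owner, direction_angle)
        self.packets.append(packet)
        return packet

    def visible_to(self, viewer_angle: float, fov_deg: float = 30.0) -> List[RedPacket]:
        # steps (4)-(6): push unclaimed packets whose direction falls
        # inside the viewing client's field of view (wrap-safe difference)
        half = fov_deg / 2.0
        return [p for p in self.packets
                if p.claimed_by is None
                and abs((p.direction_angle - viewer_angle + 180.0) % 360.0 - 180.0) <= half]

    def claim(self, packet: RedPacket, user: str) -> bool:
        # step (7): first touch wins
        if packet.claimed_by is None:
            packet.claimed_by = user
            return True
        return False
```

A packet released at 40° is pushed to a viewer facing 45° (within a 30° field of view) but not to one facing 90°, and only the first claim succeeds.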
The AR method based on image-recognition positioning is characterized by comprising the following steps:
The steps for the server to issue an advertisement are as follows:
(1) the server obtains a peripheral image shot by any client and its direction angle;
(2) the server generates a virtual advertisement image according to the advertiser's settings and associates it with the image and direction angle obtained in step (1); the advertiser's settings include the placement position, text, images and image actions;
(3) the advertisement-recipient client shoots its peripheral image, synchronously acquires the direction angle, and uploads them to the server;
(4) the server calculates the advertisement placement position from the peripheral image and direction angle obtained in step (1) and the peripheral image and direction angle shot by the advertisement-recipient client in step (3);
(5) the advertisement-recipient client superimposes the virtual advertisement image on the live-action scene on its screen according to the placement position, and the virtual advertisement image moves according to its action assignment;
(6) the advertisement-recipient client obtains the advertisement content, such as links, jumps and collections, through the touch screen.
Further, the positioning method based on image recognition is characterized by further comprising: determining the distance between the two feature objects, or the size of the line segment object, or the size of the three-dimensional object, and calculating the distance between the two positions through trigonometric functions.
A system for image recognition based positioning, characterized by: comprising a server and a client side,
the server comprises:
an image recognition unit for determining feature objects, line segment objects or three-dimensional objects;
a calculation unit for calculating the direction angle of the connecting line of the two position points according to the direction angles of the two feature objects; or calculating the direction angle of the connecting line of the two position points according to the ratio of the two line segments which are not on the same straight line and the direction angle thereof; or calculating the direction angle of the connecting line of the two position points according to the area ratio of the two surfaces of the three-dimensional object and the direction angle thereof;
the calculation result pushing unit is used for pushing the calculated direction angle data of the connecting line of the two position points to the client;
the client comprises at least a direction acquisition unit, an image superposition unit and a camera.
The positioning system based on image recognition is characterized in that: the system is embedded in an existing IM system or payment system or gaming system.
The positioning system based on image recognition is characterized in that: the client is a mobile phone or a tablet computer.
The invention is described below in a concrete implementation based on a mobile phone. Current mobile phone sensors include an acceleration sensor, a direction sensor, a gyroscope, a thermometer and the like, and the obtainable information includes acceleration, magnetic field, rotation vector, gyroscope data, light, pressure, temperature, proximity and gravity.
The beneficial effects of the invention are as follows: the invention provides an angle-based positioning method which can be applied to navigation positioning, AR game positioning and the like; positioning can be achieved in any unfamiliar environment; and the method is suitable for indoor positioning.
Drawings
FIG. 1 is a schematic diagram of determining the positioning-point angle using two feature objects according to the present invention.
FIG. 2 is a geometric diagram of determining the positioning-point angle using two feature objects according to the present invention.
FIG. 3 is a schematic diagram of determining the positioning-point angle using a line segment object according to the present invention.
FIG. 4 is a schematic diagram of determining the positioning-point angle using a three-dimensional object according to the present invention.
FIG. 5 is a schematic interface diagram of AR positioning according to the present invention.
Fig. 6 is a hardware configuration diagram of the present invention.
Fig. 7 is a flow chart of the present invention.
FIG. 8 is a flow chart of the AR application of the present invention.
Fig. 9 shows an example of the calculation of the angle according to the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
FIG. 1 is a schematic diagram of determining the positioning-point angle using two feature objects. 101 is a mobile phone at point O; 102 is another mobile phone at point C; in the positioning space, 103 is feature object A and 104 is feature object B. A feature object is an object that the server can unambiguously determine through image recognition, or a manually set marker (the marker is recorded in the server for recognition, e.g. a determined advertising portrait, text, etc.); 105 is the server. Suppose phone 101 wants to find the position of phone 102, i.e. to find the direction angle of OC. First, phone 102 photographs the positioning space while rotating, for example shooting continuously through 360° along the horizontal while synchronously recording the direction angle and uploading to the server. It thereby necessarily obtains an image of feature object A and the direction angle (viewing angle) at which A was shot; at that moment the direction of feature object A is perpendicular to the screen of phone 102 (for simplicity only the horizontal plane is considered for now; the vertical plane is handled with inclination angles, and the calculation is the same), i.e. the image of A falls at the center of the screen of phone 102, giving the angle between the direction of A and the reference line (the reference line is a line of determined bearing; for simplicity the east-west direction, i.e. the WE or OX direction, is taken as the reference line in the figure). Feature object B is shot in the same way, and the captured images and their direction angles are uploaded to the server. Phone 101 likewise photographs the positioning space while rotating in order to find phone 102, also obtains images of the two feature objects and their direction angles, and uploads them to the server. The server then calculates, by geometry, the unique direction angle γ of OC (its included angle with reference line OX) from the direction angles of the two feature objects at the two places. Referring to FIG. 2, the geometry of determining the positioning-point angle using two feature objects is as follows:
(1) Determining a unified datum line through a direction device of the client;
(2) Acquiring an image A1 of a feature A and an angle alpha 1 relative to a reference line at a position C, acquiring an image B1 of a feature B and an angle alpha 2 relative to the reference line at the position C, and uploading the images A1 and alpha 1 and the images B1 and alpha 2 to a server;
(3) The position O acquires an image A2 of the feature A and an angle beta 1 relative to the datum line, the position O acquires an image B2 of the feature B and an angle beta 2 relative to the datum line, and the images A2 and beta 1 and the images B2 and beta 2 are uploaded to a server;
(4) The server determines that the image A1 and the image A2 are both images of the feature object a through image recognition, the server determines that the image B1 and the image B2 are both images of the feature object B through image recognition, and calculates and obtains an angle γ of the OC straight line relative to the reference line through α1, α2, β1, and β2.
Namely: γ = F1(α1, α2, β1, β2), where F1 is a calculation function.
Further, the distance between the two feature objects is determined, and the distance between the two mobile phones can be calculated through a trigonometric function.
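The trigonometric step can be sketched with the law of sines (the helper name below is ours): once the distance AB between the two feature objects is known, any side of the triangles used in FIG. 9 follows from the angle opposite to it.

```python
import math

def side_by_law_of_sines(known_side: float,
                         angle_opposite_wanted: float,
                         angle_opposite_known: float) -> float:
    """In any triangle a/sin(A) = b/sin(B); given one side and the two
    opposite angles (in radians), return the unknown side."""
    return known_side * math.sin(angle_opposite_wanted) / math.sin(angle_opposite_known)
```

For instance, in triangle ABC of FIG. 9, AC = side_by_law_of_sines(AB, δ−α2, α1+α2).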
A calculation procedure is provided below with reference to fig. 9:
In triangle ABC:
AB/sin(α1+α2) = AC/sin(δ−α2) = BC/sin(π−α1−δ) = BC/sin(α1+δ), giving AC = AB·sin(δ−α2)/sin(α1+α2);
In triangle AOB:
AB/sin(β1−β2) = OB/sin(π−δ−β1) = OB/sin(δ+β1) = OA/sin(δ+β2), giving OA = AB·sin(δ+β2)/sin(β1−β2);
In triangle AOC:
OA/sin(π−γ+α1) = OA/sin(γ−α1) = AC/sin(γ−β1);
Substituting: AB·sin(δ+β2)/[sin(β1−β2)·sin(γ−α1)] = AB·sin(δ−α2)/[sin(α1+α2)·sin(γ−β1)], so sin(γ−α1)/sin(γ−β1) = sin(α1+α2)·sin(δ+β2)/[sin(δ−α2)·sin(β1−β2)];
Let K = sin(α1+α2)·sin(δ+β2)/[sin(δ−α2)·sin(β1−β2)]; then sin(γ−α1)/sin(γ−β1) = K;
Expanding: sinγ·cosα1 − cosγ·sinα1 = K·sinγ·cosβ1 − K·cosγ·sinβ1;
(K·sinβ1 − sinα1)·cosγ = (K·cosβ1 − cosα1)·sinγ;
cosγ = sinγ·(K·cosβ1 − cosα1)/(K·sinβ1 − sinα1);
Let t = (K·cosβ1 − cosα1)/(K·sinβ1 − sinα1); then cosγ = t·sinγ;
Since cos²γ + sin²γ = 1, we have t²·sin²γ + sin²γ = 1;
so sin²γ = 1/(1+t²); for γ ∈ [0, π], sinγ ≥ 0, hence sinγ = √(1/(1+t²));
and finally the angle γ is obtained.
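The derivation can be checked numerically with a short Python sketch (function names are ours; δ, the direction angle of line AB relative to the datum line, is assumed known as in FIG. 9; all angles in radians):

```python
import math

def gamma_from_K(K: float, alpha1: float, beta1: float) -> float:
    """Solve sin(gamma - alpha1) / sin(gamma - beta1) = K for gamma in [0, pi]."""
    t = (K * math.cos(beta1) - math.cos(alpha1)) / (K * math.sin(beta1) - math.sin(alpha1))
    # cos(gamma) = t * sin(gamma) with sin(gamma) >= 0, i.e. cot(gamma) = t
    return math.atan2(1.0, t)

def gamma_angle(alpha1: float, alpha2: float,
                beta1: float, beta2: float, delta: float) -> float:
    """Direction angle gamma of the line OC relative to the datum line,
    following the derivation above."""
    K = (math.sin(alpha1 + alpha2) * math.sin(delta + beta2)) / \
        (math.sin(delta - alpha2) * math.sin(beta1 - beta2))
    return gamma_from_K(K, alpha1, beta1)
```

Because cot γ = t, the value γ = atan2(1, t) always lies in (0, π), which matches the condition sin γ ≥ 0 used in the derivation.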
FIG. 3 is a schematic diagram of determining the positioning-point angle using a line segment object. 101 is a mobile phone at point O; 102 is another mobile phone at point C. In the positioning space, DEF is a line segment object, with line segment DE and line segment EF not on the same straight line; in particular it can be a pair of line segments on some object, provided they can be identified and determined by computer software, i.e. unambiguously determined by the server through image recognition, or a manually set marker (recorded in the server for recognition); 105 is the server. Suppose phone 101 wants to find the position of phone 102, i.e. to find the direction angle of OC. First, phone 102 photographs the positioning space while rotating and obtains an image of the line segment object DEF and the direction angle (viewing angle) at which it was shot; at that moment the direction of DEF is perpendicular to the screen of phone 102 (for simplicity only the horizontal plane is considered for now; the vertical plane can be handled analogously), i.e. the image of DEF falls at the center of the screen of phone 102, giving the angle between DEF and the reference line (the reference line is a line of determined bearing; for simplicity the east-west direction, i.e. the WE or OX direction, is taken as the reference line in the figure); when shooting is complete, the captured image and its direction angle are uploaded to the server. Phone 101 likewise photographs the positioning space while rotating to find phone 102, also obtains an image of the line segment object DEF and its direction angle, and uploads them to the server. The server calculates, by geometry, the unique direction angle γ of OC (its included angle with reference line OX) from the line segment ratios and direction angles of the images of the same line segment object DEF at the two places. The steps are as follows:
(1) Determining a unified datum line through a direction device of the client;
(2) Line segment DE and line segment EF of a line segment object DEF are not on the same straight line; position C acquires a perpendicular projection image D1E1F1 of the line segment object DEF and the angle α3 between the projection direction and the reference line, and uploads image D1E1F1 and α3 to the server;
(3) Position O acquires a perpendicular projection image D2E2F2 of the line segment object DEF and the angle β3 between the projection direction and the reference line, and uploads image D2E2F2 and β3 to the server;
(4) The server determines through image recognition that image D1E1F1 and image D2E2F2 are both images of the line segment object DEF, and calculates the angle γ of the straight line OC relative to the reference line from the ratio D1E1/E1F1, the ratio D2E2/E2F2, α3 and β3.
Namely: γ = F2(D1E1/E1F1, α3, D2E2/E2F2, β3), where F2 is a calculation function.
A line segment object is to be understood as a pair of non-collinear line segments on an object.
Further, the size of the line segment object is determined, and the distance between the two mobile phones can be calculated through a trigonometric function.
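To illustrate how the observed line-segment ratio depends on the viewing direction, here is a toy horizontal-plane model (the function and its conventions are our assumptions): the image-plane projection of a segment is its component along the axis perpendicular to the viewing direction.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def projected_ratio(D: Point, E: Point, F: Point, theta: float) -> float:
    """Ratio of the image-plane projections of DE and EF when the segment
    object DEF is viewed along direction theta (radians) in the plane."""
    # axis of the image plane, perpendicular to the viewing direction
    ux, uy = -math.sin(theta), math.cos(theta)
    de = abs((E[0] - D[0]) * ux + (E[1] - D[1]) * uy)
    ef = abs((F[0] - E[0]) * ux + (F[1] - E[1]) * uy)
    return de / ef
```

With D=(0,0), E=(1,0), F=(1,1), viewing along θ = π/4 makes both projections equal (ratio 1), while θ = π/3 yields ratio √3; two viewpoints thus observe different ratios of the same physical segments, which is what the function F2 exploits.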
FIG. 4 is a schematic diagram of determining the positioning-point angle using a three-dimensional object. 101 is a mobile phone at point O; 102 is another mobile phone at point C. In the positioning space, G is a three-dimensional object whose two faces are S1 and S2; in particular it can be any object that can be identified and determined by computer software, i.e. unambiguously determined by the server through image recognition, or a manually set marker (recorded in the server for recognition); 105 is the server. Suppose phone 101 wants to find the position of phone 102, i.e. to find the direction angle of OC. First, phone 102 photographs the positioning space while rotating and obtains an image of the three-dimensional object G and the direction angle (viewing angle) at which it was shot; at that moment the direction of G is perpendicular to the screen of phone 102 (for simplicity only the horizontal plane is considered for now; the vertical plane is handled the same way), i.e. the image of G falls at the center of the screen of phone 102, giving the angle between the direction of G and the reference line (the reference line is a line of determined bearing; for simplicity the east-west direction, i.e. the WE or OX direction, is taken as the reference line in the figure); when shooting is complete, the captured image and its direction angle are uploaded to the server. Phone 101 likewise photographs the positioning space while rotating to find phone 102, also obtains an image of the three-dimensional object G and its direction angle, and uploads them to the server. The server calculates, by geometry, the unique direction angle γ of OC (its included angle with reference line OX) from the area ratios of the two faces and the direction angles of the images of the same three-dimensional object G at the two places. The steps are as follows:
(1) Determining a unified datum line through a direction device of the client;
(2) The areas of two surfaces of the three-dimensional object G are S1 and S2, a vertical projection image G1 of the three-dimensional object G and an angle alpha 4 between the vertical projection direction and a datum line are acquired at a position C, and the images G1 and alpha 4 are uploaded to a server;
(3) The position O acquires a vertical projection image G2 of the three-dimensional object G and an angle beta 4 between the vertical projection direction and a datum line, and uploads the images G2 and beta 4 to a server;
(4) The server determines through image recognition that image G1 and image G2 are both images of the solid object G; SC1 and SC2 are the image areas of faces S1 and S2 in image G1, and SO1 and SO2 are the image areas of faces S1 and S2 in image G2; the angle γ of the straight line OC relative to the datum line is calculated from the ratio SC1/SC2, the ratio SO1/SO2, α4 and β4.
Namely: γ = F3(SC1/SC2, α4, SO1/SO2, β4), where F3 is the calculation function.
Further, once the size of the three-dimensional object is determined, the distance between the two mobile phones can be calculated through trigonometric functions.
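The function F3 above is left unspecified in the text; one building block it needs is the projected face areas SC1, SC2 (and SO1, SO2). As a hedged sketch, assuming the recognizer returns each face's outline as pixel vertices (an assumption, not stated in the patent), the projected area and the face-area ratio can be computed with the shoelace formula:

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon from (x, y) pixel vertices."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def face_area_ratio(face1_vertices, face2_vertices):
    """Ratio of the two projected faces of solid object G as seen from one
    position, e.g. SC1/SC2 at point C (face outlines are assumed inputs)."""
    return polygon_area(face1_vertices) / polygon_area(face2_vertices)
```

The same helper serves both viewpoints: SC1/SC2 from image G1 and SO1/SO2 from image G2 are then fed, with α4 and β4, into whatever form F3 takes.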
Fig. 5 is an interface diagram of AR positioning according to the present invention: the screen of mobile phone 101 captures a live-action image of the positioning space containing feature objects A and B (and possibly a line-segment object or solid object); a dynamic indication icon 501 guides mobile phone 101 toward position C where mobile phone 102 is located, and 502 is the position cursor of mobile phone 101.
The above considers only calculation in the horizontal plane; in practical applications it can be combined with the vertical plane (provable by the same reasoning) to achieve 3D spatial positioning.
FIG. 6 is a hardware configuration diagram of the present invention, including a server and a client:
the server comprises: the image recognition unit is used for recognizing and determining feature objects, line segment objects, three-dimensional objects, figures and the like, and specific image recognition strategies comprise existing mature feature recognition, object contour recognition, color recognition, action recognition, face recognition, character recognition and the like, and different environments adopt corresponding recognition strategies.
And a calculation unit for angle calculation, such as trigonometric function calculation.
And the calculation result pushing unit pushes calculation result information or calculation result information images to the client.
The client is an existing mobile phone, comprising: a display screen and a camera;
an image superimposing unit for AR display;
a sensor unit, comprising a direction device for north-south direction measurement, a (two-dimensional) level sensor for measuring the inclination angle (for horizontal positioning, or for vertical direction-angle measurement), and a gyroscope for motion measurement;
a network communication unit including a WIFI network or a wireless communication network (2G, 3G, 4G, etc.);
and GPS and LBS positioning units for geographic positioning, combined with the present invention to implement a comprehensive positioning strategy.
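Each client capture pairs a photograph with the synchronously sampled sensor readings before upload. A minimal sketch of such an upload record follows; the field names are illustrative assumptions, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class CaptureUpload:
    """One client capture: image plus synchronously sampled sensor readings."""
    client_id: str
    image: bytes            # camera frame, e.g. JPEG bytes
    direction_angle: float  # radians from the agreed datum line (direction device)
    horizontal_tilt: float  # level-sensor inclination, optional refinement
    timestamp: float        # capture time, lets the server pair uploads

def make_upload(client_id, image, direction_angle, horizontal_tilt, timestamp):
    """Assemble the record a client would send to the server."""
    return CaptureUpload(client_id, image, direction_angle, horizontal_tilt, timestamp)
```

The server side would run image recognition on `image` and use `direction_angle` (and optionally `horizontal_tilt`) in the angle calculation.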
FIG. 7 is a flow chart of the present invention comprising the steps of:
client 1:
determining a datum line: a unified datum line of the positioning system is determined using the client's direction device; the specific direction is not limited, but once the datum line is determined, all clients and the server use it as the angle-measurement datum line;
the client photographs the instant scene, synchronously records the direction angle (the direction device is called synchronously with the camera), and uploads them to the server; the horizontal inclination (one- or two-dimensional) of the level device can further be called synchronously;
client 2:
determining a datum line: a unified datum line of the positioning system is determined using the client's direction device; the specific direction is not limited, but once the datum line is determined, all clients and the server use it as the angle-measurement datum line;
the client photographs the instant scene, synchronously records the direction angle (the direction device is called synchronously with the camera), and uploads them to the server; the horizontal inclination (one- or two-dimensional) of the level device can further be called synchronously;
and (3) a server:
at least two feature objects are determined, the server acquires the image uploaded by the client and then carries out image recognition, and the feature objects are determined according to a recognition strategy;
the angle of the connection line of the client 1 and the client 2 relative to the reference line is calculated according to the angle of the client 1 and the two features relative to the reference line and the angle of the client 2 and the two features relative to the reference line.
Or alternatively,
at least one line segment object DEF is determined, and a line segment DE and a line segment EF of the line segment object DEF are not on the same straight line;
and calculating the angle of the line connecting client 1 and client 2 relative to the datum line according to the angle of client 1 to the line-segment object DEF relative to the datum line and the DE/EF ratio in the image acquired by client 1, together with the angle of client 2 to the line-segment object DEF relative to the datum line and the DE/EF ratio in the image acquired by client 2.
Or alternatively,
determining at least one solid object;
the angle of the connection line between the client 1 and the client 2 relative to the reference line is calculated according to the angle of the client 1 and the three-dimensional object relative to the reference line and the area ratio of the two surfaces of the three-dimensional object image acquired by the client 1, and the angle of the client 2 and the three-dimensional object relative to the reference line and the area ratio of the two surfaces of the three-dimensional object image acquired by the client 2.
The server sends the position angle information of the client 2 to the client 1, and the client 1 superimposes the position icon and the live-action;
the server sends the position angle information of the client 1 to the client 2, and the client 2 superimposes the position icon and the live-action;
FIG. 8 is a flow chart of the AR application of the present invention, comprising the steps of:
client 1:
determining a datum line: a unified datum line of the positioning system is determined using the client's direction device; the specific direction is not limited, but once the datum line is determined, all clients and the server use it as the angle-measurement datum line;
the client photographs the instant scene, synchronously records the direction angle (the direction device is called synchronously with the camera), and uploads them to the server; the horizontal inclination (one- or two-dimensional) of the level device can further be called synchronously;
client 2:
determining a datum line: a unified datum line of the positioning system is determined using the client's direction device; the specific direction is not limited, but once the datum line is determined, all clients and the server use it as the angle-measurement datum line;
the client photographs the instant scene, synchronously records the direction angle (the direction device is called synchronously with the camera), and uploads them to the server; the horizontal inclination (one- or two-dimensional) of the level device can further be called synchronously;
and (3) a server:
at least two feature objects are determined, the server acquires the image uploaded by the client and then carries out image recognition, and the feature objects are determined according to a recognition strategy;
the angle of the connection line of the client 1 and the client 2 relative to the reference line is calculated according to the angle of the client 1 and the two features relative to the reference line and the angle of the client 2 and the two features relative to the reference line.
Or alternatively,
at least one line segment object DEF is determined, and a line segment DE and a line segment EF of the line segment object DEF are not on the same straight line;
and calculating the angle of the line connecting client 1 and client 2 relative to the datum line according to the angle of client 1 to the line-segment object DEF relative to the datum line and the DE/EF ratio in the image acquired by client 1, together with the angle of client 2 to the line-segment object DEF relative to the datum line and the DE/EF ratio in the image acquired by client 2.
Or alternatively,
determining at least one solid object;
the angle of the connection line between the client 1 and the client 2 relative to the reference line is calculated according to the angle of the client 1 and the three-dimensional object relative to the reference line and the area ratio of the two surfaces of the three-dimensional object image acquired by the client 1, and the angle of the client 2 and the three-dimensional object relative to the reference line and the area ratio of the two surfaces of the three-dimensional object image acquired by the client 2.
The server locates the AR red packet and pushes the AR red packet to the client;
or alternatively,
the server locates the virtual advertisement and pushes the virtual advertisement to the client;
or alternatively,
the server positions the AR game objects and pushes the AR game objects to the client;
the client displays the AR image in an overlaid manner in the real scene of the client screen.
The specific application mode of the positioning and AR method based on image recognition is as follows:
(1) Instant messaging clients are widely used at present, yet friends in the virtual space may never have met face to face. Embedded in existing instant messaging software such as QQ and WeChat, the method lets two friends open their cameras anywhere to search for each other, with mutual indication icons displayed on the phone screens. When the other party is found, a virtual image such as an avatar can be superimposed on the live scene of the screen, together with an audio or vibration prompt.
(2) One party first photographs the positioning-space image and uploads it to the server, and the other party photographs the positioning space to search within a certain effective time range.
(3) One party is a merchant: the merchant photographs the positioning-space image and uploads it to the server, and a customer photographs the positioning space to find the merchant.
(4) In combination with the AR method and system of Chinese patent application 2016105984877, red packets or virtual advertisements are positioned. The virtual image contains position and direction parameters, the client hardware comprises position and direction sensors, and the virtual image displayed by the client changes as the direction of the client hardware changes, i.e., a VR technique.
The method comprises the following specific steps:
(1) the client 1 starts a red packet issuing program, shoots surrounding images and synchronously acquires direction angles to upload to a server;
(2) the server acquires a peripheral image shot by the client 1 and a direction angle thereof;
(3) the server generates a virtual red-packet image according to the red-packet settings of the client 1 and associates it with the image and direction angle acquired by the server in step (2), wherein the red-packet motion assignment comprises the direction angle, the horizontal inclination angle, the vertical height, and the changes thereof;
(4) the client 2 shoots the peripheral image and synchronously acquires the direction angle and uploads the direction angle to the server;
(5) the server calculates and obtains the position where the client 1 releases the red packet, namely the position where the red packet is located, according to the peripheral image and the direction angle which are obtained by the client 1 and obtained by the step (2) and the peripheral image and the direction angle which are obtained by the step (4) and the client 2, and the server pushes the virtual red packet image to the client 2;
(6) the client 2 displays the virtual red package image in a screen real scene in a superposition manner according to the red package position, and meanwhile, the virtual red package image moves according to the action assignment;
(7) the client 2 acquires the red packet through the touch screen.
Of course, the virtual red envelope image may be replaced by a commercial advertisement or a virtual game image.
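The red-packet steps (1)-(7) above can be sketched as an in-memory exchange; the class, method names, and the field-of-view matching rule are hypothetical simplifications (the real server matches uploaded images as well as angles, and angle wrap-around is ignored here):

```python
import math

class RedPacketServer:
    """Minimal sketch of steps (1)-(7): store a red packet at the issuer's
    direction angle, then report it to a seeker whose view covers that angle."""

    def __init__(self):
        self.packets = []  # list of (direction_angle, payload)

    def place(self, direction_angle, payload):
        # Steps (1)-(3): associate the issuer's direction angle with the packet.
        self.packets.append((direction_angle, payload))

    def find_visible(self, seeker_angle, half_fov=math.radians(30)):
        # Steps (4)-(5): return packets within the seeker's field of view.
        return [payload for angle, payload in self.packets
                if abs(angle - seeker_angle) <= half_fov]
```

Steps (6)-(7), superimposing the packet image on the live scene and collecting it via the touch screen, happen on the client once `find_visible` returns a match.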
The server issues the advertisement steps as follows:
(1) the server acquires a peripheral image shot by any client and a direction angle thereof;
(2) the server generates a virtual advertisement image according to the advertiser's settings and associates it with the image and direction angle acquired by the server in step (1), wherein the advertiser's settings comprise the placement position, text, images, and image motions;
(3) the advertisement receptor client shoots the peripheral image and synchronously acquires the direction angle and uploads the direction angle to the server;
(4) the server calculates and obtains the advertisement putting position according to the peripheral image and the direction angle of the peripheral image shot by any client side obtained in the step (1) and the peripheral image and the direction angle of the peripheral image shot by the advertisement receiver client side in the step (3);
(5) the advertisement receptor client displays the virtual advertisement image in the screen live-action according to the advertisement putting position, and meanwhile, the virtual advertisement image moves according to the action assignment;
(6) the advertisement receiver client obtains advertisement content such as links, jumps, collections and the like through a touch screen.
(5) AR spatial positioning of game objects in a game system, such as virtual animals, virtual objects, or virtual babies in a game. In an embodiment, after the mobile phone at position O rotates to photograph the positioning space, an AR red packet, virtual object, or virtual game object is placed at position O and made available for other mobile phone clients to search for within a certain period of time.
(6) Face recognition is now very mature, and scene recognition can further be combined with clothing colors and the like, so that both parties can obtain each other's position angle from the result of dynamic image recognition.
The above application modes and rules do not limit the basic features of the method and system of the present invention, and do not limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (13)

1. A method of positioning based on image recognition, characterized by: a server and clients are provided, a first mobile phone is at point O, a second mobile phone is at point C, and at least two feature objects, object A and object B, are determined in the positioning space; the first mobile phone searches for the direction angle γ of the position of the second mobile phone; the second mobile phone photographs the positioning space while synchronously recording the direction angle and uploading it to the server, acquires an image of feature object A and the direction angle at which it was photographed, photographs feature object B in the same way, and uploads the photographed images and their direction angles to the server; the first mobile phone photographs the positioning space, likewise acquires images of the two feature objects and their direction angles, and uploads them to the server; the server calculates the unique direction angle γ of OC by geometric mathematics from the direction angles of the two feature objects at the two positions, with the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) Acquiring an image A1 of an object A and an angle alpha 1 relative to a reference line at a position C, acquiring an image B1 of an object B and an angle alpha 2 relative to the reference line at the position C, and uploading the images A1 and alpha 1 and the images B1 and alpha 2 to a server;
(3) The position O acquires an image A2 of the object A and an angle beta 1 relative to the datum line, the position O acquires an image B2 of the object B and an angle beta 2 relative to the datum line, and the images A2 and beta 1 and the images B2 and beta 2 are uploaded to a server;
(4) The server determines that the image A1 and the image A2 are both images of the object A through image recognition, the server determines that the image B1 and the image B2 are both images of the object B through image recognition, and the direction angle gamma of the OC straight line relative to the datum line is obtained through calculation of alpha 1, alpha 2, beta 1 and beta 2.
2. A method of image recognition based positioning according to claim 1, further characterized by calculating a direction angle γ:
in triangle ABC:
AB/sin(α1+α2) = AC/sin(δ-α2) = BC/sin(π-α1-δ) = BC/sin(α1+δ), from which AC = AB·sin(δ-α2)/sin(α1+α2);
in triangle AOB:
AB/sin(β1-β2) = OB/sin(π-δ-β1) = OB/sin(δ+β1) = OA/sin(δ+β2), from which OA = AB·sin(δ+β2)/sin(β1-β2);
in triangle AOC:
OA/sin(π-γ+α1) = OA/sin(γ-α1) = AC/sin(γ-β1);
substituting gives: AB·sin(δ+β2)·sin(γ-β1)/sin(β1-β2) = AB·sin(δ-α2)·sin(γ-α1)/sin(α1+α2), hence sin(γ-α1)/sin(γ-β1) = sin(α1+α2)·sin(δ+β2)/(sin(δ-α2)·sin(β1-β2));
let K = sin(α1+α2)·sin(δ+β2)/(sin(δ-α2)·sin(β1-β2)); then sin(γ-α1)/sin(γ-β1) = K;
sinγ·cosα1 - cosγ·sinα1 = K·sinγ·cosβ1 - K·cosγ·sinβ1;
(K·sinβ1 - sinα1)·cosγ = (K·cosβ1 - cosα1)·sinγ;
cosγ = sinγ·(K·cosβ1 - cosα1)/(K·sinβ1 - sinα1);
let t = (K·cosβ1 - cosα1)/(K·sinβ1 - sinα1); then cosγ = t·sinγ;
since cos²γ + sin²γ = 1, it follows that t²·sin²γ + sin²γ = 1;
sin²γ = 1/(1 + t²); γ ∈ [0, π], so sinγ ≥ 0 and sinγ = √(1/(1 + t²));
Finally, the direction angle gamma is obtained.
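The closed form above (K, then t, then γ) can be written directly in code. A hedged sketch, assuming all angles are in radians and δ is the known direction angle of the feature line AB relative to the datum line (as defined in the description):

```python
import math

def direction_angle_gamma(a1, a2, b1, b2, delta):
    """Direction angle gamma of line OC relative to the datum line,
    following the claim-2 derivation (a1, a2 = alpha1, alpha2 measured
    at C; b1, b2 = beta1, beta2 measured at O)."""
    K = (math.sin(a1 + a2) * math.sin(delta + b2)) / (
        math.sin(delta - a2) * math.sin(b1 - b2))
    t = (K * math.cos(b1) - math.cos(a1)) / (K * math.sin(b1) - math.sin(a1))
    # cos(gamma) = t * sin(gamma) with sin(gamma) >= 0 on [0, pi],
    # so gamma = atan2(1, t) selects the correct branch.
    return math.atan2(1.0, t)
```

Since `atan2(1, t)` always returns a value in (0, π), the result automatically satisfies the constraint sinγ ≥ 0 from the derivation.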
3. A method of positioning based on image recognition, characterized by: a first mobile phone is at point O, a second mobile phone is at point C, and DEF is a line-segment object in the positioning space whose segments DE and EF are not on the same straight line; the first mobile phone searches for the direction angle γ of the position of the second mobile phone; the second mobile phone photographs the positioning space and acquires an image of the line-segment object DEF and the direction angle at which it was photographed (at that moment the direction toward DEF is perpendicular to the screen of the second mobile phone, i.e., the image of DEF falls at the center of the screen, and the angle between that direction and the reference line, i.e., the direction angle when the image was taken, is obtained from the direction sensor in the phone), and uploads the photographed image and direction angle to the server; the first mobile phone photographs the positioning space to find the second mobile phone, likewise acquires an image of the line-segment object DEF and its direction angle, and uploads them to the server; the server calculates the unique direction angle γ of OC by geometric mathematics from the segment ratios and direction angles of the images of the same line-segment object DEF at the two positions, with the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) The method comprises the steps that a line segment DE and a line segment EF of a line segment object DEF are not on the same straight line, a vertical projection image D1E1F1 of the line segment object DEF and an angle alpha 3 between the vertical projection direction and a datum line are obtained at a position C, and the images D1E1F1 and alpha 3 are uploaded to a server;
(3) The position O acquires a vertical projection image D2E2F2 of the line segment object DEF and an angle beta 3 between the vertical projection direction and a reference line, and uploads the images D2E2F2 and beta 3 to a server;
(4) The server determines that the images D1E1F1 and the images D2E2F2 are images of line segments DEF through image recognition, and calculates and obtains the direction angle gamma of the OC straight line relative to the datum line through the ratio of D1E1/E1F1, the ratio of D2E2/E2F2, alpha 3 and beta 3.
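Step (4) needs the ratios D1E1/E1F1 and D2E2/E2F2. As a hedged sketch, assuming the recognizer returns the three projected endpoints D, E, F in pixel coordinates (an assumption, not stated in the claim), the ratio is a straightforward distance quotient:

```python
import math

def segment_ratio(d, e, f):
    """Ratio |DE| / |EF| from projected pixel coordinates (x, y) of D, E, F."""
    return math.dist(d, e) / math.dist(e, f)
```

The same helper yields D1E1/E1F1 from the image taken at C and D2E2/E2F2 from the image taken at O.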
4. A method of positioning based on image recognition, characterized by: a first mobile phone is at point O, a second mobile phone is at point C, and G is a solid object in the positioning space with two faces S1 and S2; the first mobile phone searches for the direction angle γ of the position of the second mobile phone; the second mobile phone photographs the positioning space and acquires an image of the solid object G and the direction angle at which it was photographed (at that moment the direction toward G is perpendicular to the screen of the second mobile phone, i.e., the image of G falls at the center of the screen, and the direction angle when the image was taken is obtained from the direction sensor in the phone), and uploads the photographed image and direction angle to the server; the first mobile phone photographs the positioning space to find the second mobile phone, likewise acquires an image of the solid object G and its direction angle, and uploads them to the server; the server calculates the unique direction angle γ of OC by geometric mathematics from the area ratios of the two faces of the images of the same solid object G at the two positions and their direction angles, with the following steps:
(1) Determining a unified datum line through a direction device of the client;
(2) The areas of two surfaces of the three-dimensional object G are S1 and S2, a vertical projection image G1 of the three-dimensional object G and an angle alpha 4 between the vertical projection direction and a datum line are acquired at a position C, and the images G1 and alpha 4 are uploaded to a server;
(3) The position O acquires a vertical projection image G2 of the three-dimensional object G and an angle beta 4 between the vertical projection direction and a datum line, and uploads the images G2 and beta 4 to a server;
(4) The server determines through image recognition that image G1 and image G2 are both images of the solid object G; SC1 and SC2 are the image areas of faces S1 and S2 in image G1, and SO1 and SO2 are the image areas of faces S1 and S2 in image G2; the direction angle γ of the straight line OC relative to the datum line is calculated from the ratio SC1/SC2, the ratio SO1/SO2, α4 and β4.
5. A method of image recognition based positioning according to claim 1 or 2 or 3 or 4, further comprising the steps of: the client acquires the direction angle γ calculated by the server and performs positioning or navigation guidance on the client's map according to the direction angle γ.
6. A method of image recognition based positioning according to claim 1 or 2 or 3 or 4, further comprising the steps of: the client acquires the direction angle gamma calculated by the server, the client opens the camera to acquire a live-action image, and the guide icon of the direction angle gamma is displayed in the live-action image in a superimposed mode.
7. A method of image recognition based positioning according to claim 1 or 2 or 3 or 4, characterized in that: the direction angle γ guides toward a friend, a merchant, a game target, a red packet, or an advertisement.
8. The method for positioning based on image recognition according to claim 7, wherein:
(1) the client 1 starts a red packet issuing program, shoots surrounding images and synchronously acquires direction angles to upload to a server;
(2) the server acquires a peripheral image shot by the client 1 and a direction angle thereof;
(3) the server generates a virtual red-packet image according to the red-packet settings of the client 1 and associates it with the image and direction angle acquired by the server in step (2), wherein the red-packet motion assignment comprises the direction angle, the horizontal inclination angle, the vertical height, and the changes thereof;
(4) the client 2 shoots the peripheral image and synchronously acquires the direction angle and uploads the direction angle to the server;
(5) the server calculates and obtains the position where the client 1 releases the red packet, namely the position where the red packet is located, according to the peripheral image and the direction angle which are obtained by the client 1 and obtained by the step (2) and the peripheral image and the direction angle which are obtained by the step (4) and the client 2, and the server pushes the virtual red packet image to the client 2;
(6) the client 2 displays the virtual red package image in a screen real scene in a superposition manner according to the red package position, and meanwhile, the virtual red package image moves according to the action assignment;
(7) the client 2 acquires the red packet through the touch screen.
9. The method for positioning based on image recognition according to claim 7, wherein:
the server issues the advertisement steps as follows:
(1) the server acquires a peripheral image shot by any client and a direction angle thereof;
(2) the server generates a virtual advertisement image according to the setting of an advertiser and associates the image and the direction angle thereof acquired by the server in the step (1), wherein the setting of the advertiser comprises a putting position, characters, images and image actions;
(3) the advertisement receptor client shoots the peripheral image and synchronously acquires the direction angle and uploads the direction angle to the server;
(4) the server calculates and obtains the advertisement putting position according to the peripheral image and the direction angle of the peripheral image shot by any client side obtained in the step (1) and the peripheral image and the direction angle of the peripheral image shot by the advertisement receiver client side in the step (3);
(5) the advertisement receptor client displays the virtual advertisement image in the screen live-action according to the advertisement putting position, and meanwhile, the virtual advertisement image moves according to the action assignment;
(6) the advertisement acceptor client obtains advertisement content through a touch screen, wherein the advertisement content comprises links, jumps or collections.
10. A method of image recognition based positioning according to claim 1 or 2 or 3 or 4, further comprising the steps of: determining the length of the line connecting the two feature objects, or the size of the line-segment object, or the size of the three-dimensional object, and calculating the distance between the two position points through trigonometric functions.
11. A system for a method of image recognition based positioning according to claim 1 or 2 or 3 or 4, characterized in that: comprising a server and a client side,
the server comprises:
the image recognition unit is used for determining a characteristic object, a line segment object or a three-dimensional object;
a calculation unit for calculating the direction angle of the connecting line of the two position points according to the direction angles of the two feature objects; or calculating the direction angle of the connecting line of the two position points according to the ratio of the two line segments which are not on the same straight line and the direction angle thereof; or calculating the direction angle of the connecting line of the two position points according to the area ratio of the two surfaces of the three-dimensional object and the direction angle thereof;
the calculation result pushing unit is used for pushing the calculated direction angle data of the connecting line of the two position points to the client;
the client comprises at least a direction acquisition unit, an image superposition unit and a camera.
12. The system of a method of image recognition based positioning of claim 11, wherein: the system is embedded in an existing IM system or payment system or gaming system.
13. The system of a method of image recognition based positioning of claim 11, wherein: the client is a mobile phone or a tablet computer.
CN201710044706.1A 2017-01-21 2017-01-21 Positioning and AR method and system based on image recognition and application Active CN106846311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710044706.1A CN106846311B (en) 2017-01-21 2017-01-21 Positioning and AR method and system based on image recognition and application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710044706.1A CN106846311B (en) 2017-01-21 2017-01-21 Positioning and AR method and system based on image recognition and application

Publications (2)

Publication Number Publication Date
CN106846311A CN106846311A (en) 2017-06-13
CN106846311B true CN106846311B (en) 2023-10-13

Family

ID=59119469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710044706.1A Active CN106846311B (en) 2017-01-21 2017-01-21 Positioning and AR method and system based on image recognition and application

Country Status (1)

Country Link
CN (1) CN106846311B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995097A (en) * 2017-11-22 2018-05-04 吴东辉 Method and system for interactive AR red packets
CN109886191A (en) * 2019-02-20 2019-06-14 上海昊沧系统控制技术有限责任公司 AR-based property management identification method and system
CN112000100A (en) * 2020-08-26 2020-11-27 德鲁动力科技(海南)有限公司 Charging system and method for robot

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566898A (en) * 2009-06-03 2009-10-28 广东威创视讯科技股份有限公司 Positioning device of electronic display system and method
CN102445701A (en) * 2011-09-02 2012-05-09 无锡智感星际科技有限公司 Method for demarcating image position based on direction sensor and geomagnetism sensor
CN102467341A (en) * 2010-11-04 2012-05-23 Lg电子株式会社 Mobile terminal and method of controlling an image photographing therein
CN102829775A (en) * 2012-08-29 2012-12-19 成都理想境界科技有限公司 Indoor navigation method, systems and equipment
CN103064565A (en) * 2013-01-11 2013-04-24 海信集团有限公司 Positioning method and electronic device
CN103090846A (en) * 2013-01-15 2013-05-08 广州市盛光微电子有限公司 Distance measuring device, distance measuring system and distance measuring method
CN103105993A (en) * 2013-01-25 2013-05-15 腾讯科技(深圳)有限公司 Method and system for realizing interaction based on augmented reality technology
CN103134489A (en) * 2013-01-29 2013-06-05 北京凯华信业科贸有限责任公司 Method of conducting target location based on mobile terminal
CN103220415A (en) * 2013-03-28 2013-07-24 东软集团(上海)有限公司 One-to-one cellphone live-action position trailing method and system
CN103245337A (en) * 2012-02-14 2013-08-14 联想(北京)有限公司 Method for acquiring position of mobile terminal, mobile terminal and position detection system
CN103593658A (en) * 2013-11-22 2014-02-19 中国电子科技集团公司第三十八研究所 Three-dimensional space positioning system based on infrared image recognition
CN103699592A (en) * 2013-12-10 2014-04-02 天津三星通信技术研究有限公司 Video shooting positioning method for portable terminal and portable terminal
CN103761539A (en) * 2014-01-20 2014-04-30 北京大学 Indoor locating method based on environment characteristic objects
CN104021538A (en) * 2013-02-28 2014-09-03 株式会社理光 Object positioning method and device
KR20150028430A (en) * 2013-09-06 2015-03-16 주식회사 이리언스 Iris recognized system for automatically adjusting focusing of the iris and the method thereof
CN104422439A (en) * 2013-08-21 2015-03-18 希姆通信息技术(上海)有限公司 Navigation method, apparatus, server, navigation system and use method of system
JP2015076738A (en) * 2013-10-09 2015-04-20 カシオ計算機株式会社 Photographed image processing apparatus, photographed image processing method, and program
CN104572732A (en) * 2013-10-22 2015-04-29 腾讯科技(深圳)有限公司 Method and device for inquiring user identification and method and device for acquiring user identification
CN104571532A (en) * 2015-02-04 2015-04-29 网易有道信息技术(北京)有限公司 Method and device for realizing augmented reality or virtual reality
CN104748738A (en) * 2013-12-31 2015-07-01 深圳先进技术研究院 Indoor positioning navigation method and system
CN105320725A (en) * 2015-05-29 2016-02-10 杨振贤 Method and apparatus for acquiring geographic object in collection point image
CN105354296A (en) * 2015-10-31 2016-02-24 广东欧珀移动通信有限公司 Terminal positioning method and user terminal
CN105588543A (en) * 2014-10-22 2016-05-18 中兴通讯股份有限公司 Camera-based positioning method, device and positioning system
CN106230920A (en) * 2016-07-27 2016-12-14 吴东辉 An AR method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4914019B2 (en) * 2005-04-06 2012-04-11 キヤノン株式会社 Position and orientation measurement method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xu Tianshuai, Fang Sheng, Liu Tianchi. A mobile visual positioning algorithm for large buildings. 《软件导刊》 (Software Guide), 2015, pp. 71-75. *

Also Published As

Publication number Publication date
CN106846311A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
US10976803B2 (en) Electronic device displays an image of an obstructed target
US10740975B2 (en) Mobile augmented reality system
CN104376118B (en) The outdoor moving augmented reality method of accurate interest point annotation based on panorama sketch
KR102038856B1 (en) System and method for creating an environment and for sharing a location based experience in an environment
TW201816720A (en) Method and device for positioning user position based on augmented reality
US20190088030A1 (en) Rendering virtual objects based on location data and image data
US20190356936A9 (en) System for georeferenced, geo-oriented realtime video streams
US20140300775A1 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
CN108154558B (en) Augmented reality method, device and system
US20120246223A1 (en) System and method for distributing virtual and augmented reality scenes through a social network
CN104335649A (en) Method and system for determining location and position of image matching-based smartphone
CN106846311B (en) Positioning and AR method and system based on image recognition and application
US10102675B2 (en) Method and technical equipment for determining a pose of a device
KR102197615B1 (en) Method of providing augmented reality service and server for the providing augmented reality service
CN110335351A (en) Multi-modal AR processing method, device, system, equipment and readable storage medium storing program for executing
CN106289180A (en) The computational methods of movement locus and device, terminal
CN112422653A (en) Scene information pushing method, system, storage medium and equipment based on location service
US10867220B2 (en) Systems and methods for generating composite sets of data from different sensors
CN108932055B (en) Method and equipment for enhancing reality content
EP3430591A1 (en) System for georeferenced, geo-oriented real time video streams
CN112788443A (en) Interaction method and system based on optical communication device
KR101601726B1 (en) Method and system for determining position and attitude of mobile terminal including multiple image acquisition devices
CN106840167B (en) Two-dimensional quantity calculation method for geographic position of target object based on street view map
WO2019127320A1 (en) Information processing method and apparatus, cloud processing device, and computer program product
CN112055034B (en) Interaction method and system based on optical communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building 918, Building 1, Wangfu Building, No. 6 Renmin East Road, Chongchuan District, Nantong City, Jiangsu Province, 226001

Applicant after: Wu Donghui

Address before: 226019 1-109, Science Park, 58 Chongchuan Road, Nantong City, Jiangsu Province

Applicant before: Wu Donghui

GR01 Patent grant