CN109029464A - Visual two-dimensional code indoor positioning method with self-designed characteristic graph - Google Patents
Visual two-dimensional code indoor positioning method with self-designed characteristic graph
- Publication number
- CN109029464A (application CN201810952600.6A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Abstract
The present invention relates to a visual two-dimensional-code indoor positioning method using self-designed feature patterns, and belongs to the field of indoor positioning technology. Feature patterns are designed according to the characteristics of SIFT, a two-dimensional code is constructed from the designed patterns, and the code is pasted at a known location. A camera captures and decodes the code, and the scaled orthographic projection iterative algorithm determines the relative position between the camera and the code. The content of the code is the code's own coordinates, so combining the decoded coordinates with the computed relative position yields the camera's position, and the positioning error is controlled through the iteration error when solving the equations. Compared with existing indoor positioning methods, the proposed method requires no base stations: only pasted two-dimensional codes. It thus eliminates the deployment, installation and powering of base stations, greatly reducing cost while guaranteeing positioning accuracy, and has great application potential.
Description
Technical field
The present invention relates to a visual two-dimensional-code indoor positioning method using self-designed feature patterns, and belongs to the field of indoor positioning technology.
Technical background
With the development of indoor positioning technology, more and more positioning schemes have been proposed. The relatively mature schemes at present include LED-based visible-light positioning systems; systems based on wireless transmitters such as WiFi, Bluetooth, RFID and UWB; inertial-navigation-based indoor positioning using IMU or MEMS sensors; and computer-vision positioning based on pictures or video. However, existing LED visible-light positioning systems require the LEDs to be modulated, so the equipment must be retrofitted and the site rewired. WiFi- and Bluetooth-based techniques suffer from large signal fluctuations and low positioning accuracy, and usually require additional base stations, raising cost. Inertial navigation accumulates error over time and cannot provide accurate positioning in the long term; it therefore usually has to be combined with other positioning methods and is difficult to use on its own.
In the field of image-based positioning, many methods are currently in use, such as feature-point matching with SIFT, SURF, BRISK or ORB, and contour algorithms such as halcon's shape-based matching. Contour algorithms compute the matching degree from edge gradient directions and gradient magnitudes, pre-building templates at every angle and scale according to a configured rotation step and zoom factor; this is an exhaustive and therefore quite time-consuming approach. Among the feature-point matching algorithms, SIFT matches the most feature points but is the slowest and has a relatively high mismatch rate; SURF is faster than SIFT but its matching accuracy drops markedly in scenes with strong illumination changes; ORB and BRISK are faster still but extract too few feature points, making them suitable mainly for scenes with little image change, and their positioning failure rate is high for widely varying scenes or discontinuous positioning scenes.
To address the problems of the above positioning schemes, this invention adopts a scheme based on the scale-invariant feature transform (SIFT) to reduce computation and mismatches as well as maintenance and system cost, and uses the scaled orthographic projection iterative algorithm (Pose from Orthography and Scaling with ITerations, POSIT) to compute position, controlling the positioning error through the iteration error when solving the equations, so as to achieve low-cost, high-accuracy positioning.
Summary of the invention
The purpose of the present invention is to solve the technical deficiencies of existing indoor positioning systems, namely the need to retrofit hardware, to deploy additional base stations and to power those base stations, and the inability to guarantee positioning accuracy, by proposing a visual two-dimensional-code indoor positioning method using self-designed feature patterns.
The core idea of the invention is: feature patterns are designed according to the characteristics of SIFT, and a two-dimensional code is constructed from the designed patterns and pasted at a known location; a camera captures and decodes the two-dimensional code, and the scaled orthographic projection iterative algorithm determines the relative position between the camera and the code; the content of the code is the code's own coordinates, and combining it with the computed relative position yields the camera's position, with the positioning error controlled through the iteration error when solving the equations.
This positioning method involves the following definitions:
Definition 1, image coordinate system: each digital image is represented as an M*N array (M rows, N columns); each element of the array, called a pixel, holds the grey value of the image point. A rectangular coordinate system is defined on the image in which the coordinates (u, v) of a pixel are its column and row numbers in the array, so (u, v) are image coordinates in units of pixels. Since (u, v) only give the row and column of the pixel in the array and carry no physical unit, image coordinates expressed in a physical unit (millimetres) are also established. For any pixel, the relationship between its two sets of coordinates is formula (1):

x = (u - u0) * dx,  y = (v - v0) * dy    (1)

In formula (1), (x, y) are the coordinates of the pixel in the image coordinate system in millimetres; (u0, v0) are the column and row of the array's centre point; dx and dy are the pixel pitches in the X and Y directions, in millimetres.
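As a minimal sketch (not part of the patent), formula (1) can be implemented directly. The principal point and resolution below are assumed values chosen only for illustration; the 3.17 micrometre pixel pitch echoes the embodiment.

```python
# Sketch of formula (1): converting a pixel index (u, v) into physical
# image-plane coordinates (x, y) in millimetres. (u0, v0) and the image
# resolution are illustrative assumptions, not values from the patent.

def pixel_to_image_mm(u, v, u0, v0, dx, dy):
    # Formula (1): x = (u - u0) * dx, y = (v - v0) * dy
    return (u - u0) * dx, (v - v0) * dy

# 3.17 um = 0.00317 mm pixel pitch, as in the embodiment.
x, y = pixel_to_image_mm(u=960, v=540, u0=640, v0=360, dx=0.00317, dy=0.00317)
```

The conversion is purely affine: it recentres the pixel index on the principal point and scales by the pixel pitch.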
Definition 2, world coordinate system: a reference coordinate system chosen in the environment to describe the position of the camera and of any object in the environment.
Definition 3, camera coordinate system: the origin of the camera coordinate system is the optical centre of the camera; its x- and y-axes are parallel to the X- and Y-axes of the image, and its z-axis is the camera's optical axis, perpendicular to the image plane. The rectangular coordinate system so formed is the camera coordinate system, and the intersection of the optical axis with the image plane is the origin of the image coordinate system.
A visual two-dimensional-code indoor positioning method using self-designed feature patterns comprises the following steps:
Step 1: design feature patterns according to the characteristics of SIFT and generate a two-dimensional code, specifically comprising the following steps:
Step 1.1: set up a gradient distribution histogram with a step of N degrees, i.e. 360/N directions each with a different gradient value, where 360 is exactly divisible by N and N ranges from 1° to 40°.
Step 1.2: generate four feature patterns with different gradient distributions according to the histogram produced in step 1.1.
The key points in designing a feature pattern are: the gradient variation inside the pattern must not be too large, yet the principal direction must be distinct; the histogram is built with an N-degree step over 360/N directions with different gradient values; the auxiliary directions account for 50% or more of the principal direction; and the gradient distribution histogram must not be completely symmetric, so that the pattern still has distinctive features after rotation. Here the principal direction is the bin of the gradient distribution histogram containing the gradient maximum, and an auxiliary direction is a direction whose gradient value exceeds 80% of the maximum gradient value.
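The histogram construction of step 1.1 and the principal/auxiliary-direction rule can be sketched as follows. This is an illustrative implementation under the stated 80% rule, with made-up gradient samples; it is not the patent's actual pattern-generation code.

```python
import math

def orientation_histogram(gradients, n_deg=10):
    # One bin per n_deg degrees; 360 must be divisible by n_deg (step 1.1).
    assert 360 % n_deg == 0 and 1 <= n_deg <= 40
    bins = [0.0] * (360 // n_deg)
    for gx, gy in gradients:
        angle = math.degrees(math.atan2(gy, gx)) % 360.0
        bins[int(angle // n_deg)] += math.hypot(gx, gy)  # magnitude-weighted
    return bins

def principal_and_auxiliary(bins):
    # Principal direction: tallest bin; auxiliary: bins above 80% of the peak.
    peak = max(bins)
    principal = bins.index(peak)
    auxiliary = [i for i, b in enumerate(bins) if b > 0.8 * peak and i != principal]
    return principal, auxiliary

# Invented gradient samples: a dominant horizontal direction plus one vertical.
grads = [(1.0, 0.0), (1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
bins = orientation_histogram(grads)
p, aux = principal_and_auxiliary(bins)
```

With a 10-degree step this gives 36 bins, matching the embodiment's histogram design.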
Step 1.3: construct the two-dimensional code from the feature patterns generated in step 1.2, specifically: the four feature patterns with different gradient distributions are placed at the four vertices as feature points, and content fields and check fields are inserted between the four patterns to form the two-dimensional code.
Step 2: paste the two-dimensional code at a fixed position with known world coordinates, thereby fixing the world coordinates of the four feature patterns on the code; photograph the code generated in step 1 with the camera and output a digital image of the code.
The output two-dimensional-code digital image contains the four feature patterns.
Step 3: identify the four feature patterns in the two-dimensional-code digital image output by step 2 using the SIFT algorithm, and extract the code content according to the sequence of the four patterns.
Specifically, identifying the four feature patterns of the digital image output by step 2 yields the pixel positions (u, v) of the four feature points in the digital image; formula (1) then gives the coordinates (x, y) of the four feature points in the image coordinate system, also called the coordinates in the code's digital image, and these coordinates are exactly the extracted code content.
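Step 3 extracts the content "according to the sequence of the four patterns", but the patent does not spell out how that sequence is fixed. One plausible convention, shown here purely as an assumption with invented coordinates, is to sort the detected pixel positions into top-left, top-right, bottom-right, bottom-left order:

```python
def order_corners(points):
    # Top-left minimises u+v, bottom-right maximises u+v;
    # top-right minimises v-u, bottom-left maximises v-u
    # (image convention: u grows rightwards, v grows downwards).
    s = sorted(points, key=lambda p: p[0] + p[1])
    d = sorted(points, key=lambda p: p[1] - p[0])
    return [s[0], d[0], s[-1], d[-1]]   # TL, TR, BR, BL

# Hypothetical detections of the four feature patterns, in any order.
pts = [(810, 400), (200, 420), (220, 900), (790, 910)]
ordered = order_corners(pts)
```

A fixed ordering like this makes the correspondence between detected points and the four known pattern positions unambiguous before the pose computation of step 4.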
Step 4: based on the feature patterns recognised in step 3, compute the three-dimensional coordinates of the camera position with the POSIT algorithm, specifically comprising the following steps:
Step 4.1: substitute the coordinates of the four feature patterns in the code's digital image, together with their coordinates in the camera coordinate system, into formulas (2), (3) and (4):
In formula (2), R is the rotation matrix, whose row vectors are R_i^H = [R_i1 R_i2 R_i3], i = 1, 2, 3, where R_i1, R_i2, R_i3 are the projections of the unit vectors along the three world-coordinate axes onto the X, Y and Z axes of the camera coordinate system, and the superscript H denotes matrix transposition. In formula (3), T is the translation matrix, and Tx, Ty, Tz are the coordinates of the world-coordinate origin in the camera coordinate system. In formula (4), x_i, y_i (i = 1, 2, 3, 4) are the coordinates of the four feature patterns in the image coordinate system in millimetres, which are known; Xc_i, Yc_i, Zc_i (i = 1, 2, 3, 4) are the coordinates of the four feature patterns in the camera coordinate system in millimetres, which are unknown; and f in formula (3) is the focal length of the camera in millimetres, which is known.
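For context, the projection relations of formulas (2) to (4) rest on the pinhole model x = f * Xc / Zc, which POSIT approximates by replacing each point's depth Zc with the common depth Tz. A minimal sketch follows; only f = 3.4 mm echoes the embodiment, and the 3-D point is invented.

```python
# Pinhole projection versus POSIT's scaled-orthographic approximation.
# All lengths in millimetres; the camera-frame point (Xc, Yc, Zc) is made up.

def perspective(f, Xc, Yc, Zc):
    # Exact pinhole projection onto the image plane.
    return f * Xc / Zc, f * Yc / Zc

def scaled_orthographic(f, Xc, Yc, Tz):
    # POSIT approximation: every point shares the common depth Tz.
    return f * Xc / Tz, f * Yc / Tz

x_p, y_p = perspective(3.4, 100.0, 50.0, 1700.0)
x_o, y_o = scaled_orthographic(3.4, 100.0, 50.0, 1700.0)
```

When the point lies exactly at depth Tz the two projections coincide; the iteration of the later steps corrects the residual difference for points off that plane.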
Step 4.2: let a point in the world coordinate system be (Xw, Yw, Zw); the coordinate transformation of formula (5) yields Zc·x, Zc·y and Zc:
In formula (5), R_i^H = [R_i1 R_i2 R_i3]; the last matrix on the right-hand side is of dimension 3*4 and the rightmost vector is 4*1, so the product is of dimension 3*1.
Step 4.3: divide the leftmost and rightmost members of formula (5) simultaneously by Tz to obtain formula (6), in which:
Step 4.4: approximate the expression for w in formula (7) by formula (8):
w = Zc/Tz ≈ 1    (8)
The approximation of formula (8) holds because the spread of the four feature points of the code along the camera's depth direction is negligible compared with the distance Tz between the code and the camera.
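The plausibility of formula (8) can be checked with the embodiment's own numbers: a 20 cm code viewed from roughly 1.7 m (2.7 m ceiling, receiver at 1 m). This is a back-of-envelope order-of-magnitude estimate, not a derivation from the patent.

```python
# Crude bound on the deviation of w = Zc/Tz from 1: the depth spread of
# the four feature points cannot exceed the code width, even if the code
# were fully tilted towards the camera. Numbers follow the embodiment.

tag_size = 0.20      # m, width of the two-dimensional code
distance = 1.7       # m, code-to-receiver distance (2.7 m ceiling - 1 m)
w_max_dev = tag_size / distance   # worst-case |w - 1|
```

Even in this worst case w stays within roughly 12% of 1, and for a code pasted flat on the ceiling and viewed from below the deviation is far smaller, justifying the initial w ≈ 1.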
Step 4.5: substitute formula (8) into formula (6) to obtain formula (9), simplify (9) into (10), and express the unknowns of (9) and (10) as the vectors in (11):
The eight unknowns in formulas (9) and (10), namely sR11, sR12, sR13, sTx, sR21, sR22, sR23 and sTy, are further expressed as three vectors, as in (11). Substituting the world coordinates of the four feature patterns obtained in step 2 and the coordinates of the four feature patterns in the code's digital image obtained in step 3 into formula (11) gives a system of eight independent equations; solving this system yields the values of the eight unknowns, that is:
w = Zc/Tz    (12)
Step 4.6: substitute the w obtained from formula (12) back into formula (11) and solve again, looping until the difference between the results of formula (12) in two successive iterations is smaller than a preset error threshold; this yields the translation matrix T = [Tx Ty Tz]^T.
Controlling the error of w, i.e. the difference between successive evaluations of formula (12), controls the error of the finally solved camera position, thereby achieving deliberate control of the positioning error.
Step 4.7: take the difference between the world coordinates of the four feature patterns on the code from step 2 and the translation matrix T = [Tx Ty Tz]^T solved in step 4.6, i.e. obtain the camera coordinates by formula (13):
In formula (13), (X, Y, Z) on the left-hand side are the camera coordinates.
At this point the resolution of the camera position is complete and the three-dimensional coordinates of the camera position have been obtained.
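The differencing of step 4.7 can be sketched as follows, under the simplifying assumption that the camera axes are aligned with the world axes (camera pointing straight up at the ceiling) so the difference is taken component-wise. All coordinate values are invented for illustration.

```python
# Formula (13), simplified: camera position = tag world coordinates
# minus the translation solved in step 4.6. Axis alignment is assumed;
# in general the rotation R would also enter.

tag_world = (3.0, 2.5, 2.7)    # m, world coordinates of the code on the ceiling
T = (0.45, -0.30, 1.70)        # m, translation solved by the iteration
camera = tuple(p - t for p, t in zip(tag_world, T))
```

Because the code content already carries the tag's world coordinates, no external database lookup is needed at this step.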
Beneficial effects
The present invention, a visual two-dimensional-code indoor positioning method using self-designed feature patterns, constructs a two-dimensional code from self-designed feature patterns, captures it with a camera and resolves the camera position. Compared with the prior art, it has the following beneficial effects:
1. Compared with existing indoor positioning methods, the proposed method needs no base stations, only pasted two-dimensional codes; it eliminates the deployment, installation and powering of base stations and greatly reduces cost;
2. The positioning error is controlled through the iteration error when solving the equations, so positioning accuracy can be guaranteed;
3. The method can be combined with modern mobile devices and has great application potential.
Brief description of the drawings
Fig. 1 is a flow chart of the visual two-dimensional-code indoor positioning method of the present invention and of the embodiment;
Fig. 2 shows the gradient distribution histograms of the self-designed feature patterns;
Fig. 3 is a schematic diagram of the self-designed feature patterns;
Fig. 4 shows the two-dimensional code constructed from the feature patterns;
Fig. 5 is a schematic diagram of the system in the embodiment;
Fig. 6 shows the positioning results of the embodiment.
Specific embodiments
The visual two-dimensional-code indoor positioning method of the present invention is described in detail below in combination with a specific embodiment.
Embodiment 1
This example describes a specific implementation of the visual two-dimensional-code indoor positioning method of the present invention.
First, the feature patterns are designed: gradient distribution histograms with a step of 10 degrees are designed for the four feature patterns. As shown in Fig. 2, each has one principal direction and, respectively, 2, 3, 4 and 5 auxiliary directions. The four feature patterns of Fig. 3 are generated from the histograms of Fig. 2, the two-dimensional code of Fig. 4 is then generated from the four patterns according to step 1.3, and the code is pasted on the ceiling.
Experiment scene: the experimental site is 6*7*2.7 m, with two two-dimensional codes placed on the ceiling; their positions are shown in Fig. 5. The receiver receives at 1 m above the ground, with focal length f = 3.4 mm, pixel pitch 3.17 μm and field of view ψ = 120°. The code width is 20.0 cm, the feature-spot diameter is 6.0 cm and each code cell is 3.0 cm wide. A sampling point is taken every 1 m in the receiving plane, sampled successively in the order (0.5, 0.5), (1.5, 0.5), .... Five photographs are taken at each sampling point and the best one is taken as the positioning result.
The steps shown in Fig. 1 are executed on the captured image: the two-dimensional code is obtained by reading the captured image, and steps 3 and 4 are performed on the obtained code, yielding the world coordinates of the feature patterns on the code and their coordinates in the code's digital image; these are substituted into step 4 and solved to obtain the positioning result shown in Fig. 6, from which the positioning error is seen to be 0.015 m or less.
In a concrete implementation, steps 3 and 4 can be carried out in parallel to save time, and the iteration of formula (12) stops when the absolute difference between two successive values of w is less than 1% of the current w.
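The embodiment's stopping rule, iterating until successive values of w differ by less than 1% of the current value, has the shape of a generic fixed-point loop. The sketch below uses a stand-in update function with a known fixed point; the real update is the equation-system solve of step 4.6, which is not reproduced here.

```python
def iterate_w(update, w0=1.0, rel_tol=0.01, max_iter=100):
    # Repeat the w update until two successive values differ by less than
    # rel_tol (1% in the embodiment) of the current value.
    w = w0
    for _ in range(max_iter):
        w_new = update(w)
        if abs(w_new - w) < rel_tol * abs(w_new):
            return w_new
        w = w_new
    return w

# Stand-in contraction with fixed point at w = 1.1 (NOT the patent's solve).
w_star = iterate_w(lambda w: 0.5 * w + 0.55)
```

Tightening rel_tol tightens the final position error at the cost of more iterations, which is exactly the error-control knob the method exposes.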
The embodiment described above is a preferred embodiment of the visual two-dimensional-code indoor positioning method of the present invention, but the invention should not be limited to what is disclosed in the embodiment and the drawings. The embodiment is only intended to help understand the method and core idea of the invention; those skilled in the art can make various deformations or modifications within the scope of the claims without affecting the essence of the invention.
For those of ordinary skill in the art, specific implementations and applications may vary according to the idea of the present invention, and the contents of this specification are not to be construed as limiting the invention. All equivalents or modifications completed without departing from the spirit disclosed by the present invention fall within the scope of protection of the invention.
Claims (3)
1. A visual two-dimensional-code indoor positioning method using self-designed feature patterns, characterised in that its core idea is: feature patterns are designed according to the characteristics of SIFT, and a two-dimensional code is constructed from the designed patterns and pasted at a known location; a camera captures and decodes the two-dimensional code, and the scaled orthographic projection iterative algorithm determines the relative position between the camera and the code; the content of the code is the code's own coordinates, and combining it with the computed relative position yields the camera's position, with the positioning error controlled through the iteration error when solving the equations;
This method involves the following definitions:
Definition 1, image coordinate system: each digital image is represented as an M*N array (M rows, N columns); each element of the array, called a pixel, holds the grey value of the image point; a rectangular coordinate system is defined on the image in which the coordinates (u, v) of a pixel are its column and row numbers in the array, so (u, v) are image coordinates in units of pixels; since (u, v) only give the row and column of the pixel in the array and carry no physical unit, image coordinates expressed in a physical unit (millimetres) are also established; for any pixel, the relationship between its two sets of coordinates is formula (1):
x = (u - u0) * dx,  y = (v - v0) * dy    (1)
In formula (1), (x, y) are the coordinates of the pixel in the image coordinate system in millimetres; (u0, v0) are the column and row of the array's centre point; dx and dy are the pixel pitches in the X and Y directions, in millimetres;
Definition 2, world coordinate system: a reference coordinate system chosen in the environment to describe the position of the camera and of any object in the environment;
Definition 3, camera coordinate system: the origin of the camera coordinate system is the optical centre of the camera; its x- and y-axes are parallel to the X- and Y-axes of the image, and its z-axis is the camera's optical axis, perpendicular to the image plane; the rectangular coordinate system so formed is the camera coordinate system, and the intersection of the optical axis with the image plane is the origin of the image coordinate system;
The method comprises the following steps:
Step 1: design feature patterns according to the characteristics of SIFT and generate a two-dimensional code, specifically comprising the following steps:
Step 1.1: set up a gradient distribution histogram with a step of N degrees, i.e. 360/N directions each with a different gradient value, where 360 is exactly divisible by N and N ranges from 1° to 40°;
Step 1.2: generate four feature patterns with different gradient distributions according to the histogram produced in step 1.1;
Step 1.3: construct the two-dimensional code from the feature patterns generated in step 1.2, specifically: the four feature patterns with different gradient distributions are placed at the four vertices as feature points, and content fields and check fields are inserted between the four patterns to form the two-dimensional code;
Step 2: paste the two-dimensional code at a fixed position with known world coordinates, thereby fixing the world coordinates of the four feature patterns on the code; photograph the code generated in step 1 with the camera and output a digital image of the code, which contains the four feature patterns;
Step 3: identify the four feature patterns in the two-dimensional-code digital image output by step 2 using the SIFT algorithm, and extract the code content according to the sequence of the four patterns;
Step 4: based on the feature patterns recognised in step 3, compute the three-dimensional coordinates of the camera position with the POSIT algorithm, specifically comprising the following steps:
Step 4.1: substitute the coordinates of the four feature patterns in the code's digital image, together with their coordinates in the camera coordinate system, into formulas (2), (3) and (4):
In formula (2), R is the rotation matrix, whose row vectors are R_i^H = [R_i1 R_i2 R_i3], i = 1, 2, 3, where R_i1, R_i2, R_i3 are the projections of the unit vectors along the three world-coordinate axes onto the X, Y and Z axes of the camera coordinate system, and the superscript H denotes matrix transposition; in formula (3), T is the translation matrix, and Tx, Ty, Tz are the coordinates of the world-coordinate origin in the camera coordinate system; in formula (4), x_i, y_i (i = 1, 2, 3, 4) are the coordinates of the four feature patterns in the image coordinate system in millimetres, which are known; Xc_i, Yc_i, Zc_i (i = 1, 2, 3, 4) are the coordinates of the four feature patterns in the camera coordinate system in millimetres, which are unknown; and f in formula (3) is the focal length of the camera in millimetres, which is known;
Step 4.2: let a point in the world coordinate system be (Xw, Yw, Zw); the coordinate transformation of formula (5) yields Zc·x, Zc·y and Zc:
In formula (5), R_i^H = [R_i1 R_i2 R_i3]; the last matrix on the right-hand side is of dimension 3*4 and the rightmost vector is 4*1, so the product is of dimension 3*1;
Step 4.3: divide the leftmost and rightmost members of formula (5) simultaneously by Tz to obtain formula (6), in which:
Step 4.4: approximate the expression for w in formula (7) by formula (8):
w = Zc/Tz ≈ 1    (8)
The approximation of formula (8) holds because the spread of the four feature points of the code along the camera's depth direction is negligible compared with the distance Tz between the code and the camera;
Step 4.5: substitute formula (8) into formula (6) to obtain formula (9), simplify (9) into (10), and express the eight unknowns of (9) and (10), namely sR11, sR12, sR13, sTx, sR21, sR22, sR23 and sTy, as the three vectors in (11); substituting the world coordinates of the four feature patterns obtained in step 2 and the coordinates of the four feature patterns in the code's digital image obtained in step 3 into formula (11) gives a system of eight independent equations; solving this system yields the values of the eight unknowns, that is:
w = Zc/Tz    (12)
Step 4.6: substitute the w obtained from formula (12) back into formula (11) and solve again, looping until the difference between the results of formula (12) in two successive iterations is smaller than a preset error threshold; this yields the translation matrix T = [Tx Ty Tz]^T; controlling the error of w controls the error of the finally solved camera position, thereby achieving deliberate control of the positioning error;
Step 4.7: take the difference between the world coordinates of the four feature patterns on the code from step 2 and the translation matrix T = [Tx Ty Tz]^T solved in step 4.6, i.e. obtain the camera coordinates by formula (13):
In formula (13), (X, Y, Z) on the left-hand side are the camera coordinates;
at this point the resolution of the camera position is complete and the three-dimensional coordinates of the camera position have been obtained.
2. The visual two-dimensional-code indoor positioning method using self-designed feature patterns according to claim 1, characterised in that the key points in designing the feature patterns in step 1.2 are: the gradient variation inside the pattern must not be too large, yet the principal direction must be distinct; the histogram is built with an N-degree step over 360/N directions with different gradient values; the auxiliary directions account for 50% or more of the principal direction; and the gradient distribution histogram must not be completely symmetric, so that the pattern still has distinctive features after rotation;
here the principal direction is the bin of the gradient distribution histogram containing the gradient maximum, and an auxiliary direction is a direction whose gradient value exceeds 80% of the maximum gradient value.
3. The visual two-dimensional-code indoor positioning method using self-designed feature patterns according to claim 1, characterised in that identifying the four feature patterns of the digital image output by step 2 in step 3 yields the pixel positions (u, v) of the four feature points in the digital image; formula (1) then gives the coordinates (x, y) of the four feature points in the image coordinate system, also called the coordinates in the code's digital image, and these coordinates are exactly the extracted code content.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810952600.6A CN109029464B (en) | 2018-08-21 | 2018-08-21 | Visual two-dimensional code indoor positioning method with self-designed characteristic graph |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810952600.6A CN109029464B (en) | 2018-08-21 | 2018-08-21 | Visual two-dimensional code indoor positioning method with self-designed characteristic graph |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109029464A true CN109029464A (en) | 2018-12-18 |
CN109029464B CN109029464B (en) | 2021-05-14 |
Family
ID=64626673
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810952600.6A Active CN109029464B (en) | 2018-08-21 | 2018-08-21 | Visual two-dimensional code indoor positioning method with self-designed characteristic graph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109029464B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110481602A * | 2019-07-15 | 2019-11-22 | 广西柳钢东信科技有限公司 | Real-time positioning method and device for a rail-based conveyor system |
CN110580721A * | 2019-09-04 | 2019-12-17 | 吴怡锦 | Continuous-area positioning system and method based on a global identification map and visual image recognition |
CN112229380A * | 2020-10-15 | 2021-01-15 | 西北工业大学 | Real-time passive target positioning method based on multi-rotor UAV cooperation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140163868A1 * | 2012-12-10 | 2014-06-12 | Chiun Mai Communication Systems, Inc. | Electronic device and indoor navigation method |
CN106969766A * | 2017-03-21 | 2017-07-21 | 北京品创智能科技有限公司 | Indoor autonomous navigation method based on monocular vision and two-dimensional-code landmarks |
CN107421542A * | 2017-06-07 | 2017-12-01 | 东莞理工学院 | Indoor positioning system and positioning method based on machine vision and WSN |
CN107689063A * | 2017-07-27 | 2018-02-13 | 南京理工大学北方研究院 | Robot indoor positioning method based on ceiling images |
Non-Patent Citations (1)
Title |
---|
CAO Lin: "A fast two-dimensional code positioning technique for indoor mobile robots", Enterprise Science and Technology & Development *
Also Published As
Publication number | Publication date |
---|---|
CN109029464B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6338021B2 (en) | Image processing apparatus, image processing method, and image processing program | |
CN109029464A (en) | Visual two-dimensional code indoor positioning method with self-designed characteristic graph | |
CN112066879A (en) | Air-bearing motion simulator pose measurement device and method based on computer vision | |
CN105654476B (en) | Binocular calibration method based on a chaotic particle swarm optimization algorithm | |
CN106408601B (en) | GPS-based binocular fusion positioning method and device | |
CN111968177B (en) | Mobile robot positioning method based on fixed-camera vision | |
CN108629829B (en) | Three-dimensional modeling method and system combining a dome camera with a depth camera | |
CN104333675A (en) | Panoramic electronic image stabilization method based on spherical projection | |
CN106295512B (en) | Marker-based indoor visual database construction method and multi-correction-line indoor positioning method | |
CN109448054A (en) | Vision-fusion-based stepwise target positioning method, application, apparatus and system | |
CN104766309A (en) | Planar feature point navigation and positioning method and device | |
Ying et al. | Fisheye lenses calibration using straight-line spherical perspective projection constraint | |
CN105447856B (en) | Reference point matching method based on robot motion parameters and feature vectors | |
CN104463791A (en) | Fisheye image correction method based on a spherical model | |
CN104657982A (en) | Calibration method for a projector | |
CN106444846A (en) | Unmanned aerial vehicle, and method and device for positioning and controlling a mobile terminal | |
CN109102546A (en) | Robot camera calibration method based on multiple calibration boards | |
CN106157322B (en) | Camera installation position calibration method based on a plane mirror | |
CN108489398A (en) | Method for measuring three-dimensional coordinates in wide-angle scenes using a laser combined with monocular vision | |
CN105335977B (en) | Camera system and method for locating a target object | |
CN110361005A (en) | Positioning method, positioning device, readable storage medium and electronic equipment | |
CN106570907A (en) | Camera calibration method and device | |
CN109003309A (en) | High-precision camera calibration and object pose estimation method | |
CN103994779B (en) | Panoramic camera calibration method based on a three-dimensional laser point cloud | |
CN105955260B (en) | Mobile robot position recognition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||