CN105243366A - Two-dimensional code based vehicle positioning method - Google Patents

Two-dimensional code based vehicle positioning method

Info

Publication number
CN105243366A
CN105243366A (application CN201510650214.8A)
Authority
CN
China
Prior art keywords
quick response
response code
camera
vehicle
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510650214.8A
Other languages
Chinese (zh)
Inventor
黄志建
刘天建
韩飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Oso Solutions Technology Co Ltd
Original Assignee
Beijing Oso Solutions Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Oso Solutions Technology Co Ltd filed Critical Beijing Oso Solutions Technology Co Ltd
Priority to CN201510650214.8A priority Critical patent/CN105243366A/en
Publication of CN105243366A publication Critical patent/CN105243366A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a two-dimensional code based vehicle positioning method, in which the vehicle's position is calculated from two-dimensional code information read from the ground. The vehicle carries a light source and a vision camera; the camera may be placed at the rear or the front of the vehicle and faces the ground at a certain angle. Two-dimensional codes are distributed on the floor toward which the camera faces, so that at least one two-dimensional code is in the camera's shooting area at any time while the vehicle operates. Each two-dimensional code corresponds to a unique ID, so the vehicle's location in the specific environment can be calculated in real time by image processing, providing location information for vehicle navigation.

Description

A vehicle positioning method based on two-dimensional (QR) codes
Technical field
The present invention relates to navigation and localization methods for moving vehicles, and in particular to a localization method that determines the position of a vehicle equipped with a visual positioning system by identifying QR codes on the ground.
Background art
Modern autonomous vehicles are equipped with a variety of sensors; the geographical-environment information these sensors identify provides the basis for vehicle navigation. Such sensors include ultrasonic sensors, physical-contact displacement sensors, laser sensors, and vision sensors capable of capturing 2D and 3D data.
However, prior-art methods suffer from drawbacks such as high cost, slow computation, or low precision. The field therefore needs a method that is low-cost, fast, and able to meet these requirements.
Summary of the invention
The object of the present invention is to provide a QR-code-based vehicle positioning method that calculates the vehicle's position mainly from QR code information read from the ground; it is low-cost and fast.
The technical solution adopted by the present invention is as follows.
First, image information is read from the camera, and the acquired image is preprocessed (color conversion, image enhancement, image scaling). The QR codes in the image are then detected; the image coordinates of their corner points are extracted from the QR code image; the distance from the camera to the QR code corners is calculated; and the ID of the specific QR code is identified. From the geographical relationship between the QR code ID and the site map, the position and attitude of the vehicle in the map environment are obtained.
When determining the vehicle's coordinates, the vision system must perceive at least one QR code.
Each QR code ID is unique within the site and is associated with the site map; this association is supplied to the vehicle's on-board vision system, which uses it to confirm the vehicle's location.
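The final step just described — looking up a decoded QR code ID in the site map and combining it with the pose measured relative to that code — can be sketched as follows. The map contents, the IDs, and the simple 2-D pose composition are illustrative assumptions, not taken from the patent:

```python
import math

# Hypothetical site map: QR code ID -> (x, y, yaw) pose of the code in the
# site/world frame. In a real deployment this would come from the site's
# QR code coordinate file mentioned in the text.
MARKER_MAP = {
    1001: (0.0, 0.0, 0.0),
    1002: (2.0, 0.0, 0.0),
}

def vehicle_pose_from_marker(marker_id, rel_x, rel_y, rel_yaw):
    """Compose the marker's world pose with the camera pose measured
    relative to the marker (rel_* would come from the corner-based pose
    estimate) to obtain the vehicle's world pose."""
    mx, my, myaw = MARKER_MAP[marker_id]
    c, s = math.cos(myaw), math.sin(myaw)
    # Rotate the relative offset into the world frame, then translate.
    wx = mx + c * rel_x - s * rel_y
    wy = my + s * rel_x + c * rel_y
    wyaw = (myaw + rel_yaw) % (2 * math.pi)
    return wx, wy, wyaw
```

For example, a camera 0.5 m ahead and 0.25 m to the right of code 1002 yields a world pose of roughly (2.5, -0.25).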
Compared with the prior art, the present invention has the following advantages.
1. Simplicity: a simple method gives the vehicle's vision system the geographic information and the geometric relationship to the map. The QR code, a rapidly developing automatic-identification technology and one of the lowest-level information-recording media of this field, carries inherent advantages (low cost, high storage density, very fast recognition, strong error-correcting capability) and plays an important role in automatic identification. The present invention exploits the simplicity, convenience, and high recognition rate of QR codes to identify the vehicle's position.
2. Fast processing: the method preprocesses the image acquired by the vision camera (color conversion, image enhancement, image scaling), detects the QR codes in the image, extracts the image coordinates of their corner points, reads the spatial coordinates of the corners from the site's QR code coordinate file according to the QR code ID, and estimates the camera's position and attitude from the image and spatial coordinates of the corners. Because the image is scaled and color-converted, and only the corner information of the QR code needs to be extracted, the computational load of image processing is reduced and fast processing is achieved.
3. Feature extraction: according to the features and frame design of the specific QR code, the frame corner coordinates can be used to compute localization information, and decoding the QR code yields its unique ID.
4. High-accuracy three-dimensional information: the X, Y, Z distances from the origin of the vision sensor's coordinate system to the origin of the QR code's coordinate system can be measured with centimeter-level accuracy.
5. Robustness: for a vision system, changes in illumination and uneven brightness reduce recognition accuracy and can even cause recognition to fail. The present invention improves system robustness both by adding an auxiliary light source and through the algorithm itself, and can meet the requirements of common scenes.
6. Simple site deployment: little modification of the vehicle's operating environment is required.
7. Good scalability: the number of vision cameras can be increased as needed without rewriting code; only some parameters need to be modified, meeting the demand for multi-camera recognition.
Brief description of the drawings
Fig. 1 is the algorithm flow chart of the present invention.
Fig. 2 is a schematic diagram of the transformation between the camera and a marker in 3-D space.
Fig. 3 shows an embodiment of the QR code used in the present invention.
Embodiment
Fig. 1 shows the flow chart of the method of the present invention. According to the method, during vehicle localization, image information is first read from the camera and the acquired image is preprocessed (color conversion, image enhancement, image scaling). The QR code (MARKER) in the image is then detected; the image coordinates of its corner points are extracted from the QR code image; the distance from the camera to the QR code corners is calculated; and the ID of the specific QR code is identified. From the geographical relationship between the QR code ID and the site map, the position and attitude of the vehicle in the map environment are obtained.
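The preprocessing step (color conversion followed by image scaling) can be sketched as follows. The BT.601 grayscale weights and the integer subsampling factor are illustrative choices, not specified by the patent:

```python
import numpy as np

def preprocess(rgb, scale=2):
    """Color conversion (RGB -> grayscale) followed by simple image
    scaling via integer subsampling. The luma coefficients are the
    common ITU-R BT.601 weights; real systems may instead use mean
    pooling or library resizing."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return gray[::scale, ::scale]
```

A 4x4x3 input, for instance, becomes a 2x2 grayscale image, roughly quartering the pixel count before QR detection.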
When determining the vehicle's coordinates, the vision system must perceive at least one QR code.
Each QR code ID is unique within the site and is associated with the site map; this association is supplied to the vehicle's on-board vision system, which uses it to confirm the vehicle's location.
The steps for obtaining the image coordinates of the corner points from the MARKER image are described below.
One: contour tracing on the binary image.
The steps for determining contours in the binary image f(i, j) are as follows:
1) Search for a contour starting point:
a) if f(i, j) = 1 and f(i, j-1) = 0, the current pixel (i, j) is the starting point of an outer border; the current contour number CCC is incremented by 1 and (i2, j2) is set to (i, j-1);
b) if f(i, j) >= 1 and f(i, j+1) = 0, the current pixel (i, j) is the starting point of an inner (hole) border; CCC is incremented by 1 and (i2, j2) is set to (i, j+1); otherwise, go to step 3).
2) Trace the contour from the starting point:
a) in the neighborhood of the current pixel (i, j), search clockwise starting from (i2, j2) for the first non-zero pixel; if one is found, denote it (i1, j1); otherwise set f(i, j) = -CCC and go to step 3);
b) set (i2, j2) = (i1, j1) and (i3, j3) = (i, j);
c) in the neighborhood of (i3, j3), search counter-clockwise starting from the pixel after (i2, j2) for the first non-zero pixel; denote it (i4, j4);
d) update the value of pixel (i3, j3): if f(i3, j3+1) ≠ 0 and f(i3, j3) = 1, set f(i3, j3) = CCC; if f(i3, j3+1) = 0, f(i3, j3) = 1, and f(i3, j3-1) = 0, set f(i3, j3) = CCC; if f(i3, j3+1) = 0 and f(i3, j3) = 1, set f(i3, j3) = -CCC; otherwise f(i3, j3) remains unchanged;
e) if (i4, j4) = (i, j) and (i3, j3) = (i1, j1), the trace has returned to the contour's starting point; go to step 3); otherwise set (i2, j2) = (i3, j3) and (i3, j3) = (i4, j4), and go to substep c).
3) Judge the scanning condition: continue scanning from pixel (i, j+1) until the pixel at the lower-right corner of the image is reached; if j+1 exceeds the image width, set j = 0 and i = i+1 and scan from the next row.
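The starting-point tests of step 1) can be sketched as a raster scan; the border tracing of step 2) is omitted here, and the out-of-bounds handling (treating pixels outside the image as 0) is an illustrative assumption:

```python
def find_border_starts(f):
    """Raster-scan a binary image f (list of lists of 0/1 values) and
    apply the starting-point tests from step 1): a pixel with
    f[i][j] == 1 and f[i][j-1] == 0 starts an outer border; a pixel with
    f[i][j] >= 1 and f[i][j+1] == 0 starts an inner (hole) border.
    Returns (kind, i, j, CCC) tuples; tracing (step 2) is not shown."""
    starts, ccc = [], 1
    h, w = len(f), len(f[0])
    for i in range(h):
        for j in range(w):
            left = f[i][j - 1] if j > 0 else 0    # outside image -> 0
            right = f[i][j + 1] if j < w - 1 else 0
            if f[i][j] == 1 and left == 0:
                ccc += 1
                starts.append(("outer", i, j, ccc))
            elif f[i][j] >= 1 and right == 0:
                ccc += 1
                starts.append(("hole", i, j, ccc))
    return starts
```

On a 2x2 block of ones inside a 4x4 image, the first start found is the outer-border start at the block's top-left pixel.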
Two: finding the corner (inflection) points of the contours.
The method for finding the corner points of a contour is as follows:
M = \sum_{(x,y)} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R
Through diagonalization of this real symmetric matrix, R can be regarded as a rotation factor that does not affect the variation components along the two orthogonal directions. After diagonalization, the variation components of the two orthogonal directions are "extracted" as the eigenvalues λ1 and λ2, from which corners, edges, and flat regions can then be distinguished.
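The matrix M and its eigenvalues can be sketched with NumPy. The uniform window w = 1 and the synthetic test patch below are illustrative assumptions:

```python
import numpy as np

def structure_tensor_eigvals(patch):
    """Build M = sum w(x,y) [[Ix^2, Ix*Iy], [Ix*Iy, Iy^2]] over a patch
    (uniform window w = 1 here) and return its eigenvalues in ascending
    order. Two large eigenvalues indicate a corner, one large eigenvalue
    an edge, and two small eigenvalues a flat region."""
    iy, ix = np.gradient(patch.astype(float))  # row (y) and column (x) gradients
    m = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    return np.linalg.eigvalsh(m)

# Synthetic corner: a bright quadrant in an otherwise dark 9x9 patch.
patch = np.zeros((9, 9))
patch[4:, 4:] = 1.0
l1, l2 = structure_tensor_eigvals(patch)
```

Because M is a sum of outer products it is positive semi-definite; at this synthetic corner both eigenvalues come out clearly positive.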
The steps above yield accurate marker corner points, from which the transformation between the camera and the marker in 3-D space can be estimated. In this process, the Euclidean transformation between the camera and the object comprises only a rotation and a translation.
As shown in Fig. 2, O is the center of the camera; A, B, C, D are points in 3-D world coordinates; and a, b, c, d are their projections on the camera image plane. The goal below is to use the intrinsic matrix and the points known on the image plane to find the transformation between the known marker position in 3-D space and the camera O.
Since the marker is always square and all its vertices lie in the same plane, its corner points are defined as shown in Fig. 3 in order to obtain the marker's position coordinates in 3-D space.
Fig. 3 shows an embodiment of the QR code used in the present invention; it preferably has a standard-sized black-and-white frame to improve the recognition success rate.
The marker is placed in the xy-plane (i.e. z = 0), with the marker's central point at (0, 0, 0). The origin of this coordinate system is the marker's central point.
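The corner definition just described (a square marker in the z = 0 plane, centered at the origin) can be captured in a small helper; the corner ordering and the side length used in the example are illustrative choices, not fixed by the patent:

```python
def marker_corners(side):
    """World coordinates of a square marker's four corners when the
    marker lies in the z = 0 plane with its center at the origin.
    Corner order (a conventional choice here): top-left, top-right,
    bottom-right, bottom-left."""
    h = side / 2.0
    return [(-h, h, 0.0), (h, h, 0.0), (h, -h, 0.0), (-h, -h, 0.0)]
```

For a 0.2 m marker this gives corners at (+/-0.1, +/-0.1, 0), which pair with the detected image corners in the 2-D/3-D correspondence below.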
Once the QR code has been identified, the position of the camera is found from the 2-D to 3-D point correspondences.
The three-dimensional reconstruction algorithm for finding the camera position is as follows.
Let the pose to be solved consist of a rotation matrix R and a translation vector T, respectively:
R = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix} = \begin{bmatrix} R_1^T \\ R_2^T \\ R_3^T \end{bmatrix}, \qquad T = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}
Perspective projection transformation is:
x = \frac{f}{Z_c} X_c, \qquad y = \frac{f}{Z_c} Y_c
where (x, y) are the image-plane coordinates in millimeters and (X_c, Y_c, Z_c) are the camera-frame coordinates in millimeters.
Here f is the camera's focal length. Its specific value is unimportant; what matters are the ratios between f and x, y, which can be obtained from the intrinsic parameters f_x and f_y of the camera's intrinsic matrix. In actual computation one may simply set f = 1, provided the corresponding x and y are scaled proportionally. For example, for a camera with intrinsic parameters [f_x, f_y, u_0, v_0], if a pixel lies at position (u, v), the corresponding x and y are
x = (u - u_0) f / f_x, \qquad y = (v - v_0) f / f_y
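The pixel-to-image-coordinate conversion follows directly from these formulas; the intrinsic parameter values in the example below are illustrative, not from the patent:

```python
def pixel_to_normalized(u, v, fx, fy, u0, v0, f=1.0):
    """Convert a pixel position (u, v) to the (x, y) image coordinates
    used in the projection equations:
        x = (u - u0) * f / fx,   y = (v - v0) * f / fy.
    With f = 1 these are normalized image coordinates."""
    return (u - u0) * f / fx, (v - v0) * f / fy

# Illustrative intrinsics: fx = fy = 800, principal point (320, 240).
x, y = pixel_to_normalized(1120, 240, 800, 800, 320, 240)
```

A pixel 800 columns right of the principal point maps to x = 1.0, i.e. a ray at 45 degrees from the optical axis when f = 1.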
For a point (X_w, Y_w, Z_w) in the world coordinate system,
\begin{bmatrix} Z_c x \\ Z_c y \\ Z_c \end{bmatrix} = \begin{bmatrix} f X_c \\ f Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} f R_1^T & f T_x \\ f R_2^T & f T_y \\ R_3^T & T_z \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
Dividing both sides by T_z gives
\begin{bmatrix} w x \\ w y \\ w \end{bmatrix} = \begin{bmatrix} s R_1^T & s T_x \\ s R_2^T & s T_y \\ R_3^T / T_z & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad \text{(the original equation)}
Wherein
w = Z_c / T_z, \qquad W = [X_w \; Y_w \; Z_w]^T, \qquad s = f / T_z
The concrete meanings of the rotation matrix R and the translation vector T are as follows:
the i-th row of R is the unit vector of the i-th camera-frame coordinate axis expressed in world coordinates;
the i-th column of R is the unit vector of the i-th world-frame coordinate axis expressed in camera coordinates;
T is exactly the coordinates of the world origin in the camera frame; in particular, T_z represents the "depth" of the world origin in the camera frame.
Assume that the object's "thickness" in the Z direction — the range of Z coordinates of the object's surface points in the camera frame — is much smaller than the object's mean depth in the Z direction. Both "thickness" and "depth" here refer to the Z axis of the camera frame. When the world origin lies near the object's center, the mean depth is just the T_z component of the translation vector T; that is, the mean of the points' Z_c values is T_z, and since the variation of Z_c is very small relative to T_z, Z_c stays close to T_z throughout: Z_c ≈ T_z.
According to this approximation relation, can obtain
w = Z_c / T_z \approx 1
This is the initial value of the iteration. In this initial state, all points of the object are assumed to lie at the same depth, and the perspective transformation degenerates into a scaled orthographic projection (POS). That is, the iteration starts from a scaled orthographic projection.
From the equation obtained above:
\begin{bmatrix} w x \\ w y \\ w \end{bmatrix} = \begin{bmatrix} s R_1^T & s T_x \\ s R_2^T & s T_y \\ R_3^T / T_z & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
Since w now has an estimated value, it can first be treated as a known quantity. Deleting the third row (which removes 4 unknowns from the system and makes it easier to solve) gives
\begin{bmatrix} w x \\ w y \end{bmatrix} = \begin{bmatrix} s R_1^T & s T_x \\ s R_2^T & s T_y \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} s R_{11} & s R_{12} & s R_{13} & s T_x \\ s R_{21} & s R_{22} & s R_{23} & s T_y \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}

w x = s R_{11} X_w + s R_{12} Y_w + s R_{13} Z_w + s T_x, \qquad w y = s R_{21} X_w + s R_{22} Y_w + s R_{23} Z_w + s T_y \quad \text{(the iterative equations)}
Since w is treated as known, the iterative equations contain 8 unknowns: sR11, sR12, sR13, sTx, sR21, sR22, sR23, sTy.
These 8 unknowns can be grouped into 3 vectors:
s R_1 = \begin{bmatrix} s R_{11} \\ s R_{12} \\ s R_{13} \end{bmatrix}, \qquad s R_2 = \begin{bmatrix} s R_{21} \\ s R_{22} \\ s R_{23} \end{bmatrix}, \qquad s T = \begin{bmatrix} s T_x \\ s T_y \end{bmatrix}
Each given pair of coordinates (one in the world frame and one in the image frame, corresponding to the same point) yields 2 independent equations. Since 8 independent equations are needed in total, at least 4 coordinate pairs are required, and the 4 corresponding points must not be coplanar in the world frame. If the 4th point were coplanar with the first three, its homogeneous coordinates could be expressed as a linear combination of the homogeneous coordinates of the other three; since the right-hand side of the iterative equations uses homogeneous coordinates, the equation obtained from that 4th point would not be independent. "Homogeneous coordinates" are emphasized here because, as long as three points are not collinear, the ordinary coordinates of any other point (even a non-coplanar one) can be expressed as a linear combination of their ordinary coordinates, whereas such a linear combination of homogeneous coordinates requires coplanarity.
With 4 non-coplanar points and their coordinates, the 8 unknowns are obtained from the iterative equations. The moduli of the vectors sR_1 and sR_2 can then be calculated; since R_1 and R_2 are themselves unit vectors (modulus 1), s can be obtained, and hence R_1, R_2, and T_z = f/s.
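One scaled-orthographic (POS) solve of the iterative equations can be sketched with NumPy least squares. The synthetic pose used to check it (R = identity, T = (0.1, 0.2, 5), f = 1, hence s = f/T_z = 0.2) is an illustrative assumption, and averaging the two row norms to recover s is one common choice:

```python
import numpy as np

def pos_step(world_pts, img_x, img_y, w=None):
    """With w ~ 1, solve  w*x = sR1 . W + sTx  and  w*y = sR2 . W + sTy
    for the 8 unknowns from >= 4 non-coplanar points, then recover s
    from the norms of sR1 and sR2 (R1, R2 are unit vectors).
    Returns unit vectors R1, R2 and the scale s."""
    n = len(world_pts)
    w = np.ones(n) if w is None else np.asarray(w)
    # Rows of A are the homogeneous world coordinates [Xw, Yw, Zw, 1].
    A = np.hstack([np.asarray(world_pts, float), np.ones((n, 1))])
    p1, *_ = np.linalg.lstsq(A, w * img_x, rcond=None)  # [sR1, sTx]
    p2, *_ = np.linalg.lstsq(A, w * img_y, rcond=None)  # [sR2, sTy]
    s = 0.5 * (np.linalg.norm(p1[:3]) + np.linalg.norm(p2[:3]))
    return p1[:3] / np.linalg.norm(p1[:3]), p2[:3] / np.linalg.norm(p2[:3]), s

# Synthetic check: R = I, T = (0.1, 0.2, 5), f = 1  =>  s = 0.2.
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # non-coplanar
img_x = np.array([0.2 * (X + 0.1) for X, Y, Z in pts])
img_y = np.array([0.2 * (Y + 0.2) for X, Y, Z in pts])
R1, R2, s = pos_step(pts, img_x, img_y)
```

With exact scaled-orthographic data and exactly 4 non-coplanar points the linear system is square and invertible, so s and the two rotation rows are recovered exactly; a full POSIT loop would then re-estimate w from R_3 and repeat.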
The above shows and describes the basic principle, principal features, and advantages of the present invention. Those skilled in the art should understand that the present invention is not restricted to the embodiments described; the embodiments and the description merely illustrate the principle of the invention. Without departing from the spirit and scope of the invention, various changes and modifications are possible, and all such changes and improvements fall within the claimed scope.

Claims (8)

1. A vehicle positioning method, the vehicle being provided with a camera for detecting QR codes on the ground, characterized in that the method comprises the following steps:
Step 1: reading image information from the camera;
Step 2: preprocessing the acquired image;
Step 3: detecting the QR code in the image and obtaining the image coordinates of its corner points from the QR code image;
Step 4: calculating the distance from the camera to the QR code corner points;
Step 5: identifying the ID of the QR code and, from the geographical relationship between the QR code ID and the site map, obtaining the position and attitude of the vehicle in the map environment.
2. The method according to claim 1, characterized in that the preprocessing comprises color conversion, image enhancement and/or image scaling.
3. The method according to claim 1, characterized in that step 3 further comprises:
Step 3.1: performing contour tracing on the binary image;
Step 3.2: finding the corner points of the contours.
4. The method according to claim 3, characterized in that step 3.1 further comprises:
1) Searching for a contour starting point:
a) if the image data satisfy f(i, j) = 1 and f(i, j-1) = 0, the current pixel (i, j) is the starting point of an outer border; the current contour number CCC is incremented by 1 and (i2, j2) is set to (i, j-1);
b) if the image data satisfy f(i, j) >= 1 and f(i, j+1) = 0, the current pixel (i, j) is the starting point of an inner (hole) border; CCC is incremented by 1 and (i2, j2) is set to (i, j+1); otherwise, go to step 3).
2) Tracing the contour from the starting point:
a) in the neighborhood of the current pixel (i, j), searching clockwise starting from (i2, j2) for the first non-zero pixel; if one is found, denoting it (i1, j1); otherwise setting f(i, j) = -CCC and going to step 3);
b) setting (i2, j2) = (i1, j1) and (i3, j3) = (i, j);
c) in the neighborhood of (i3, j3), searching counter-clockwise starting from the pixel after (i2, j2) for the first non-zero pixel and denoting it (i4, j4);
d) updating the value of pixel (i3, j3): if f(i3, j3+1) ≠ 0 and f(i3, j3) = 1, setting f(i3, j3) = CCC; if f(i3, j3+1) = 0, f(i3, j3) = 1, and f(i3, j3-1) = 0, setting f(i3, j3) = CCC; if f(i3, j3+1) = 0 and f(i3, j3) = 1, setting f(i3, j3) = -CCC; otherwise leaving f(i3, j3) unchanged;
e) if (i4, j4) = (i, j) and (i3, j3) = (i1, j1), the trace has returned to the contour's starting point, and going to step 3); otherwise setting (i2, j2) = (i3, j3) and (i3, j3) = (i4, j4), and going to substep c).
3) Judging the scanning condition: continuing scanning from pixel (i, j+1) until the pixel at the lower-right corner of the image is reached; if j+1 exceeds the image width, setting j = 0 and i = i+1 and scanning from the next row.
5. The method according to claim 3, characterized in that the method for finding the corner points of a contour in step 3.2 is as follows:
M = \sum_{(x,y)} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R
Through diagonalization of this real symmetric matrix, R is a rotation factor that does not affect the variation components along the orthogonal directions. After diagonalization, the variation components of the two orthogonal directions are extracted as the eigenvalues λ1 and λ2, from which corners, edges, and flat regions are then analyzed.
6. The method according to claim 1, characterized in that the QR code is formed with a standard-sized black-and-white frame.
7. The method according to claim 1, characterized in that the camera is placed at the rear and/or the front of the vehicle, and the camera faces the ground at a certain angle.
8. The method according to claim 7, characterized in that QR codes are distributed on the ground toward which the camera faces, so as to ensure that at least one QR code is within the camera's shooting area at any time during vehicle operation.
CN201510650214.8A 2015-10-10 2015-10-10 Two-dimensional code based vehicle positioning method Pending CN105243366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510650214.8A CN105243366A (en) 2015-10-10 2015-10-10 Two-dimensional code based vehicle positioning method


Publications (1)

Publication Number Publication Date
CN105243366A true CN105243366A (en) 2016-01-13

Family

ID=55041008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510650214.8A Pending CN105243366A (en) 2015-10-10 2015-10-10 Two-dimensional code based vehicle positioning method

Country Status (1)

Country Link
CN (1) CN105243366A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430204A (en) * 2007-11-09 2009-05-13 北京华旗资讯数码科技有限公司 Method for implementing area navigation through reading image code
US20100241343A1 (en) * 2009-03-20 2010-09-23 Electronics And Telecommunications Research Institute Apparatus and method for recognizing traffic line
CN104142683A (en) * 2013-11-15 2014-11-12 上海快仓智能科技有限公司 Automated guided vehicle navigation method based on two-dimension code positioning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王建功等 (Wang Jiangong et al.): "室内图书运载车的计算机视觉定位定向方法" [A computer-vision positioning and orientation method for an indoor book transport vehicle], 《闽江学院学报》 (Journal of Minjiang University) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106323294B (en) * 2016-11-04 2023-06-09 新疆大学 Positioning method and positioning device for substation inspection robot
CN106323294A (en) * 2016-11-04 2017-01-11 新疆大学 Positioning method and device for patrol robot of transformer substation
CN106507045A * 2016-11-07 2017-03-15 立德高科(北京)数码科技有限责任公司 Method and system for locating lost persons using identification codes, and terminal and server platform therefor
CN106507045B * 2016-11-07 2018-09-18 立德高科(北京)数码科技有限责任公司 Method and system for locating lost persons using identification codes, and terminal and server platform therefor
CN107301367B (en) * 2017-05-31 2021-08-03 深圳Tcl数字技术有限公司 Distance detection and display method, terminal, display device and storage medium
CN107301367A (en) * 2017-05-31 2017-10-27 深圳Tcl数字技术有限公司 Distance detection and display methods, terminal, display device and storage medium
CN108549397A * 2018-04-19 2018-09-18 武汉大学 UAV autonomous landing method and system assisted by QR codes and inertial navigation
CN109189076B (en) * 2018-10-24 2021-08-31 湖北三江航天万山特种车辆有限公司 Heavy guided vehicle positioning method based on visual sensor and heavy guided vehicle
CN109189076A * 2018-10-24 2019-01-11 湖北三江航天万山特种车辆有限公司 Heavy-duty guided vehicle positioning method based on a vision sensor, and heavy-duty guided vehicle
CN109920266A * 2019-02-20 2019-06-21 武汉理工大学 An intelligent vehicle localization method
CN110262507A * 2019-07-04 2019-09-20 杭州蓝芯科技有限公司 A camera array robot localization method and device based on 5G communication
CN110262507B * 2019-07-04 2022-07-29 杭州蓝芯科技有限公司 Camera array robot positioning method and device based on 5G communication
CN110345937A * 2019-08-09 2019-10-18 东莞市普灵思智能电子有限公司 Navigation pose determination method and system based on two-dimensional codes
CN112308899A (en) * 2020-11-09 2021-02-02 北京经纬恒润科技股份有限公司 Trailer angle identification method and device
CN112308899B (en) * 2020-11-09 2024-05-07 北京经纬恒润科技股份有限公司 Trailer angle identification method and device
CN113176594A (en) * 2021-03-12 2021-07-27 中国软件评测中心(工业和信息化部软件与集成电路促进中心) Vehicle-mounted early warning test method and device based on sand table, computer and storage medium

Similar Documents

Publication Publication Date Title
CN105243366A (en) Two-dimensional code based vehicle positioning method
Wang et al. Intensity scan context: Coding intensity and geometry relations for loop closure detection
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
Guindel et al. Automatic extrinsic calibration for lidar-stereo vehicle sensor setups
US6906620B2 (en) Obstacle detection device and method therefor
Lategahn et al. Vision-only localization
US8180100B2 (en) Plane detector and detecting method
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN113096183B (en) Barrier detection and measurement method based on laser radar and monocular camera
US20230251097A1 (en) Efficient map matching method for autonomous driving and apparatus thereof
JP2023021098A (en) Map construction method, apparatus, and storage medium
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
Hara et al. Vehicle localization based on the detection of line segments from multi-camera images
US20090226094A1 (en) Image correcting device and method, and computer program
Higuchi et al. 3D measurement of large structure by multiple cameras and a ring laser
Kim et al. Automatic multiple lidar calibration based on the plane features of structured environments
CN115761684A (en) AGV target recognition and attitude angle resolving method and system based on machine vision
Schilling et al. Mind the gap-a benchmark for dense depth prediction beyond lidar
Kawasaki et al. Line-based SLAM using non-overlapping cameras in an urban environment
Liu et al. The robust semantic slam system for texture-less underground parking lot
US11348278B2 (en) Object detection
US20230252751A1 (en) Method for aligning at least two images formed by three-dimensional points
KR102624644B1 (en) Method of estimating the location of a moving object using vector map
Sutherland Ordering landmarks in a view

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160113