CN113220818A - Automatic mapping and high-precision positioning method for parking lot - Google Patents

Automatic mapping and high-precision positioning method for parking lot

Info

Publication number
CN113220818A
CN113220818A (application CN202110581367.7A)
Authority
CN
China
Prior art keywords
visual
vehicle
parking lot
visual mark
mark
Prior art date
Legal status
Granted
Application number
CN202110581367.7A
Other languages
Chinese (zh)
Other versions
CN113220818B (en)
Inventor
赵君峤
李思远
邢昊宇
熊璐
邓振文
Current Assignee
Nanchang Intelligent New Energy Vehicle Research Institute
Original Assignee
Nanchang Intelligent New Energy Vehicle Research Institute
Priority date
Filing date
Publication date
Application filed by Nanchang Intelligent New Energy Vehicle Research Institute filed Critical Nanchang Intelligent New Energy Vehicle Research Institute
Priority to CN202110581367.7A
Publication of CN113220818A
Application granted
Publication of CN113220818B
Legal status: Active
Anticipated expiration

Classifications

    • G06F16/29: Information retrieval; geographical information databases
    • G01C11/04: Photogrammetry or videogrammetry; interpretation of pictures
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/165: Dead reckoning by integrating acceleration or speed, combined with non-inertial navigation instruments
    • G06N3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N3/08: Neural networks; learning methods
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern
    • Y02T10/40: Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to an automatic mapping and high-precision positioning method for parking lots and similar GNSS-denied environments, suitable for autonomous driving of unmanned vehicles in a parking lot. A low-cost forward-facing monocular camera robustly identifies visual markers laid out in the environment in advance; a text detection and recognition neural network performs text detection on the rectangular boxes extracted from the image; low-cost inertial navigation equipment observes the heading and speed of the vehicle; and a SLAM algorithm produces optimized estimates of the marker positions in the environment, automatically constructing a planar map of the garage, which is then used for real-time, high-precision, decimeter-level positioning of the unmanned vehicle.

Description

Automatic mapping and high-precision positioning method for parking lot
Technical Field
The invention belongs to the technical field of automotive electronics, and particularly relates to an automatic mapping and high-precision positioning method for parking lots, applied to parking lots and similar GNSS-denied (Global Navigation Satellite System) environments, and suitable for autonomous driving of unmanned vehicles in a parking lot environment.
Background
With the rise of artificial intelligence, the automobile industry has undergone sweeping changes: Internet giants and traditional automakers alike have invested enormous manpower and material resources in the research, development, and deployment of autonomous-driving technology. High-precision maps are an indispensable link for autonomous vehicles: they help the vehicle perceive complex road information in advance, provide real-time high-precision self-localization, and, combined with intelligent path planning, let the vehicle make correct decisions. However, real-time construction, updating, and use of high-precision maps remain a huge industrial obstacle. In the traditional mode, survey vehicles collect information across the whole country, the map data are then processed and supplemented, an update package is pushed to the user terminal after mapping is completed, and only after the terminal finishes downloading the data is the map updated once. This update process is tedious and slow and cannot meet the map-updating and positioning needs of autonomous vehicles: a high-precision map requires vehicle position deviation below 10 cm and must reflect real-time road conditions, placing still higher demands on map updating.
At present, indoor parking-lot positioning relies mainly on wireless equipment. On one hand, a large number of base stations must be installed, so the cost is high; on the other hand, the positioning accuracy (meter level) hardly meets the requirements of unmanned driving. In addition, robot simultaneous localization and mapping (SLAM) based on LiDAR is limited by the cost of the LiDAR sensor and is difficult to equip on mass-production vehicle models, while purely visual SLAM is limited by the lack of rich texture in parking-lot environments and cannot guarantee accuracy and availability.
Disclosure of Invention
The invention provides an automatic mapping and high-precision positioning method for parking lots. A low-cost forward-facing monocular camera robustly identifies visual markers laid out in the environment in advance; a text detection and recognition neural network performs text detection on the rectangular boxes extracted from the image; low-cost inertial navigation equipment observes the heading and speed of the vehicle; and a SLAM algorithm produces optimized estimates of the marker positions, automatically constructing a planar map of the garage, which is then used for real-time, high-precision, decimeter-level positioning of the unmanned vehicle. The technical scheme adopted by the invention is as follows: an automatic mapping and high-precision positioning method for a parking lot, comprising the following steps:
Step 1: acquire image information in front of the vehicle through a forward-facing monocular camera, the image information being the visual markers posted in the garage as captured by the camera; this step covers arranging and calibrating the visual sensor and laying out the visual markers:
Step 1.1: mount one monocular camera between the front windshield and the rear-view mirror of the vehicle;
Step 1.2: calibrate the intrinsic parameters of the monocular camera with the checkerboard intrinsic-calibration method;
Step 1.3: print the visual markers and paste them on the pillars and walls of the parking lot.
Step 2: identify the four corner points of each rectangular box in the scene from the forward-looking image, apply a homography transformation to the box under a planarity assumption, and recognize it with an optical character recognition (OCR) neural network:
Step 2.1: detect rectangular boxes in the image information and extract their corner points;
Step 2.2: convert each obliquely projected rectangular box into an orthographic view using a homography transformation;
Step 2.3: run the OCR neural network on the orthographic view obtained in step 2.2 and store the recognized text in a text file.
Step 3: read the OCR recognition results, screen the visual markers among all rectangular boxes, and compute the relative pose between the vehicle and each visual marker:
Step 3.1: read the text recognition result of each detected rectangular box; if it matches the known content of a visual marker, judge the box to be that visual marker and assign it a number;
Step 3.2: compute the relative pose between the vehicle and the known visual marker from the corresponding point pairs via Perspective-n-Point (PnP).
Step 4: perform offline closed-loop optimization on repeatedly observed visual markers, using their numbers, to eliminate accumulated noise.
Step 5: fuse and optimize, in real time through an extended Kalman filter, the vehicle position solved from the visual markers and the vehicle displacement observed by inertial navigation, to realize optimized mapping.
Step 6: optimize the mapping result repeatedly with a graph-optimization method to improve mapping accuracy:
Step 6.1: build a graph-optimization model from the observation data, uniformly sample the nodes and edges of the constructed graph, and perform global graph optimization on the sampled model;
Step 6.2: perform local graph optimization on the original model, taking the optimized nodes as reference.
Step 7: store the mapping result and generate a planar map of the parking lot; during online positioning, use the generated map to identify the visual markers and optimize the current vehicle position in real time, achieving decimeter-level high-precision positioning.
The beneficial effects of the invention are: (1) using a low-cost visual sensor and inertial navigation device together with visual markers, it provides high-precision real-time positioning in environments with sparse visual features and no GNSS signal, such as parking lots; (2) it uses low-cost sensors and a small number of paper visual markers, so it is highly practical; (3) the automatically constructed positioning map simultaneously supports the positioning needs of autonomous unmanned vehicles and the navigation needs of people in the parking lot, and accords with human cognition of maps; (4) the speed of positioning and mapping meets real-time requirements in practical applications, and the navigation map can be iteratively optimized while in use, eventually converging to a stable state.
Drawings
FIG. 1 is an image obtained by the forward-facing monocular camera in the parking lot automatic mapping and high-precision positioning method of the present invention;
FIG. 2 shows the effect of the visual-marker arrangement in the method of the present invention;
FIG. 3 is a schematic diagram of the correspondence between visual markers in the monocular camera image and visual markers in the three-dimensional space coordinate system in the method of the present invention;
FIG. 4 is an example of a visual-marker mapping result of the method of the present invention.
Glossary: GNSS: Global Navigation Satellite System; SLAM: Simultaneous Localization and Mapping; PnP: Perspective-n-Point; DB: Differentiable Binarization; CRNN: Convolutional Recurrent Neural Network.
Detailed Description
Based on the technical scheme above and with reference to the accompanying drawings, the technical scheme of the invention is described in more detail. The automatic mapping and high-precision positioning method for a parking lot comprises the following steps:
Step 1: acquire image information in front of the vehicle through a forward-facing monocular camera, the image information being the visual markers posted in the garage as captured by the camera; this step covers arranging and calibrating the visual sensor and laying out the visual markers:
Step 1.1: mount one monocular camera between the front windshield and the rear-view mirror of the vehicle;
Step 1.2: calibrate the intrinsic parameters of the monocular camera with the checkerboard intrinsic-calibration method;
Step 1.3: print the visual markers and paste them on the pillars and walls of the parking lot.
Step 2: identify the four corner points of each rectangular box in the scene from the forward-looking image, apply a homography transformation to the box under a planarity assumption, and recognize it with an optical character recognition (OCR) neural network:
Step 2.1: detect rectangular boxes in the image information and extract their corner points;
Step 2.2: convert each obliquely projected rectangular box into an orthographic view using a homography transformation;
Step 2.3: run the OCR neural network on the orthographic view obtained in step 2.2 and store the recognized text in a text file.
Step 3: read the OCR recognition results, screen the visual markers among all rectangular boxes, and compute the relative pose between the vehicle and each visual marker:
Step 3.1: read the text recognition result of each detected rectangular box; if it matches the known content of a visual marker, judge the box to be that visual marker and assign it a number;
Step 3.2: compute the relative pose between the vehicle and the known visual marker from the corresponding point pairs via Perspective-n-Point (PnP).
Step 4: perform offline closed-loop optimization on repeatedly observed visual markers, using their numbers, to eliminate accumulated noise.
Step 5: fuse and optimize, in real time through an extended Kalman filter, the vehicle position solved from the visual markers and the vehicle displacement observed by inertial navigation, to realize optimized mapping.
Step 6: optimize the mapping result repeatedly with a graph-optimization method to improve mapping accuracy:
Step 6.1: build a graph-optimization model from the observation data, uniformly sample the nodes and edges of the constructed graph, and perform global graph optimization on the sampled model;
Step 6.2: perform local graph optimization on the original model, taking the optimized nodes as reference.
Step 7: store the mapping result and generate a planar map of the parking lot; during online positioning, use the generated map to identify the visual markers and optimize the current vehicle position in real time, achieving decimeter-level high-precision positioning.
Preferably, step 2.1 is specifically:
Because the visual marker has black and white borders, searching for the marker reduces to searching for quadrilaterals whose four border regions are darker inside than outside. The approach is similar to the basic method of the AprilTag detector: the gradient direction and magnitude of every pixel in the image are computed, and pixels with similar gradient direction and magnitude are clustered into candidate line segments.
The clustering algorithm is graph-based: create a graph in which each node represents a pixel; add edges between adjacent pixels, with edge weight equal to the difference of the two pixels' gradient directions; for each edge, test whether the two pixels it connects should be merged. Given a pixel cluster n, denote the range of its gradient directions by D(n) and the range of its magnitudes by M(n); D(n) and M(n) are scalars, the difference between the maximum and minimum of the gradient directions and magnitudes, respectively. Two clusters n and m are merged if both of the following conditions hold:

D(n∪m) ≤ min(D(n), D(m)) + K_D / |n∪m|
M(n∪m) ≤ min(M(n), M(m)) + K_M / |n∪m|

where K_D and K_M are user-set parameters. Once clustering is complete, each pixel is weighted by its gradient magnitude using traditional least squares, and the pixels of each cluster are fitted into a line segment; each segment is then oriented so that the dark side is on the left and the bright side on the right. After the segments are found, rectangular-box detection begins as a recursive depth-first search of depth 4: at the first level, each segment in turn serves as the starting segment; at the second to fourth levels, segments close enough to the endpoint of the previous segment are searched, each segment defaulting to counterclockwise direction. Adjusting the "close enough" threshold increases robustness to gaps in the closed region and to segmentation errors.
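The following is a minimal Python sketch of the graph-based gradient clustering described above, not the reference implementation: the union-find forest, the Sobel gradients, and the values of the user-set parameters K_D and K_M are illustrative assumptions, and angle wrap-around in the gradient direction is ignored for brevity.

import numpy as np
import cv2

K_D, K_M = 100.0, 1200.0  # user-set merge parameters (illustrative values only)

def cluster_gradients(gray):
    # per-pixel gradient direction and magnitude
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    direction = np.arctan2(gy, gx).ravel()
    magnitude = np.hypot(gx, gy).ravel()

    n = direction.size
    parent = np.arange(n)                            # union-find forest over pixels
    size = np.ones(n, dtype=np.int64)
    d_lo, d_hi = direction.copy(), direction.copy()  # per-cluster direction range
    m_lo, m_hi = magnitude.copy(), magnitude.copy()  # per-cluster magnitude range

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # edges between 4-connected neighbours, weighted by gradient-direction difference
    h, w = gray.shape
    edges = []
    for y in range(h):
        for x in range(w):
            a = y * w + x
            if x + 1 < w:
                edges.append((abs(direction[a] - direction[a + 1]), a, a + 1))
            if y + 1 < h:
                edges.append((abs(direction[a] - direction[a + w]), a, a + w))
    edges.sort()                                     # cheapest merges first

    for _, a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        sz = size[ra] + size[rb]
        new_d = max(d_hi[ra], d_hi[rb]) - min(d_lo[ra], d_lo[rb])
        new_m = max(m_hi[ra], m_hi[rb]) - min(m_lo[ra], m_lo[rb])
        # the two merge conditions on D(.) and M(.) from the text
        if (new_d <= min(d_hi[ra] - d_lo[ra], d_hi[rb] - d_lo[rb]) + K_D / sz
                and new_m <= min(m_lo[ra] * 0 + m_hi[ra] - m_lo[ra], m_hi[rb] - m_lo[rb]) + K_M / sz):
            parent[rb] = ra
            size[ra] = sz
            d_lo[ra], d_hi[ra] = min(d_lo[ra], d_lo[rb]), max(d_hi[ra], d_hi[rb])
            m_lo[ra], m_hi[ra] = min(m_lo[ra], m_lo[rb]), max(m_hi[ra], m_hi[rb])

    # final cluster label per pixel; a line segment is then fitted per cluster
    return np.array([find(i) for i in range(n)]).reshape(h, w)

Processing edges in order of increasing direction difference mirrors the cheapest-merge-first behaviour of the AprilTag-style detector.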
Preferably, step 2.2 is specifically:
Because of the shooting angle of the forward-facing monocular camera, the visual marker in the image is actually seen from an oblique angle; converting this oblique view into a frontal view greatly improves the accuracy of the text detection and recognition network. A perspective transformation is therefore used to project the image onto a new viewing plane: the pixel content enclosed by the four corner points of the rectangular box identified in the previous step is transformed into a square picture and stored, which facilitates the text recognition of the next step. The transformation formula is:

[x', y', w']ᵀ = A · [u, v, w]ᵀ

where A is the 3×3 perspective transformation matrix, the right side of the equation is the source point to be moved, and the left side is the transformed target point. Since the result of the perspective transformation is still an image plane:

x = x' / w',  y = y' / w'

As above, each pair of corresponding points provides 2 equations, so the four corner points of the rectangular box provide 8 equations; setting a_33 = 1, the remaining 8 unknowns of the perspective transformation matrix A can be solved, thereby transforming the obliquely viewed visual marker into a frontal view.
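A minimal sketch of this rectification step, assuming the four detected corners are supplied in top-left, top-right, bottom-right, bottom-left order; cv2.getPerspectiveTransform solves the same 8-unknown system with a_33 = 1 described above, and the 256-pixel output size is an arbitrary illustrative choice.

import numpy as np
import cv2

def rectify_marker(image, corners, out_size=256):
    # corners: oblique-view corner points of the rectangular box, shape 4x2
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_size - 1, 0],
                      [out_size - 1, out_size - 1], [0, out_size - 1]])
    A = cv2.getPerspectiveTransform(src, dst)      # 3x3 perspective matrix A
    return cv2.warpPerspective(image, A, (out_size, out_size))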
Preferably, step 2.3 is specifically:
For text detection and recognition, a PP-OCR system is used, with Differentiable Binarization (DB) as the text detector and a Convolutional Recurrent Neural Network (CRNN) as the text recognizer; the input is the homography-rectified image from the previous step, and the output is a text file containing the recognition result.
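As one possible realization, the PaddleOCR toolkit ships the PP-OCR pipeline (DB detection plus CRNN recognition) named above. The sketch below assumes PaddleOCR 2.x, whose result layout may differ in other versions; the function and file names are illustrative.

from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="ch")  # PP-OCR: DB text detector + CRNN text recognizer

def recognize_marker(rectified_img, out_path="marker_text.txt"):
    # append each recognized text region and its confidence to a text file
    result = ocr.ocr(rectified_img)
    with open(out_path, "a", encoding="utf-8") as f:
        for line in result[0] or []:    # each line: [box, (text, confidence)]
            text, conf = line[1]
            f.write(f"{text}\t{conf:.3f}\n")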
Preferably, step 3.2 is specifically:
After the corner coordinates of a visual marker are detected, since the four corner points in the image lie on one plane in the world coordinate system, an equation can be written for each corner point:

R · K⁻¹ · x + t = X

where K is the camera intrinsic matrix; x is the coordinate of the corner point in the image coordinate system; R and t are the camera extrinsics at that moment, representing rotation and translation respectively; and X is the coordinate of the corner point in three-dimensional space.
Because the positional relationship among the four corner points is known, a three-dimensional rectangular coordinate system is established accordingly, with the marker center as the origin, the x axis pointing right along the marker edge, the y axis pointing up along the marker edge, and the z axis perpendicular to the marker plane and pointing inward. In this coordinate system X is known; moreover, x is determined by detection and K is obtained from camera calibration, and R and t together have 6 degrees of freedom (DOF), so R and t can be solved iteratively from just three or more points.
Applying the Rodrigues transformation to R yields the angle a between the line from the monocular camera to the marker center and the vehicle heading, and the 2-norm of t gives the distance d from the camera to the marker center. The visual marker is then constructed in the vehicle-body coordinate system with coordinates:

x = sin(a) · d
y = cos(a) · d
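A minimal sketch of this PnP step with OpenCV, assuming a square marker of known side length s (0.4 m here is an illustrative value) and image corners ordered top-left, top-right, bottom-right, bottom-left; unlike the text above, the bearing a is taken from the translation vector, a common equivalent way to obtain the direction to the marker center.

import numpy as np
import cv2

def marker_in_body_frame(img_corners, K, dist_coeffs, s=0.4):
    half = s / 2.0
    # 3D corner coordinates X in the marker-centred coordinate system defined above
    obj = np.float32([[-half,  half, 0], [ half,  half, 0],
                      [ half, -half, 0], [-half, -half, 0]])
    ok, rvec, t = cv2.solvePnP(obj, np.float32(img_corners), K, dist_coeffs)
    if not ok:
        return None
    a = np.arctan2(t[0, 0], t[2, 0])   # angle between camera axis and marker centre
    d = np.linalg.norm(t)              # 2-norm of t: distance to marker centre
    return np.sin(a) * d, np.cos(a) * d   # x, y in the vehicle-body frame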
preferentially, step 6 is specifically:
the graph optimization method is characterized in that an optimization graph model is established in a graph mode, the graph is composed of nodes and edges, and the nodes refer to positions where vehicles are located and positions where visual marks are located at each moment; the side refers to an estimation result based on inertial navigation between two moments or a relative coordinate difference between a vehicle and a visual mark at each moment; and when a new visual mark or vehicle displacement is observed, adding a new node and an edge in the graph model, listing an edge error equation according to the relationship between the edge and the node in the graph model after all data are added, and performing iteration optimization by using a Gauss-Newton method.
Preferably, the graph model is described by the following error equations:

F(x) = Σ_{k∈C} e_k(x_k, z_k)ᵀ · Ω_k · e_k(x_k, z_k)
x* = argmin_x F(x)

wherein:
k denotes any node in the graph model; a node is either a visual marker or the vehicle;
C denotes the set of all nodes in the graph model;
x_k denotes the position information of the k-th node: if the node is a visual marker, it stores the marker's position coordinates; if the node is the vehicle, it stores the vehicle's position coordinates and heading;
z_k denotes the position information of the k-th node obtained from the observations related to it;
Ω_k denotes the covariance matrix obtained from the observations related to the k-th node;
e_k(x_k, z_k) denotes the error function of x_k and z_k;
F(x) is the error function of the entire graph model, and x* denotes its globally optimal solution.
Preferably, to find the globally optimal solution x*, the above error equation is solved iteratively with the Gauss-Newton method.
First, apply a first-order Taylor expansion to F(x):

F(x̆ + Δx) = Σ_{k∈C} e_k(x̆ + Δx)ᵀ · Ω_k · e_k(x̆ + Δx)
e_k(x̆ + Δx) ≈ e_k(x̆) + J_k · Δx
F(x̆ + Δx) ≈ c + 2 bᵀ · Δx + Δxᵀ · H · Δx

wherein:
c and b are coefficient terms in the first-order Taylor expansion of F(x);
x̆ denotes the initial value of the node position information;
Δx denotes the correction value of x;
J_k denotes the Jacobian matrix obtained by substituting the initial value x̆ into e_k.
Then solve the Gauss-Newton equation

H · Δx = g

whose coefficient terms g and H are

g = -Σ_{k∈C} J_kᵀ · Ω_k · e_k(x̆)
H = Σ_{k∈C} J_kᵀ · Ω_k · J_k

to obtain a new globally optimal solution x* = x̆ + Δx, which is substituted into F(x) as the initial value for the next iteration.
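A minimal generic Gauss-Newton loop over the stacked error terms, assuming each edge k supplies callables err(x) and jac(x) and an information matrix omega (hypothetical interface names, not the patent's API); it accumulates H and g exactly as above and applies the increment until convergence.

import numpy as np

def gauss_newton(x0, edges, iters=20, tol=1e-8):
    # edges: list of (err, jac, omega) tuples, one per edge k
    x = x0.copy()
    for _ in range(iters):
        H = np.zeros((x.size, x.size))
        g = np.zeros(x.size)
        for err, jac, omega in edges:
            e, J = err(x), jac(x)
            H += J.T @ omega @ J          # H = sum_k J_k^T Omega_k J_k
            g += -(J.T @ omega @ e)       # g = -sum_k J_k^T Omega_k e_k
        dx = np.linalg.solve(H, g)        # Gauss-Newton equation H dx = g
        x = x + dx                        # x* = x_init + dx
        if np.linalg.norm(dx) < tol:
            break
    return x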
Preferably, the visual-marker observations and the vehicle-displacement observations obtained at each moment both contain errors, and the vehicle position is the unknown quantity to be solved; the problem is described with the extended Kalman filter equations:

x̂_{k|k-1} = f(x̂_{k-1|k-1}, u_{k-1})
P_{k|k-1} = F_{k-1} · P_{k-1|k-1} · F_{k-1}ᵀ + Q_{k-1}
ỹ_k = z_k - h(x̂_{k|k-1})
S_k = H_k · P_{k|k-1} · H_kᵀ + R_k
K_k = P_{k|k-1} · H_kᵀ · S_k⁻¹
x̂_{k|k} = x̂_{k|k-1} + K_k · ỹ_k
P_{k|k} = (I - K_k · H_k) · P_{k|k-1}

wherein:
k is the current moment and k-1 the previous moment;
x̂_{k|k-1} is the unknown-state matrix at the current moment predicted from the prior information of the previous moment, containing the vehicle position, heading, and visual-marker positions;
x̂_{k-1|k-1} is the unknown-state matrix of the previous moment obtained in the previous iteration;
u_{k-1} is the control vector, the inertial navigation data of the previous moment;
the function f() expresses the effect of the previous state x̂_{k-1|k-1} and the control u_{k-1} on the current state;
F_{k-1} is the Jacobian matrix of f();
Q_{k-1} is the noise contained in the empirical model;
z_k is the observation matrix at the current moment, containing the vehicle's observations of the visual markers;
ỹ_k is the difference between the current observation z_k and the prediction h(x̂_{k|k-1});
the function h() is the conversion relation between the observation matrix and the unknown-state matrix;
S_k is the covariance matrix corresponding to ỹ_k;
H_k is the Jacobian matrix of h();
R_k is the noise contained in the observations;
K_k is the Kalman gain at the current moment, measuring the reliability of the observations;
x̂_{k|k} is the optimized state estimate at the current moment;
P_{k-1|k-1}, P_{k|k-1}, and P_{k|k} are the covariance matrices corresponding one-to-one to x̂_{k-1|k-1}, x̂_{k|k-1}, and x̂_{k|k};
I is the identity matrix of the same order as the covariance matrix.
In this way the current vehicle position is obtained incrementally, and the positions of the visual markers in the map are updated in real time.
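A minimal sketch of one predict/update cycle implementing the equations above with numpy, assuming the motion model f, observation model h, and their Jacobians F_jac and H_jac are supplied externally (hypothetical interfaces; the state vector stacks vehicle pose and marker positions).

import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    # prediction from the previous state with inertial control input u
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # update with the visual-marker observation z
    y = z - h(x_pred)                     # innovation y_k
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R              # innovation covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain K_k
    x_new = x_pred + K @ y
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new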
Preferably, the parking-lot planar map is stored as an offline map: after the mapping optimization is finished, the position coordinates and attribute information of the visual markers in the map are saved as text, where the attribute information comprises each marker's number, position, and orientation, for use in online positioning.
The offline planar map is used during online positioning: the offline map information is read from the text and the map is initialized; the visual-marker information in the garage is identified with the detection method of steps 2 and 3; online optimization is performed as in step 5; and finally the vehicle's position in the offline map is obtained in real time, with the offline map further refined, achieving decimeter-level positioning.
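A minimal sketch of the offline-map text format, assuming one marker per line with number, position, and orientation fields; the field layout and file name are hypothetical illustrations, not the patent's specified format.

def save_map(markers, path="lot_map.txt"):
    # markers: iterable of dicts with keys: id, x, y, heading
    with open(path, "w", encoding="utf-8") as f:
        for m in markers:
            f.write(f"{m['id']} {m['x']:.3f} {m['y']:.3f} {m['heading']:.3f}\n")

def load_map(path="lot_map.txt"):
    # returns {marker_id: (x, y, heading)} for map initialization
    markers = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            mid, x, y, heading = line.split()
            markers[mid] = (float(x), float(y), float(heading))
    return markers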
Although embodiments of the present invention have been described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions, and alterations can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (10)

1. An automatic mapping and high-precision positioning method for a parking lot, characterized by comprising the following steps:
Step 1: acquire image information in front of the vehicle through a forward-facing monocular camera, the image information being the visual markers posted in the garage as captured by the camera; this step covers arranging and calibrating the visual sensor and laying out the visual markers:
Step 1.1: mount one monocular camera between the front windshield and the rear-view mirror of the vehicle;
Step 1.2: calibrate the intrinsic parameters of the monocular camera with the checkerboard intrinsic-calibration method;
Step 1.3: print the visual markers and paste them on the pillars and walls of the parking lot;
Step 2: identify the four corner points of each rectangular box in the scene from the forward-looking image, apply a homography transformation to the box under a planarity assumption, and recognize it with an optical character recognition (OCR) neural network:
Step 2.1: detect rectangular boxes in the image information and extract their corner points;
Step 2.2: convert each obliquely projected rectangular box into an orthographic view using a homography transformation;
Step 2.3: run the OCR neural network on the orthographic view obtained in step 2.2 and store the recognized text in a text file;
Step 3: read the OCR recognition results, screen the visual markers among all rectangular boxes, and compute the relative pose between the vehicle and each visual marker:
Step 3.1: read the text recognition result of each detected rectangular box; if it matches the known content of a visual marker, judge the box to be that visual marker and assign it a number;
Step 3.2: compute the relative pose between the vehicle and the known visual marker from the corresponding point pairs via Perspective-n-Point (PnP);
Step 4: perform offline closed-loop optimization on repeatedly observed visual markers, using their numbers, to eliminate accumulated noise;
Step 5: fuse and optimize, in real time through an extended Kalman filter, the vehicle position solved from the visual markers and the vehicle displacement observed by inertial navigation, to realize optimized mapping;
Step 6: optimize the mapping result repeatedly with a graph-optimization method to improve mapping accuracy:
Step 6.1: build a graph-optimization model from the observation data, uniformly sample the nodes and edges of the constructed graph, and perform global graph optimization on the sampled model;
Step 6.2: perform local graph optimization on the original model, taking the optimized nodes as reference;
Step 7: store the mapping result and generate a planar map of the parking lot; during online positioning, use the generated map to identify the visual markers and optimize the current vehicle position in real time, achieving decimeter-level high-precision positioning.
2. The automatic mapping and high-precision positioning method for a parking lot according to claim 1, wherein step 2.1 is specifically:
Because the visual marker has black and white borders, searching for the marker reduces to searching for quadrilaterals whose four border regions are darker inside than outside. The approach is similar to the basic method of the AprilTag detector: the gradient direction and magnitude of every pixel in the image are computed, and pixels with similar gradient direction and magnitude are clustered into candidate line segments.
The clustering algorithm is graph-based: create a graph in which each node represents a pixel; add edges between adjacent pixels, with edge weight equal to the difference of the two pixels' gradient directions; for each edge, test whether the two pixels it connects should be merged. Given a pixel cluster n, denote the range of its gradient directions by D(n) and the range of its magnitudes by M(n); D(n) and M(n) are scalars, the difference between the maximum and minimum of the gradient directions and magnitudes, respectively. Two clusters n and m are merged if both of the following conditions hold:
D(n∪m) ≤ min(D(n), D(m)) + K_D / |n∪m|
M(n∪m) ≤ min(M(n), M(m)) + K_M / |n∪m|
where K_D and K_M are user-set parameters. Once clustering is complete, each pixel is weighted by its gradient magnitude using traditional least squares, and the pixels of each cluster are fitted into a line segment; each segment is then oriented so that the dark side is on the left and the bright side on the right. After the segments are found, rectangular-box detection begins as a recursive depth-first search of depth 4: at the first level, each segment in turn serves as the starting segment; at the second to fourth levels, segments close enough to the endpoint of the previous segment are searched, each segment defaulting to counterclockwise direction; adjusting the "close enough" threshold increases robustness to gaps in the closed region and to segmentation errors.
3. The automatic mapping and high-precision positioning method for a parking lot according to claim 1, wherein step 2.2 is specifically:
Because of the shooting angle of the forward-facing monocular camera, the visual marker in the image is actually seen from an oblique angle; converting this oblique view into a frontal view greatly improves the accuracy of the text detection and recognition network. A perspective transformation is therefore used to project the image onto a new viewing plane: the pixel content enclosed by the four corner points of the rectangular box identified in the previous step is transformed into a square picture and stored, which facilitates the text recognition of the next step. The transformation formula is:
[x', y', w']ᵀ = A · [u, v, w]ᵀ
where A is the 3×3 perspective transformation matrix, the right side of the equation is the source point to be moved, and the left side is the transformed target point. Since the result of the perspective transformation is still an image plane:
x = x' / w',  y = y' / w'
As above, each pair of corresponding points provides 2 equations, so the four corner points of the rectangular box provide 8 equations; setting a_33 = 1, the remaining 8 unknowns of the perspective transformation matrix A can be solved, thereby transforming the obliquely viewed visual marker into a frontal view.
4. The automatic mapping and high-precision positioning method for a parking lot according to claim 1, wherein step 2.3 is specifically:
For text detection and recognition, a PP-OCR system is used, with Differentiable Binarization (DB) as the text detector and a Convolutional Recurrent Neural Network (CRNN) as the text recognizer; the input is the homography-rectified image from the previous step, and the output is a text file containing the recognition result.
5. The automatic mapping and high-precision positioning method for a parking lot according to claim 1, wherein step 3.2 is specifically:
After the corner coordinates of a visual marker are detected, since the four corner points in the image lie on one plane in the world coordinate system, an equation can be written for each corner point:
R · K⁻¹ · x + t = X
where K is the camera intrinsic matrix; x is the coordinate of the corner point in the image coordinate system; R and t are the camera extrinsics at that moment, representing rotation and translation respectively; and X is the coordinate of the corner point in three-dimensional space.
Because the positional relationship among the four corner points is known, a three-dimensional rectangular coordinate system is established accordingly, with the marker center as the origin, the x axis pointing right along the marker edge, the y axis pointing up along the marker edge, and the z axis perpendicular to the marker plane and pointing inward. In this coordinate system X is known; moreover, x is determined by detection and K is obtained from camera calibration, and R and t together have 6 degrees of freedom (DOF), so R and t can be solved iteratively from just three or more points.
Applying the Rodrigues transformation to R yields the angle a between the line from the monocular camera to the marker center and the vehicle heading, and the 2-norm of t gives the distance d from the camera to the marker center. The visual marker is then constructed in the vehicle-body coordinate system with coordinates:
x = sin(a) · d
y = cos(a) · d
6. The automatic mapping and high-precision positioning method for a parking lot according to claim 1, wherein step 6 is specifically:
The graph-optimization method builds an optimization model as a graph of nodes and edges. A node is the vehicle pose at a given moment or the position of a visual marker; an edge is either the inertial-navigation-based motion estimate between two moments or the relative coordinate difference between the vehicle and a visual marker at a given moment. Whenever a new visual marker or a new vehicle displacement is observed, a new node and edge are added to the graph model; after all data have been added, the edge error equations are written according to the relations between edges and nodes, and iterative optimization is performed with the Gauss-Newton method.
7. The automatic mapping and high-precision positioning method for a parking lot according to claim 6, wherein the graph model is described by the following error equations:
F(x) = Σ_{k∈C} e_k(x_k, z_k)ᵀ · Ω_k · e_k(x_k, z_k)
x* = argmin_x F(x)
wherein:
k denotes any node in the graph model; a node is either a visual marker or the vehicle;
C denotes the set of all nodes in the graph model;
x_k denotes the position information of the k-th node: if the node is a visual marker, it stores the marker's position coordinates; if the node is the vehicle, it stores the vehicle's position coordinates and heading;
z_k denotes the position information of the k-th node obtained from the observations related to it;
Ω_k denotes the covariance matrix obtained from the observations related to the k-th node;
e_k(x_k, z_k) denotes the error function of x_k and z_k;
F(x) is the error function of the entire graph model, and x* denotes its globally optimal solution.
8. The automatic mapping and high-precision positioning method for a parking lot according to claim 7, wherein, to find the globally optimal solution x*, the above error equation is solved iteratively with the Gauss-Newton method:
First, apply a first-order Taylor expansion to F(x):
F(x̆ + Δx) = Σ_{k∈C} e_k(x̆ + Δx)ᵀ · Ω_k · e_k(x̆ + Δx)
e_k(x̆ + Δx) ≈ e_k(x̆) + J_k · Δx
F(x̆ + Δx) ≈ c + 2 bᵀ · Δx + Δxᵀ · H · Δx
wherein:
c and b are coefficient terms in the first-order Taylor expansion of F(x);
x̆ denotes the initial value of the node position information;
Δx denotes the correction value of x;
J_k denotes the Jacobian matrix obtained by substituting the initial value x̆ into e_k.
Then solve the Gauss-Newton equation
H · Δx = g
whose coefficient terms g and H are
g = -Σ_{k∈C} J_kᵀ · Ω_k · e_k(x̆)
H = Σ_{k∈C} J_kᵀ · Ω_k · J_k
to obtain a new globally optimal solution x* = x̆ + Δx, which is substituted into F(x) as the initial value for the next iteration.
9. The automatic mapping and high-precision positioning method for a parking lot according to claim 1, wherein, in step 5, the visual-marker observations and the vehicle-displacement observations obtained at each moment both contain errors and the vehicle position is the unknown quantity to be solved; the problem is described with the extended Kalman filter equations:
x̂_{k|k-1} = f(x̂_{k-1|k-1}, u_{k-1})
P_{k|k-1} = F_{k-1} · P_{k-1|k-1} · F_{k-1}ᵀ + Q_{k-1}
ỹ_k = z_k - h(x̂_{k|k-1})
S_k = H_k · P_{k|k-1} · H_kᵀ + R_k
K_k = P_{k|k-1} · H_kᵀ · S_k⁻¹
x̂_{k|k} = x̂_{k|k-1} + K_k · ỹ_k
P_{k|k} = (I - K_k · H_k) · P_{k|k-1}
wherein:
k is the current moment and k-1 the previous moment;
x̂_{k|k-1} is the unknown-state matrix at the current moment predicted from the prior information of the previous moment, containing the vehicle position, heading, and visual-marker positions;
x̂_{k-1|k-1} is the unknown-state matrix of the previous moment obtained in the previous iteration;
u_{k-1} is the control vector, the inertial navigation data of the previous moment;
the function f() expresses the effect of the previous state x̂_{k-1|k-1} and the control u_{k-1} on the current state;
F_{k-1} is the Jacobian matrix of f();
Q_{k-1} is the noise contained in the empirical model;
z_k is the observation matrix at the current moment, containing the vehicle's observations of the visual markers;
ỹ_k is the difference between the current observation z_k and the prediction h(x̂_{k|k-1});
the function h() is the conversion relation between the observation matrix and the unknown-state matrix;
S_k is the covariance matrix corresponding to ỹ_k;
H_k is the Jacobian matrix of h();
R_k is the noise contained in the observations;
K_k is the Kalman gain at the current moment, measuring the reliability of the observations;
x̂_{k|k} is the optimized state estimate at the current moment;
P_{k-1|k-1}, P_{k|k-1}, and P_{k|k} are the covariance matrices corresponding one-to-one to x̂_{k-1|k-1}, x̂_{k|k-1}, and x̂_{k|k};
I is the identity matrix of the same order as the covariance matrix.
In this way the current vehicle position is obtained incrementally, and the positions of the visual markers in the map are updated in real time.
10. The automatic mapping and high-precision positioning method for a parking lot according to claim 1, wherein step 7 is specifically:
The parking-lot planar map is stored as an offline map: after the mapping optimization is finished, the position coordinates and attribute information of the visual markers in the map are saved as text, where the attribute information comprises each marker's number, position, and orientation, for use in online positioning;
the offline planar map is used during online positioning: the offline map information is read from the text and the map is initialized; the visual-marker information in the garage is identified with the detection method of steps 2 and 3; online optimization is performed as in step 5; and finally the vehicle's position in the offline map is obtained in real time, with the offline map further refined, achieving decimeter-level positioning.
CN202110581367.7A, filed 2021-05-27, priority 2021-05-27: Automatic mapping and high-precision positioning method for parking lot. Active; granted as CN113220818B.

Priority Applications (1)

Application number: CN202110581367.7A; priority date: 2021-05-27; filing date: 2021-05-27; title: Automatic mapping and high-precision positioning method for parking lot (granted as CN113220818B)

Applications Claiming Priority (1)

Application number: CN202110581367.7A; priority date: 2021-05-27; filing date: 2021-05-27; title: Automatic mapping and high-precision positioning method for parking lot (granted as CN113220818B)

Publications (2)

Publication Number Publication Date
CN113220818A 2021-08-06
CN113220818B 2023-04-07

Family

Family ID: 77099107

Family Applications (1)

Application number: CN202110581367.7A; status: Active; priority date: 2021-05-27; filing date: 2021-05-27; title: Automatic mapping and high-precision positioning method for parking lot (granted as CN113220818B)

Country Status (1)

CN: CN113220818B

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703443A (en) * 2021-08-12 2021-11-26 北京科技大学 Unmanned vehicle autonomous positioning and environment exploration method independent of GNSS
CN113884098A (en) * 2021-10-15 2022-01-04 上海师范大学 Iterative Kalman filtering positioning method based on specific model
CN114111774A (en) * 2021-12-06 2022-03-01 纵目科技(上海)股份有限公司 Vehicle positioning method, system, device and computer readable storage medium
CN114495109A (en) * 2022-01-24 2022-05-13 山东大学 Grabbing robot based on matching of target and scene characters and grabbing method and system
CN118262336A (en) * 2024-05-30 2024-06-28 南昌智能新能源汽车研究院 Indoor parking lot positioning method and system based on visual SLAM

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108763287A (en) * 2018-04-13 2018-11-06 同济大学 On a large scale can traffic areas driving map construction method and its unmanned application process
CN108983769A (en) * 2018-06-22 2018-12-11 理光软件研究所(北京)有限公司 Immediately the optimization method and device of positioning and map structuring
US20190088011A1 (en) * 2017-09-20 2019-03-21 Boe Technology Group Co., Ltd. Method, device, terminal and system for visualization of vehicle's blind spot and a vehicle
CN110490900A (en) * 2019-07-12 2019-11-22 中国科学技术大学 Binocular visual positioning method and system under dynamic environment
CN110517216A (en) * 2019-08-30 2019-11-29 的卢技术有限公司 A kind of SLAM fusion method and its system based on polymorphic type camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190088011A1 (en) * 2017-09-20 2019-03-21 Boe Technology Group Co., Ltd. Method, device, terminal and system for visualization of vehicle's blind spot and a vehicle
CN108763287A (en) * 2018-04-13 2018-11-06 同济大学 On a large scale can traffic areas driving map construction method and its unmanned application process
CN108983769A (en) * 2018-06-22 2018-12-11 理光软件研究所(北京)有限公司 Immediately the optimization method and device of positioning and map structuring
CN110490900A (en) * 2019-07-12 2019-11-22 中国科学技术大学 Binocular visual positioning method and system under dynamic environment
CN110517216A (en) * 2019-08-30 2019-11-29 的卢技术有限公司 A kind of SLAM fusion method and its system based on polymorphic type camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YEWEI HUANG et al.: "Vision-based semantic mapping and localization for autonomous indoor parking", 2018 IEEE Intelligent Vehicles Symposium *
林泽: "Research on visual SLAM technology for automatic parking environments", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703443A (en) * 2021-08-12 2021-11-26 北京科技大学 Unmanned vehicle autonomous positioning and environment exploration method independent of GNSS
CN113703443B (en) * 2021-08-12 2023-10-13 北京科技大学 GNSS independent unmanned vehicle autonomous positioning and environment exploration method
CN113884098A (en) * 2021-10-15 2022-01-04 上海师范大学 Iterative Kalman filtering positioning method based on specific model
CN113884098B (en) * 2021-10-15 2024-01-23 上海师范大学 Iterative Kalman filtering positioning method based on materialization model
CN114111774A (en) * 2021-12-06 2022-03-01 纵目科技(上海)股份有限公司 Vehicle positioning method, system, device and computer readable storage medium
CN114111774B (en) * 2021-12-06 2024-04-16 纵目科技(上海)股份有限公司 Vehicle positioning method, system, equipment and computer readable storage medium
CN114495109A (en) * 2022-01-24 2022-05-13 山东大学 Grabbing robot based on matching of target and scene characters and grabbing method and system
CN118262336A (en) * 2024-05-30 2024-06-28 南昌智能新能源汽车研究院 Indoor parking lot positioning method and system based on visual SLAM

Also Published As

Publication number Publication date
CN113220818B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN113220818B (en) Automatic mapping and high-precision positioning method for parking lot
CN101598556B (en) Unmanned aerial vehicle vision/inertia integrated navigation method in unknown environment
CN107180215A (en) Figure and high-precision locating method are built in parking lot based on warehouse compartment and Quick Response Code automatically
CN111652179A (en) Semantic high-precision map construction and positioning method based on dotted line feature fusion laser
CN113903011B (en) Semantic map construction and positioning method suitable for indoor parking lot
Senlet et al. Satellite image based precise robot localization on sidewalks
CN106595659A (en) Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN111862673A (en) Parking lot vehicle self-positioning and map construction method based on top view
Suger et al. Global outer-urban navigation with openstreetmap
CN114693787B (en) Parking garage map building and positioning method, system and vehicle
CN115407357A (en) Low-beam laser radar-IMU-RTK positioning mapping algorithm based on large scene
CN112556719A (en) Visual inertial odometer implementation method based on CNN-EKF
CN115574816B (en) Bionic vision multi-source information intelligent perception unmanned platform
CN114019552A (en) Bayesian multi-sensor error constraint-based location reliability optimization method
CN114964276B (en) Dynamic vision SLAM method integrating inertial navigation
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
Rehder et al. Submap-based SLAM for road markings
Li et al. Aerial-triangulation aided boresight calibration for a low-cost UAV-LiDAR system
CN115409965A (en) Mining area map automatic generation method for unstructured roads
CN113554705A (en) Robust positioning method for laser radar in changing scene
CN114199250A (en) Scene matching navigation method and device based on convolutional neural network
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
Hu et al. Efficient Visual-Inertial navigation with point-plane map
CN117470259A (en) Primary and secondary type space-ground cooperative multi-sensor fusion three-dimensional map building system

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant